Multimedia Over Wireless Networks

We have studied the evolution of current 2G networks to future high-capacity 3G networks, but is there a demand for 3G networks? Multimedia over wireless will certainly need higher bandwidth. Suggested multimedia applications range from web browsing, streaming video, videoconferencing, collaborative work, and slide-show presentations to enhanced roadside assistance and downloadable GPS maps for drivers.

In this section we are concerned mainly with sending video robustly over wireless channels, such as for a videoconferencing application. This application should be prominent on 3G handhelds, since it is a natural extension to voice communication.

Because wireless data transmissions incur the most data loss and distortion, error resilience and error correction become primary concerns. We have thus included brief descriptions of synchronization loss, error-resilient entropy coding, error concealment, and Forward Error Correction (FEC) in this section, although most of these techniques are also applicable to other networks.

A few characteristics of wireless handheld devices are worth keeping in mind when designing multimedia transmission, in particular video transmission. First, both the small size and the battery life of a handheld limit its processing power and memory. Thus, encoding and decoding must have relatively low complexity. Of course, one advantage of the smaller device size is that lower-resolution videos are acceptable, which helps reduce processing time.

Second, due to memory constraints, the way wireless devices are typically used, and billing procedures, real-time communication is likely to be required. Long delays before a video starts to play are either not possible or not acceptable.

Finally, wireless channels suffer much more interference than wired channels, with specific loss patterns that depend on environmental conditions. The bitrate of wireless channels is also much more limited, although 3G bitrates are more suitable for video. This implies that although a lot of bit protection must be applied, coding efficiency has to be maintained as well. Error-resilient coding is important.

3G standards specify that video shall be standards-compliant. Moreover, most companies will concentrate on developing products using standards, in the interest of interoperability of mobiles and networks. The video standards most suitable for use over wireless channels are MPEG-4 and H.263 and its variants, since they have low bitrate requirements.

The 3GPP2 group has defined the following QoS parameters for wireless videoconferencing services. The QoS parameters specified for the wireless part are more stringent than those required for end-to-end transmission. The 3GPP QoS requirements for multimedia transmission are nearly identical.

  1. Synchronization. Video and audio should be synchronized to within 20 msec.
  2. Throughput. The minimum video bitrate to be supported is 32 kbps. Video rates of 128 kbps, 384 kbps, and above should be supported as well.
  3. Delay. The maximum end-to-end transmission delay is defined to be 400 msec.
  4. Jitter. The maximum delay jitter (maximum difference between the average delay and the 95th percentile of the delay distribution) is 200 msec.
  5. Error rate. The videoconferencing system should be able to tolerate a frame error rate of 10^-2 or a bit error rate of 10^-3 for circuit-switched transmission.

In the following, we discuss the vulnerability of a video sequence to bit errors and ways to improve resilience to errors.

Synchronization Loss

A video stream is either packetized and transmitted over a packet-switched channel or transmitted as a continuous bitstream over a circuit-switched channel. In either case, it is obvious that packet loss or bit errors will reduce video quality. If a bit error or packet loss is localized in the video in both space and time, the loss can still be acceptable, since a frame is displayed for only a short period, and a small error might go unnoticed.

However, digital video coding techniques involve variable-length codes, and frames are coded with different prediction and quantization levels. Unfortunately, when a packet containing variable-bit-length data (such as DCT coefficients) is damaged, that error, if unconstrained, will propagate all the way through the stream. This is called loss of decoder synchronization. Even if the decoder can detect the error due to an invalid coded symbol or coefficients out of range, it still cannot establish the next point from which to start decoding.

As we have learned in an earlier chapter, this complete bitstream loss does not happen for videos coded with standardized protocol layers. The Picture layer and the Group of Blocks (GOB) layer or slice headers have synchronization markers that enable decoder resynchronization. For example, the H.263 bitstream has four layers: the Picture layer, GOB layer, Macroblock layer, and Block layer.

The Picture layer starts with a unique 22-bit picture start code (PSC). The longest entropy-coded symbol possible is 13 bits, so the PSC serves as a synchronization marker as well. The GOB layer is provided for synchronization after a few blocks rather than after an entire frame. The group of blocks start code (GBSC) is 17 bits long and also serves as a synchronization marker. The Macroblock and Block layers do not contain unique start codes, as these are deemed too high an overhead.
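To illustrate, the following minimal Python sketch scans a bitstream for these markers. The 17-bit GBSC pattern (sixteen zeros followed by a one) and the 22-bit PSC built on it follow from the bit lengths above, though the exact PSC suffix bits used here are an assumption:

```python
def find_sync_markers(bits: str):
    """Scan a bitstream (string of '0'/'1') for H.263-style sync markers.

    Because no valid run of entropy-coded symbols can emulate sixteen
    consecutive zeros followed by a one, these patterns are unambiguous
    resynchronization points. The five PSC suffix bits below are a
    simplifying assumption for this sketch.
    """
    GBSC = "0" * 16 + "1"        # 17-bit group of blocks start code
    PSC = GBSC + "00000"         # 22-bit picture start code (assumed suffix)

    markers = []
    i = 0
    while (i := bits.find(GBSC, i)) != -1:
        kind = "PSC" if bits.startswith(PSC, i) else "GBSC"
        markers.append((i, kind))
        i += len(GBSC)           # resume scanning after this marker
    return markers

# After an error, the decoder discards bits until the next marker, e.g.:
# find_sync_markers(damaged_bits) -> [(0, 'PSC'), (1843, 'GBSC'), ...]
```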

ITU standards after H.261 (e.g., H.263, H.263+) support a slice-structured mode instead of GOBs (H.263 Annex K), where slices group blocks together according to the blocks' coded bit length rather than the number of blocks. The objective is to space slice headers within a known distance of each other. That way, when a bitstream error looks like a synchronization marker, if the marker is not where a slice header should be, it is discarded, and no false resynchronization occurs.

Since slices need to group an integral number of macroblocks together, and macroblocks are coded using VLCs, it is not possible to have all slices the same size. However, there is a minimum distance after which the next scanned macroblock will be added to a new slice. We know that DC coefficients of macroblocks and motion vectors of macroblocks are differentially coded. Therefore, if a macroblock is damaged, even when the decoder locates the next synchronization marker, it might still be unable to decode the stream correctly, since the differential predictors were lost with the damaged macroblock.

To alleviate the problem, slices also reset spatial prediction parameters; differential coding across slice boundaries is not permitted. The ISO MPEG standards (and H.264 as well) specify slices that are not required to be of similar bit length and so do not protect against false markers well.

Besides synchronization loss, we should note that errors in prediction reference frames cause much more damage to signal quality than errors in frames not used for prediction. That is, a frame error in an I-frame will deteriorate the quality of a video stream more than a frame error in a P- or B-frame. Similarly, if the video is scalable, an error at the base layer will deteriorate the quality of a video stream more than an error at an enhancement layer.

MPEG-4 defines additional error-resilience tools that are useful for coding under noisy and wireless channel conditions. These are in addition to slice coding and Reversible Variable-Length Codes (RVLCs). To further help with synchronization, a data partitioning scheme groups and separates header information, motion vectors, and DCT coefficients into different packets and puts synchronization markers between them.

Additionally, an adaptive intra-frame refresh mode is allowed, where each macroblock can be coded independently of the frame as an inter- or intra-block according to its motion, to assist with error concealment. A faster-moving block requires more frequent refreshing; that is, it is coded in intra mode more often. Synchronization markers are easy to recognize and are particularly well suited to devices with limited processing power, such as cell phones and mobile devices.
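As a sketch of one plausible refresh policy (hypothetical, not taken from any standard), the refresh period of a macroblock could shrink with its average motion magnitude:

```python
import math

def intra_refresh_period(motion_vectors, base_period=132, min_period=4):
    """Choose how often a macroblock is forced to intra mode.

    A hypothetical policy: the refresh period shrinks in proportion to
    the macroblock's average motion magnitude, so fast-moving regions
    are intra-coded (refreshed) more often. base_period and min_period
    are illustrative tuning parameters, not standardized values.
    """
    avg_motion = sum(math.hypot(dx, dy) for dx, dy in motion_vectors) / len(motion_vectors)
    period = int(base_period / (1.0 + avg_motion))
    return max(min_period, period)

# A block whose motion averages 10 pixels/frame is refreshed roughly
# ten times as often as a nearly static one under this policy.
```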

For interactive applications, if a back channel to the encoder is available, a few additional error control techniques become possible, classified as sender-receiver feedback. According to the bandwidth available at any moment, the receiver can ask the sender to lower or increase the video bitrate (transmission rate control), which combats packet loss due to congestion. If the stream is scalable, it can also ask for enhancement layers.

Additionally, Annex N of H.263+ specifies that the receiver can report damage in a reference frame and request that the encoder use a different reference frame for prediction, one the decoder has reconstructed correctly.

The above techniques can be used in wireless real-time video applications such as videoconferencing, since wireless cell communication supports a back channel if necessary. However, it is obviously cheaper not to use one (not using a back channel reduces multiple-access interference in the uplink).

Error Resilient Entropy Coding

The main purpose of GOBs, slices, and synchronization markers is to reestablish decoder synchronization as soon as possible after an error. The slices of H.263+ Annex K achieve better resilience, since they impose further constraints on where the stream can be resynchronized. However, another algorithm, called Error Resilient Entropy Coding (EREC), can achieve synchronization after every single macroblock, without any of the overhead of slice or GOB headers. The algorithm is called EREC because it takes entropy-coded variable-length macroblocks and rearranges them in an error-resilient fashion. In addition, it can provide graceful degradation.

EREC takes a coded bitstream of a few blocks and rearranges them so that the beginnings of all the blocks are a fixed distance apart. Although the blocks can be of any size and any medium we wish to synchronize, the following description refers to macroblocks in video.

Initially, EREC slots (rows) of fixed bit-length are allocated, with total bit-length equal to (or exceeding) the total bit-length of all the macroblocks. The number of slots is equal to the number of macroblocks, except that the macroblocks have varying bit-lengths while the slots have a fixed bit-length (approximately equal to the average bit-length of the macroblocks). The last EREC slot (row) is shorter when the total number of bits does not divide evenly by the number of slots.

Let k be the number of macroblocks (equal to the number of slots), let l be the total bit-length of all the macroblocks, let mbs[ ] be the macroblocks, and let slots[ ] be the EREC slots. The procedure for encoding the macroblocks is sketched below.

The bits of each macroblock are shifted into its corresponding slot until either all the bits of the macroblock have been assigned or the remaining bits do not fit into the slot. The macroblock-to-slot assignment is then shifted down by one, and the procedure repeats until every bit has been placed.

Figure: Example of macroblock encoding using EREC.

Figure: Example of macroblock decoding using EREC.
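To make the procedure concrete, here is a minimal Python sketch of the encoder side, assuming each macroblock arrives as a string of '0'/'1' characters. Slot capacities and the rotating stage schedule follow the description above; the end-of-block detection needed by the decoder is omitted:

```python
def erec_encode(mbs):
    """EREC: pack k variable-length macroblocks into k fixed-length slots.

    mbs: list of bitstrings ('0'/'1' characters), one per entropy-coded
    macroblock. Returns the list of slot bitstrings. The decoder runs
    the same schedule in reverse, relying on an end-of-macroblock test
    (all DCT coefficients decoded, or an end-of-block code).
    """
    k = len(mbs)                         # number of slots = number of macroblocks
    total = sum(len(mb) for mb in mbs)   # l: total bit-length of all macroblocks
    base = total // k
    # Slot capacity ~ average macroblock length; leftover bits make some
    # slots one bit longer, so capacities sum exactly to `total`.
    cap = [base + (1 if j < total % k else 0) for j in range(k)]

    slots = [""] * k
    rest = list(mbs)                     # bits of each macroblock not yet placed
    stage = 0
    while any(rest):                     # EREC guarantees this terminates
        for i in range(k):
            j = (i + stage) % k          # block i offers bits to slot (i+stage) mod k
            free = cap[j] - len(slots[j])
            if free > 0 and rest[i]:
                slots[j] += rest[i][:free]
                rest[i] = rest[i][free:]
        stage += 1
    return slots
```

For example, erec_encode(["1101", "10", "111010"]) packs 12 bits into three 4-bit slots, so the start of every macroblock lies a fixed distance from the previous one in the transmitted stream.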

The decoder side works in reverse, with the additional requirement that it has to detect when a macroblock has been read in full. It accomplishes this by detecting the end of a macroblock, when all DCT coefficients have been decoded (or an end-of-block code is encountered).

The transmission order of the data in the slots is row major: the data in slot 0 is sent first, then slot 1, and so on, left to right. It is easy to see how this technique is resilient to errors. No matter where the damage occurs, even at the beginning of a macroblock, we still know where the next macroblock starts: a fixed distance from the previous one. In this case, no synchronization markers are used, so the GOB layer or slices are not necessary either (although we might still want to restrict the spatial propagation of errors).

When the macroblocks are coded using a data partitioning technique (such as the one for MPEG-4 described in the previous section) together with bitplane partitioning, an error in the bitstream tends to destroy less significant data while the more significant data is received intact. The chance of error propagation is clearly greater for bits at the end of a slot than at its beginning. On average, this also reduces visual deterioration compared with a nonpartitioned encoding, achieving graceful degradation under worsening error conditions.

Error Concealment

Despite all the efforts to minimize occurrences of errors and their significance, errors can still be visually annoying. Error concealment techniques are thus introduced to approximate the lost data on the decoder side.

Many error concealment techniques apply in the spatial, temporal, or frequency domain, or in a combination of them. All the techniques exploit temporally neighboring frames or spatially neighboring macroblocks. The transport stream coder interleaves the video packets, so that in case of a burst packet loss, the errors are not concentrated in one place, and the missing data can be estimated from its neighborhood.

Error concealment is necessary for wireless video communication, since the error rates are higher than for wired channels and might even be higher than can be corrected with a reasonable amount of bit protection. Moreover, the error rate fluctuates more often, depending on mobility and weather conditions. Decoding errors due to missing or corrupted data are more noticeable on devices with limited resolution and small screens. This is especially true if the macroblock size remains large, to achieve encoding efficiency at lower wireless bitrates.

  1. Dealing with lost macroblock(s). A simple and popular technique for concealment can be used when DCT blocks are damaged but the motion vectors are received correctly. The missing block coefficients are estimated from the reference frame, assuming no prediction errors. Since the goal of motion-compensated video is to minimize prediction errors, this is an appropriate assumption. The missing block is hence temporally masked using the block in the reference frame.

    We can achieve even better results if the video is scalable. In that case, we assume that the base layer is received correctly and that it contains the motion vectors and base-layer coefficients that are most important. Then, for a lost macroblock at the enhancement layer, we use the motion vectors from the base layer, substitute the base-layer DCT coefficients for the lost enhancement-layer ones, and decode as usual from there. Since the coefficients being estimated are of lesser importance (such as higher-frequency coefficients), even if the estimation is not very accurate due to prediction errors, the concealment is more effective than in a nonscalable case.

    If motion vector information is damaged as well, this technique can be used only if the motion vectors are estimated using another concealment technique (discussed next). The estimation of the motion vector has to be good, or the visual quality of the video will suffer. To apply this technique to intra-frames, some standards, such as MPEG-2, also allow sending motion vectors for intra-coded frames (i.e., treating them as intra- as well as inter-frames). These motion vectors are discarded if the block has no error.

  2. Combining temporal, spatial, and frequency coherences. Instead of just relying on the temporal coherence of motion vectors, we can combine it with spatial and frequency coherences. By having rules for estimating missing block coefficients using the received coefficients and neighboring blocks in the same frame, we can conceal errors for intra-frames and for frames with damaged motion vector information.

    Additionally, combining with prediction using motion vectors will give us a better approximation of the prediction error block. Missing block coefficients can be estimated spatially by minimizing the error of a smoothness function defined over the block and neighboring blocks. For simplicity, the smoothness function can be chosen as the sum of squared differences of pairwise neighboring pixels in the block. The function unknowns are the missing coefficients. In the case where motion information is available, prediction smoothness is added to the objective function for minimization, weighted as desired.

    The simple smoothness measure defined above has the problem that it smoothes edges as well. We can attempt to do better by increasing the order of the smoothing criterion from linear to quadratic or cubic. This will increase the chances of both reconstructing edges and smoothing along the edge direction. At a larger computational cost, we can use an edge-adaptive smoothing method, whereby the edge directions inside the block are first determined, and smoothing is not permitted across edges.

  3. Frequency smoothing for high-frequency coefficients. Smoothing can be defined much more simply, to save on computational cost. Although the human visual system is more sensitive to low frequencies, it would be disturbing to see a checkerboard pattern where it does not belong. This will happen when a high-frequency coefficient is erroneously assigned a high value. The simplest remedy is to set high-frequency coefficients to 0 if they are damaged.

    If the frequencies of neighboring blocks are correlated, it is possible to estimate lost coefficients directly in the frequency domain. For each missing frequency coefficient in a block, we estimate its value by interpolating the values of the same frequency coefficient in the four neighboring blocks. At higher frequencies, this is applicable only if the image has regular patterns, which is not usually the case for natural images, so most of the time the high coefficients are again set to 0. Temporal prediction error blocks are even less correlated at all frequencies, so this method applies only to intra-frames.

  4. Estimation of lost motion vectors. Loss of motion vectors prevents the decoding of an entire predicted block, so it is important to estimate motion vectors well. The easiest estimate for a lost motion vector is 0, which works well only in the presence of very little motion. A better estimate is obtained by examining the motion vectors of reference macroblocks and of neighboring macroblocks. Assuming motion is coherent over time, it is reasonable to take the motion vectors of the corresponding macroblock in the reference frame as the motion vectors for the damaged target block.

    Similarly, assuming objects with consistent motion fields occupy more than one macroblock, the motion vector for the damaged block can be approximated as an interpolation of the motion vectors of the surrounding blocks that were received correctly. Typical simple interpolation schemes are the weighted average and the median (see the sketch following this list). Also, the spatial estimation of the motion vector can be combined with the estimation from the reference frame using weighted sums.
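As an illustration of item 4, the following minimal Python sketch (the neighbor layout and the fallback policy are assumptions for this sketch, not part of any standard) estimates a lost motion vector as the component-wise median of correctly received neighboring vectors, falling back to the co-located vector of the reference frame:

```python
from statistics import median

def conceal_motion_vector(neighbors, reference_mv=(0, 0)):
    """Estimate a lost motion vector from its spatial neighborhood.

    neighbors: motion vectors (dx, dy) of surrounding macroblocks that
    were received correctly; may be empty if the whole area is damaged.
    reference_mv: vector of the co-located macroblock in the reference
    frame, used as a fallback (temporal coherence assumption).
    """
    if not neighbors:
        return reference_mv                  # no spatial information available
    dx = median(v[0] for v in neighbors)     # component-wise median is robust
    dy = median(v[1] for v in neighbors)     # to a single outlier neighbor
    return (dx, dy)

# e.g. conceal_motion_vector([(4, 1), (5, 2), (30, -9)]) -> (5, 1)
```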

Forward Error Correction (FEC)

Some data are vitally important for correct decoding. Missing DCT coefficients may be estimated, or their effect visually concealed to some degree. However, the loss or wrong estimation of some data, such as the picture coding mode, the quantization level, or most data in the higher layers of a video standard's protocol stack, will cause catastrophic video decoding failure. In such cases, we would like to ensure "error-free" transmission. However, most channels, in particular wireless channels, are noisy, and since retransmission is not an option when no back channel is available, we must add adequate redundancy to ensure correct transmission.

Forward Error Correction (FEC) is a technique that adds redundant data to a bitstream to recover some random bit errors in it. Ideally, the channel packet error rate (or bit error rate) is estimated, and enough redundancy is added to make the probability of error after FEC recovery low.

The interval over which the packet error rate is estimated is chosen to be the smallest possible (to minimize latency and computational cost) that still reliably estimates the frame-loss probability. Naturally, when burst frame loss occurs, the estimate may no longer be adequate. Frame errors are also called erasures, since an entire packet is dropped on an error.

Videos have to be transmitted over a channel with limited bandwidth. Therefore, it is important to minimize redundancy, because it comes at the expense of the bitrate available for video source coding. At the same time, enough redundancy is needed so that the video can maintain the required QoS under the current channel error conditions. There is an optimal amount of redundancy that minimizes video distortion under given channel conditions.

FEC codes in general fall into two categories: block codes and convolutional codes. Block codes apply to a group of bits at once to generate redundancy. Convolutional codes apply to a string of bits one at a time and have memory that can store previous bits as well. The following presents both types of FEC codes in brief.

Block Codes. Block codes take as input k bits and append r = n - k bits of FEC data, resulting in an n-bit-long string. These codes are referred to as (n, k) codes. The two types of block codes are linear and cyclic. All error correction codes operate by adding space between valid source strings. The space is measured using the Hamming distance, defined as the minimum number of bit positions in which any two valid codewords differ.

To detect r errors, the Hamming distance has to be at least r + 1; otherwise, a corrupted string might look like another valid codeword. This is not sufficient for correcting r errors, however, since there is not enough distance among valid codewords to choose the preferable correction. To correct r errors, the Hamming distance must be at least 2r + 1. Linear codes are simple to compute but have higher coding overhead than cyclic codes.
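A small Python illustration of these bounds, using the trivial (3, 1) repetition code, whose minimum Hamming distance of 3 = 2r + 1 lets it correct r = 1 error:

```python
def hamming_distance(a: int, b: int) -> int:
    """Number of bit positions in which two codewords differ."""
    return bin(a ^ b).count("1")

# A toy (n=3, k=1) repetition code: 0 -> 000, 1 -> 111.
# Minimum distance 3 = 2r + 1 with r = 1, so any single bit error is
# corrected by choosing the nearest valid codeword (majority vote).
codewords = {0b000: 0, 0b111: 1}

def decode(received: int) -> int:
    return min(codewords, key=lambda c: hamming_distance(c, received))

assert codewords[decode(0b010)] == 0   # one flipped bit is corrected
assert codewords[decode(0b110)] == 1   # but two flips decode wrongly
```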

Cyclic codes are stated in terms of generator polynomials, whose maximum degree equals the number of redundancy bits, r = n - k. The source bits are the coefficients of a message polynomial, and the codeword is generated by multiplying it with the generator polynomial. The code is cyclic because arithmetic is carried out modulo x^n - 1, so shifting a codeword's coefficients yields another codeword.

One of the most widely used classes of cyclic codes is the Bose-Chaudhuri-Hocquenghem (BCH) codes, since they apply to any binary string. The generator polynomial for a BCH code is given over GF(2) (the binary Galois field) and is the lowest-degree polynomial whose roots include α^i, where α is a primitive element of the extension field and i ranges from 1 to twice the number of bit errors we wish to correct.

BCH codes can be encoded and decoded quickly using integer arithmetic, since they use Galois fields. H.261 and H.263 use a BCH code that adds 18 parity bits for every 493 source bits. Unfortunately, the 18 parity bits can correct at most two errors in the source block. Thus, the packets are still vulnerable to burst bit errors or losses of entire packets.
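As a toy illustration of cyclic encoding over GF(2) (not the actual (511, 493) BCH code of H.261/H.263, whose generator polynomial is far longer), the sketch below multiplies a 4-bit message polynomial by g(x) = x^3 + x + 1, the generator of the (7, 4) cyclic Hamming code, the simplest single-error-correcting BCH code:

```python
def gf2_poly_mul(a: int, b: int) -> int:
    """Carry-less (GF(2)) polynomial multiplication; ints are bit masks."""
    out, shift = 0, 0
    while b:
        if b & 1:
            out ^= a << shift   # add (XOR) a shifted copy for each set bit
        b >>= 1
        shift += 1
    return out

G = 0b1011                       # g(x) = x^3 + x + 1, (7,4) Hamming/BCH code
msg = 0b1001                     # message polynomial x^3 + 1
codeword = gf2_poly_mul(msg, G)  # 7-bit codeword, minimum distance 3
print(bin(codeword))             # 0b1010011; any single bit error is correctable
```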

Figure: Interleaving scheme for redundancy codes. Packets or bits are stored in rows, and redundancy is generated in the last r columns. The sending order is by columns, top to bottom, then left to right.

An important subclass of BCH codes that applies to multiple packets is the Reed-Solomon (RS) codes. RS codes have a generator polynomial over GF(2^m), with m being the packet size in bits. RS codes take a group of k source packets and output n packets, with r = n - k redundancy packets. Up to r lost packets can be recovered from the n coded packets if we know the erasure points. Otherwise, as with all FEC codes, only half as many packets (similarly, bits) can be recovered, since the error locations must be detected as well.

That is, without erasure information, only ⌊r/2⌋ packets can be recovered. Fortunately, in the packet FEC scenario, the packets have headers that contain a sequence number, and CRC codes at the physical layer detect errors. In most cases, a packet with an error is dropped, and we can tell the location of the missing packet from the missing sequence number. RS codes are used in storage media such as CD-ROMs and in network multimedia transmissions that can have burst errors.

It is also possible to use packet interleaving to increase resilience to burst packet loss. An RS code is generated for each of the h rows of k source video packets. The packets are then transmitted in column-major order, so that the first packet of each of the h rows is transmitted first, then the second, and so on. If a burst packet loss occurs, we can tolerate more than r erasures, since the burst is spread across rows and each row carries its own redundancy. This scheme introduces additional delay but does not increase computational cost.
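A minimal Python sketch of this interleaving, using a single XOR parity packet per row as a stand-in for a real RS code (so r = 1 per row here, and one lost packet per row can be rebuilt once its sequence number identifies the erasure point):

```python
from functools import reduce

def xor_parity(packets):
    """One parity packet for a row: XOR of all packets (stand-in for RS)."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def interleave(rows):
    """Append parity to each row, then emit packets in column-major order."""
    coded = [row + [xor_parity(row)] for row in rows]
    n = len(coded[0])
    return [coded[i][j] for j in range(n) for i in range(len(coded))]

def recover(row_packets_received, parity):
    """Rebuild the one missing packet of a row: XOR of parity and the rest."""
    return xor_parity(row_packets_received + [parity])

# Three rows of k = 4 packets each: a burst loss of three consecutive
# packets on the wire hits three *different* rows, one packet per row,
# so every row can still recover its missing packet.
rows = [[bytes([r * 4 + c] * 8) for c in range(4)] for r in range(3)]
wire = interleave(rows)
```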

RS codes can be useful for transmission over packet networks. With burst packet losses, packet interleaving, and packet sequencing, it is possible to detect which packets were received incorrectly and recover them using the available redundancy. If the video is scalable, a better use of the allocated bandwidth is to apply adequate FEC protection to the base layer, which contains the motion vectors and all header information required to decode the video at the minimum QoS. The enhancement layers can receive either less protection or none at all, relying just on resilient coding and error concealment. Either way, the minimum QoS is still achieved.

A disadvantage of block codes is that they cannot be selectively applied to certain bits. It is difficult to protect higher-protocol-layer headers with more redundancy bits than, say, DCT coefficients, if they are sent in the same transport packet (or even group of packets). Convolutional codes, on the other hand, can do this, which makes them more efficient for data in which unequal protection is advantageous, such as video. Although convolutional codes are not as effective against burst packet loss, burst packet loss is not predominant on wireless radio channels (and is not present in most propagation models).

Convolutional Codes. Convolutional FEC codes are also defined over generator polynomials. They are computed by shifting k message bits into a coder that convolves them with the generator polynomials to generate n bits. The rate of such a code is defined to be k/n. The shifting is necessary, since coding is achieved using memory (shift) registers. There can be more than k registers, in which case past bits also affect the redundancy code generated.

After producing the n bits, some redundancy bits can be deleted (or "punctured") to decrease n and increase the rate of the code. Such FEC schemes are known as rate-compatible punctured convolutional (RCPC) codes. The higher the rate, the lower the bit protection, but also the lower the overhead on the bitrate. A Viterbi algorithm with soft decisions decodes the encoded bitstream, although turbo codes are gaining popularity.
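The sketch below shows the classic rate-1/2 convolutional encoder with generator polynomials 7 and 5 (octal), followed by a simple puncturing pattern that raises the rate to 2/3; real RCPC families use standardized tables of such patterns, so both the pattern and the tiny message here are illustrative:

```python
def conv_encode(bits, g=(0b111, 0b101)):
    """Rate-1/2 convolutional encoder, constraint length 3.

    g holds the two generator polynomials (7 and 5 in octal, the classic
    textbook pair); a 2-bit shift register stores the two past inputs.
    """
    state = 0
    out = []
    for b in bits:
        reg = (b << 2) | state                   # current bit + two past bits
        for poly in g:                           # one output bit per generator
            out.append(bin(reg & poly).count("1") % 2)
        state = reg >> 1                         # advance the shift register
    return out

def puncture(coded, pattern=(1, 1, 1, 0)):
    """Delete bits where the repeating pattern has a 0 (RCPC-style).

    Keeping 3 of every 4 coded bits turns the rate-1/2 mother code into
    a rate-2/3 code: less protection, but less bitrate overhead.
    """
    return [c for i, c in enumerate(coded) if pattern[i % len(pattern)]]

coded = conv_encode([1, 0, 1, 1])   # 8 output bits at rate 1/2
high_rate = puncture(coded)         # 6 bits remain, effective rate 2/3
```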

RCPC codes provide an advantage over block codes on the wireless (sections of the) network, since burst packet losses are unlikely there. RCPC puncturing is done after the parity information is generated. Knowing the significance of the source bits for video quality, we can apply different amounts of puncturing, and hence different amounts of error protection. Studies and simulations of wireless radio models have shown that applying unequal protection with RCPC according to bit-significance information yields better video quality (up to 2 dB better) for the same allocated bitrate than protecting the video with RS codes.

Simplistically, the Picture layer in a video protocol should get the highest protection; the Macroblock layer, which is more localized, gets lower protection; and the DCT coefficients in the Block layer can get little protection, or none at all. This scheme can be extended to scalable videos in similar ways.

The cdma2000 standard uses convolutional codes to protect transmitted bits for any data type, with different code rates for different transmission bitrates. If future 3G networks incorporate data-type-specific provisions and recognize the video standard chosen for transmission, they can adaptively apply transport coding of the video stream, with enough unequal redundancy to suit the channel conditions at the time and the QoS requested.

Trends in Wireless Interactive Multimedia

The UMTS Forum foresees that by 2010, the number of subscribers to wireless multimedia communication will exceed a billion worldwide, and such traffic will be worth several hundred billion dollars to operators. Additionally, 3G will speed the convergence of telecommunications, computers, multimedia content, and content providers to support enhanced services.

Most cellular networks around the world have already offered 2.5G services for a few years. Initial 3G services are also being offered globally, with cdma2000 1X service already commercially available in most countries.

Some of the present and future 3G applications are:

  1. Multimedia Messaging Service (MMS), a new messaging protocol for multimedia data on mobile phones that incorporates audio, images, and other multimedia content, along with traditional text messages
  2. Mobile videophone, VoIP, and voice - activated network access
  3. Mobile Internet access, with streaming audio and video services
  4. Mobile intranet/extranet access, with secure access to corporate LANs, Virtual Private Networks (VPNs), and the Internet
  5. Customized infotainment services that provide access to personalized content anytime, anywhere, based on mobile portals
  6. Mobile online multiuser gaming
  7. Ubiquitous and pervasive computing, such as automobile telematics, where an automated navigation system equipped with GPS and voice recognition can interact with the driver to obviate reading maps while driving.

The industry has long envisioned the convergence of IT, entertainment, and telecommunications. A major portion of the telecommunication field is dedicated to handheld wireless devices — the mobile stations (cell phones). At the same time, the computer industry has focused on creating handheld computers that can do at least some important tasks necessary for people on the go. Handheld computers are classified as Pocket PCs or PDAs.

Pocket PCs are typically larger, have a keyboard, and support most functions and programs of a desktop PC. PDAs do simpler tasks, such as storing event calendars and phone numbers. PDAs normally use a form of handwriting recognition for input, although some incorporate keyboards as well. PDA manufacturers are striving to support more PC - like functions and at the same time provide wireless packet services (including voice over IP), so that a PDA can be used as a phone as well as for wireless Internet connectivity.

As with all small portable computers, the Human Computer Interaction (HCI) problem is more significant than when using a desktop computer. Where there is no space for a keyboard, it is envisioned that command input will be accomplished through voice recognition.

Most of the new PDA products support image and video capture, MP3 playback, e-mail, and wireless protocols such as 802.11b and Bluetooth. Some also act as cell phones when connected to a GPRS or PCS network (e.g., the Handspring Treo). They have color screens and support web browsing and multimedia e-mail messaging. Some Bluetooth-enabled PDAs rely on Bluetooth-compatible cell phones to access mobile networks. However, as cell phones become more powerful and PDAs incorporate 802.11b interface cards, Bluetooth might become less viable.

As PDA manufacturers look to the future, they wish to support not only voice communication over wireless networks but also multimedia, such as video communication. Some PDAs incorporate advanced digital cameras with flash and zoom (e.g., the Sony CLIE). The encoding of video can be done using MPEG - 4 or H.263, and the PDA could support multiple playback formats.

Cell phone manufacturers, for their part, are trying to incorporate more computer-like functionality, including the basic tasks supported by PDAs, web browsing, games, image and video capture, attachments to e-mail, streaming video, videoconferencing, and so on. Growth in demand is steady for interactive multimedia, in particular image and video communications.

Most cell phone manufacturers and mobile service providers already support some kind of image or video communication, either in the form of e-mail attachments, video streaming, or even videoconferencing. Similar to the Short Message Service (SMS), the new Multimedia Messaging Service (MMS) protocol is gaining industry support as an interim solution to the bandwidth limitation. New cell phones feature color displays and built-in digital cameras. Most cell phones use integrated CMOS sensors, and some handsets even have two of them. By 2004, the number of camera sensors on mobile phones is estimated to exceed the number of digital cameras sold worldwide.

Cell phones have supported web browsing and e-mail functionality for a few years, but with packet services, Bluetooth, and MMS, they can support video streaming in various formats and MP3 playback. Some cell phones even include a touch screen that uses handwriting recognition and a stylus, as most PDAs do. Other cell phones are envisioned to be small enough to be worn, like a wristwatch.

