Modulation
Modulation is the process where a Radio Frequency or Light Wave's amplitude, frequency, or phase is changed in order to transmit intelligence. The characteristics of the carrier wave are instantaneously varied by another "modulating" waveform.
There are many ways to modulate a signal:
Amplitude Modulation
Frequency Modulation
Phase Modulation
Pulse Modulation
Additionally, digital signals usually require an intermediate modulation step for transport across wideband, analog-oriented networks.
Amplitude Modulation (AM)
Amplitude Modulation occurs when a voice signal's varying voltage is applied to a carrier frequency. The carrier frequency's amplitude changes in accordance with the modulating voice signal, while the carrier's frequency does not change.
When combined, the resultant AM signal consists of the carrier frequency plus UPPER and LOWER sidebands. This is known as Double Sideband - Amplitude Modulation (DSB-AM), more commonly referred to as plain AM.
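For the curious, the standard one-tone expansion shows exactly where the sidebands come from (this is textbook AM math, with modulation index m and a single modulating tone at frequency omega_m, not anything specific to the systems described here):

s(t) = A_c [1 + m\cos(\omega_m t)] \cos(\omega_c t)
     = A_c \cos(\omega_c t) + \frac{m A_c}{2}\cos((\omega_c + \omega_m)t) + \frac{m A_c}{2}\cos((\omega_c - \omega_m)t)

The first term is the carrier; the second and third terms are the UPPER and LOWER sidebands, offset above and below the carrier by the modulating frequency.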
The carrier frequency may be suppressed or transmitted at a relatively low level. This requires that the carrier frequency be generated, or otherwise derived, at the receiving site for demodulation. This type of transmission is known as Double Sideband - Suppressed Carrier (DSB-SC).
It is also possible to transmit a SINGLE sideband, at a slight sacrifice in low frequency response (it is difficult to suppress the carrier and the unwanted sideband without some low frequency filtering as well). The advantage is a reduction in the analog bandwidth needed to transmit the signal. This type of modulation, known as Single Sideband - Suppressed Carrier (SSB-SC), is ideal for Frequency Division Multiplexing (FDM).
Another type of analog modulation is known as Vestigial Sideband. Vestigial Sideband modulation is much like Single Sideband, except that the carrier frequency is preserved and one sideband is only partially removed by filtering, leaving a small remnant (vestige) of it, hence the name. Analog bandwidth requirements are a little greater than for Single Sideband, however.
Vestigial Sideband transmission is usually found in television broadcasting. Such broadcast channels require 6 MHz of ANALOG bandwidth, in which an Amplitude Modulated PICTURE carrier is transmitted along with a Frequency Modulated SOUND carrier.
Frequency Modulation (FM)
Frequency Modulation occurs when a carrier's CENTER frequency is changed based upon the input signal's amplitude. Unlike Amplitude Modulation, the carrier signal's amplitude is UNCHANGED. This makes FM more immune to noise than AM and improves the overall signal-to-noise ratio of the communications system. Power output is also constant, unlike the varying power output of AM.
The amount of analog bandwidth necessary to transmit an FM signal is greater than the amount necessary for AM, a limiting constraint for some systems.
Phase Modulation
Phase Modulation is similar to Frequency Modulation. Instead of the frequency of the carrier wave changing, the PHASE of the carrier changes.
As you might imagine, this type of modulation is easily adaptable to data modulation applications.
Pulse Modulation (PM)
With Pulse Modulation, a "snapshot" (sample) of the waveform is taken at regular intervals. There are a variety of Pulse Modulation schemes:
Pulse Amplitude Modulation
Pulse Code Modulation
Pulse Frequency Modulation
Pulse Position Modulation
Pulse Width Modulation
Pulse Amplitude Modulation (PAM)
In Pulse Amplitude Modulation, a pulse is generated with an amplitude corresponding to that of the modulating waveform. Like AM, it is very sensitive to noise.
While PAM was deployed in early AT&T Dimension PBXs, there are no practical implementations in use today. However, PAM is an important first step in a modulation scheme known as Pulse Code Modulation.
Pulse Code Modulation (PCM)
In Pulse Code Modulation, PAM samples (collected at regular intervals) are quantized. That is to say, the amplitude of the PAM pulse is assigned a digital value (number). This number is transmitted to a receiver that decodes the digital value and outputs the appropriate analog pulse.
The fidelity of this modulation scheme depends upon the number of bits used to represent the amplitude. The frequency range that can be represented through PCM modulation depends upon the sample rate. To prevent a condition known as "aliasing", the sample rate MUST BE AT LEAST twice that of the highest supported frequency. For typical voice channels (4 KHz frequency range), the sample rate is 8 KHz.
Where is PCM today? Well, it's EVERYWHERE! A typical PCM voice channel today operates at 64 KBPS (8 bits/sample * 8000 samples/sec). But other PCM schemes are widely deployed in today's audio (CD/DAT) and video systems!
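As a rough illustration of the sampling and quantization steps, here is a minimal Python sketch under simplifying assumptions (linear quantization; real telephony channels use mu-law or A-law companding instead):

import math

SAMPLE_RATE = 8000          # 8 KHz: twice the 4 KHz voice bandwidth (Nyquist)
BITS_PER_SAMPLE = 8
LEVELS = 2 ** BITS_PER_SAMPLE

def pcm_encode(signal):
    """Quantize samples in [-1.0, 1.0] to unsigned 8-bit codes (linear PCM)."""
    codes = []
    for s in signal:
        code = int((s + 1.0) / 2.0 * (LEVELS - 1))   # map [-1,1] -> [0,255]
        codes.append(code)
    return codes

def pcm_decode(codes):
    """Map 8-bit codes back to approximate analog sample values."""
    return [c / (LEVELS - 1) * 2.0 - 1.0 for c in codes]

# One second of a 1 KHz test tone, sampled at 8 KHz:
tone = [math.sin(2 * math.pi * 1000 * n / SAMPLE_RATE) for n in range(SAMPLE_RATE)]
codes = pcm_encode(tone)
print(len(codes) * BITS_PER_SAMPLE, "bits/sec")      # 64000 -> 64 KBPS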
Pulse Frequency Modulation (PFM)
With PFM, pulses of equal amplitude are generated at a rate proportional to the modulating signal's amplitude. The irregular arrival rate of the pulses makes this unsuitable for transmission through Time Division Multiplexing (TDM) systems.
Pulse Position Modulation (PPM)
Also known as Pulse Time Modulation, PPM is a scheme where pulses of equal amplitude and width are generated at positions (time offsets) controlled by the modulating signal's amplitude. Again, the irregular arrival times of the pulses make this unsuitable for transmission using TDM techniques.
Pulse Width Modulation (PWM)
In PWM, pulses are generated at a regular rate. The length of the pulse is controlled by the modulating signal's amplitude. PWM is unsuitable for TDM transmission due to the varying pulse width.
Digital Signal Modulation
Digital signals need to be processed by an intermediate stage for conversion into analog signals for transmission. The device that accomplishes this conversion is known as a "Modem" (MODulator/DEModulator).
Multiplexing Techniques
Multiplexing is the process where multiple channels are combined for transmission over a common transmission path.
There are two predominant ways to multiplex:
Frequency Division Multiplexing
Time Division Multiplexing
Frequency Division Multiplexing (FDM)
In FDM, multiple channels are combined onto a single aggregate signal for transmission. The channels are separated in the aggregate by their FREQUENCY.
There are always some unused frequency spaces between channels, known as "guard bands". These guard bands reduce the effects of "bleedover" between adjacent channels, a condition more commonly referred to as "crosstalk".
FDM was the first multiplexing scheme to enjoy widescale network deployment, and such systems are still in use today. However, Time Division Multiplexing is the preferred approach today, due to its ability to support native data I/O (Input/Output) channels.
FDM Data Channel Applications
Data channel FDM multiplexing is usually accomplished by "modem stacking". In this case, a data channel's modem is set to a specific operating frequency. Different modems with different frequencies can then be combined over a single voice line. As the number of these "bridged" modems on a specific line changes, the individual modem outputs need adjustment ("tweaking") so that the proper composite level is maintained. This VF level is known as the "Composite Data Transmission Level" and is almost universally -13 dBm0.
Although such units supported up to 1200 BPS data modem rates, the most popular implementation was a low-speed FDM multiplexer known as the Voice Frequency Carrier Terminal (VFCT).
FDM Voice Channel Applications
Amplitude Modulation (AM), using Single Sideband-Suppressed Carrier (SSB-SC) techniques, is used for voice channel multiplexing. Basically, a 4 KHz signal is multiplexed ("heterodyned") using AM techniques. Filtering removes the upper sideband and the carrier signal. Other channels are multiplexed as well, but use different carrier frequencies.
Advances in radio technology, particularly the development of the Reflex Klystron and integrated modulators, resulted in huge FDM networks. One of the most predominant FDM schemes was known as "L-Carrier", suitable for transmission over coaxial cable and wideband radio systems.
Time Division Multiplexing
Timeplex is probably the best in the business (IMHO) at Time Division Multiplexing, as it has 25+ years of experience. When Timeplex was started by a couple of ex-Western Union guys in 1969, it was among the first commercial TDM companies in the United States. In fact, "Timeplex" was derived from TIME division multiPLEXing!
In Time Division Multiplexing, channels "share" the common aggregate based upon time! There are a variety of TDM schemes, discussed in the following sections:
Conventional Time Division Multiplexing
Statistical Time Division Multiplexing
Cell-Relay/ATM Multiplexing
Conventional Time Division Multiplexing (TDM)
Conventional TDM systems usually employ either Bit-Interleaved or Byte-Interleaved multiplexing schemes as discussed in the subsections below.
Clocking (Bit timing) is critical in Conventional TDM. All sources of I/O and aggregate clock frequencies should be derived from a central, "traceable" source for the greatest efficiency.
Bit-Interleaved Multiplexing
In Bit-Interleaved TDM, a single data bit from an I/O port is output to the aggregate channel. This is followed by a data bit from another I/O port (channel), and so on, and so on, with the process repeating itself.
A "time slice" is reserved on the aggregate channel for each individual I/O port. Since these "time slices" for each I/O port are known to both the transmitter and receiver, the only requirement is for the transmitter and receiver to be in-step; that is to say, being at the right place (I/O port) at the right time. This is accomplished through the use of a synchronization channel between the two multiplexers. The synchronization channel transports a fixed pattern that the receiver uses to acquire synchronization.
Total I/O bandwidth (expressed in Bits Per Second - BPS) cannot exceed that of the aggregate (minus the bandwidth requirements for the synchronization channel).
Bit-Interleaved TDM is simple and efficient and requires little or no buffering of I/O data. A single data bit from each I/O channel is sampled, then interleaved and output in a high speed data stream.
Unfortunately, Bit-Interleaved TDM does not fit in well with today's microprocessor-driven, byte-based environment!
Byte-Interleaved Multiplexing
In Byte-Interleaved multiplexing, complete words (bytes) from the I/O channels are placed sequentially, one after another, onto the high speed aggregate channel. Again, a synchronization channel is used to synchronize the multiplexers at each end of the communications facility.
For an I/O payload that consists of synchronous channels only, the total I/O bandwidth cannot exceed that of the aggregate (minus the synchronization channel bandwidth). But for asynchronous I/O channels, the aggregate bandwidth CAN BE EXCEEDED if the aggregate byte size is LESS than the total asynchronous I/O character size (Start + Data + Stop bits). This works because the actual CHARACTER transmission rate of the asynchronous data is LESS THAN the synchronous CHARACTER rate serviced by the TDM.
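Some worked numbers make this concrete (illustrative values, not from any particular product). A 9600 BPS async port sending 10-bit characters (1 start + 8 data + 1 stop) delivers only 960 characters per second, so a TDM that carries just the 8 data bits of each character needs only 7680 BPS of aggregate for that port:

io_rate = 9600                              # async I/O line rate, bits/sec
async_char_bits = 1 + 8 + 1                 # start + data + stop
chars_per_sec = io_rate / async_char_bits   # 960.0 characters/sec
aggregate_bits = chars_per_sec * 8          # only the data bits are carried
print(chars_per_sec, aggregate_bits)        # 960.0 7680.0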
Byte-Interleaved TDMs were heavily deployed from the late 1970s to around 1985. These units could support up to 256 KBPS aggregates but were usually found in 4.8 KBPS to 56 KBPS DDS and VF-modem environments. In those days, 56 KBPS DDS pipes were very high speed circuits. Imagine!
In 1984, with the divestiture of AT&T and the launch of T1 facilities and services, many companies jumped into the private networking market, pioneering a generation of intelligent TDM networks.
Statistical Time Division Multiplexing (STDM)
Statistical TDMs utilize aggregate bandwidth only when there is actual data to be transported from the I/O ports. Data STDMs can be divided into two categories:
Conventional STDM
Frame Relay/X.25 Networking
An additional ANALOG TDM system, known as Time Assignment Speech Interpolation, is also discussed.
Conventional STDM
The Statistical Multiplexer (or "statmux") utilizes a different form of Time Division Multiplexing. These multiplexers typically use an HDLC-like frame for aggregate communications between units. As I/O traffic arrives at the mux, it is buffered, then inserted into the I-Field of the HDLC frame. The receiving unit removes the I/O traffic from the aggregate HDLC frame.
Statistical Multiplexers are ideally suited for the transport of asynchronous I/O data, since they can take advantage of the inherent latency in asynchronous communications. However, they can also multiplex synchronous protocols through "spoofing" and prioritization, again taking advantage of the latency between blocks/frames.
Statistical Multiplexers are typically faster at transporting I/O data End-To-End than X.25 systems. The total I/O bandwidth can (and usually does) exceed the aggregate port bandwidth.
Later, many of these multiplexers incorporated "switching" mechanisms that allowed I/O ports to "intelligently" connect themselves to other destination ports upon user command. While somewhat functioning as an X.25 switch, these Statistical Multiplexers were usually faster, and provided more transparent I/O data-carrying capacity.
Statistical TDM's biggest disadvantage is that it is I/O-protocol sensitive, so these units have difficulty supporting "transparent" I/O data and unusual protocols. To support these I/O data types, many statmux systems have provisions for Conventional TDM I/O traffic through the use of adjunct/integrated modules.
Conventional STDM was very popular in the late 1970s to mid 1980s and is still used today, although the market for these units is dwindling.
Frame Relay and X.25 Networking
Frame Relay and X.25 systems are also categorized as Statistical TDMs. Both of these systems utilize aggregate HDLC frame structures, and both of these systems can interoperate with both Private and Public systems.
The advantage of Frame Relay over X.25 is that it can support the same traffic as X.25 while facilitating "bandwidth on demand" for "bursty" traffic (e.g. LANs). Public Frame Relay services are available, offering customers additional methods to interconnect LANs, rather than having dedicated Wide Area Network (WAN) links.
Frame Relay, however, cannot adequately support voice or video traffic because of variable End-To-End delivery times (e.g. variable delay). Voice and video transmissions are of a "Constant Bit Rate" (CBR) nature, and do not fare well sitting in a queue waiting for a big LAN packet to finish transmitting.
Care must be taken when deploying Frame Relay technologies. In the Internet world, it is easy to overload trunk capacities at the end-points of an IP connection. In fact, this overloading is a "cheap" way to add increased I/O traffic (and users) without increasing aggregate bandwidth. Unfortunately, when Frame Relay starts discarding I/O data frames, the impact on Internet applications is very noticeable, since IP retransmission timeouts are so long! This same detrimental effect is also experienced in wireless LANs.
Time Assignment Speech Interpolation (TASI)
TASI systems represent an example of an ANALOG Statistical Time Division Multiplexing scheme. These systems enjoyed limited use in the 1980s, and were particularly adept at sharing voice circuits; specifically PBX trunks.
A TASI multiplexer is interconnected between the PBX and the trunk facilities. Usually, one analog trunk circuit is used for signaling purposes between TASI units at each end of the link. The remaining voice trunks support analog TASI TDM voice conversations.
In normal telephone conversations, a majority of time is spent in a latent (idle) state. During this idle time, TASI allocates the trunk to "snippets" of voice from other channels. If an individual were to monitor these TASI trunks, they would hear bits and pieces of various conversations. The signaling channel is used for the signaling conversion between End-Point PBX (Private Branch Exchange) units and also for the allocation of bandwidth once incoming speech energy has been detected.
As digital speech processing became more common, TASI systems were created that had analog inputs, and digital outputs. This type of multiplexing technique is more commonly known as "Digital Speech Interpolation" (DSI).
Unfortunately, TASI and DSI systems suffer from a few drawbacks. First, users may notice a lot of voice "clipping". This occurs when a little bit of speech is lost while the TASI mux detects valid speech and allocates bandwidth; clipping also occurs when no bandwidth happens to be available at that moment. Also, TASI and DSI units are very sensitive to audio input levels and may have problems with the transport of voiceband data (e.g. VF modem) signals.
Cell-Relay Transmission
In Cell-Relay systems, data is broken up into basic units (called "cells") and transported through the network. A standard cell-size is defined as consisting of 53 8-bit bytes. These 53 bytes consist of 48 bytes of Payload (data) and 5 bytes of Header (routing) information.
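A small sketch may help make the header/payload split concrete. This Python fragment unpacks the 5-byte header of a UNI cell (GFC/VPI/VCI/PT/CLP/HEC field widths per the ATM standard); a real receiver would also verify the HEC, a CRC-8 over the first four header bytes, which is skipped here:

def parse_uni_cell(cell: bytes):
    assert len(cell) == 53, "an ATM cell is exactly 53 bytes"
    h = cell[:5]                                  # 5-byte header
    gfc = h[0] >> 4                               # Generic Flow Control
    vpi = ((h[0] & 0x0F) << 4) | (h[1] >> 4)      # Virtual Path Identifier
    vci = ((h[1] & 0x0F) << 12) | (h[2] << 4) | (h[3] >> 4)  # Virtual Channel Id
    pt  = (h[3] >> 1) & 0x07                      # Payload Type
    clp = h[3] & 0x01                             # Cell Loss Priority
    hec = h[4]                                    # Header Error Control
    payload = cell[5:]                            # 48-byte payload
    return gfc, vpi, vci, pt, clp, hec, payload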
Cell-Relay operation is somewhat analogous to a processor bus. Instead of a 32-bit data bus, there is a 53-byte data bus. Instead of a 64-bit address bus, there is a 5-byte address bus. While the bus operates synchronously, under control of a bus clock, the bus function itself is asynchronous (similar in operation to an ordinary microprocessor bus). But instead of the transfer being parallel, a high-speed serial ATM facility is used. That's Cell Relay!
As mentioned in the paragraph above, the bus function is asynchronous. That means that the I/O data (the CPU modules in the above example) will immediately arbitrate for the facility (the processor bus) when there is data to send. If there is a conflict, somebody loses, and data is lost. It is up to the application to recover (or not!) from the error condition.
Cell-Relay operation can be summarized as being similar to Conventional TDM, except that it has the properties of asynchronous transfer. When operating with very high speed facilities, Cell-Relay has the ability to integrate Local Area Networks (LANs) and Wide Area Networks (WANs).
Some Cell-Relay transmission services are now under development, or available on a limited basis:
Asynchronous Transfer Mode (ATM)
Switched MultiMegabit Data Service (SMDS)
Asynchronous Transfer Mode (ATM)
ATM is a cell-based transport mechanism that evolved from the development of the Broadband ISDN (B-ISDN) standards. ATM does not stand for Automatic Teller Machine, or Ascom Timeplex Multiplexers (although that might apply!); rather, it defines the asynchronous transport of cells (Cell-Relay). Perhaps even more important, ATM is associated with a process known as the ATM Adaptation Layer (AAL), which describes how various I/O traffic types are converted into cells!
The Adaptation process and the serial transport of cells is commonly referred to as "Fast Packet Multiplexing" (FPM). While similar in concept, FPMs do not necessarily conform to ATM standards or switching conventions.
Switched MultiMegabit Data Service (SMDS)
Similar to ATM, but designed for operation at lower rates (64 KBPS - 155.520 MBPS). SMDS services ARE AVAILABLE NOW from many Local Exchange Carriers (LECs). International and domestic Carrier services are available as well.
SMDS offers customers alternatives to Frame Relay transport. SMDS addressing utilizes the CCITT (now ITU) E.164 addressing scheme, making addressing much more manageable for customers. Also, SMDS is available at higher rates than Frame Relay (which typically tops out at T1 rate - 1.544 MBPS).
Switching Techniques
Think how things would be if you could only use your telephone to talk to just one other person! You would not be very productive. So there are requirements for switching systems to route your calls around the world.
There are a number of ways to perform switching:
Circuit Switching
Packet Switching
Message Switching
Cell Switching
Circuit Switching
This method involves the physical interconnection of two devices. A good example of circuit switching involves the Public phone network. A data example would be the classic A/B switch!
Packet Switching
Packet Switching techniques switch packets of data between destinations. Traditionally this applied to X.25 techniques, but it also applies to TCP/IP and IPX/SPX routers. Proprietary Frame Relay switches can switch voice signals.
Message Switching
Message Switching techniques were originally used in data communications. An example would be early "store and forward" paper tape relay systems; E-Mail delivery is another example. On the voice side, Voice Mail delivery over the Internet and the classic "forward voice mail" capability in some voice mail systems are further examples of message switching.
Cell Switching
Cell Switching is similar to packet switching, except that the switching does not necessarily occur on packet boundaries. This is ideal for an integrated environment and is found within Cell-based networks, such as ATM. Cell-switching can handle both digital voice and data signals.
Introduction to ISDN
What is ISDN?
ISDN is an abbreviation for Integrated Services Digital Network. It is the next-generation, digital telephone network that integrates circuit-switched voice and data services over a common access facility. There are two types of ISDN lines. Basic Rate ISDN (BRI) is designed for residential customers and small businesses. Primary Rate ISDN (PRI) is designed for larger businesses.
The Benefits of ISDN
Reduces the cost of network administration
Simplifies wiring
Combines separate voice and data networking requirements
Compatible with BRI/PRI, plus existing analog voice and Switched 56 services
For residential customers, Basic Rate ISDN (BRI) costs about the equivalent of two phone lines. BRI customers can gain high speed Internet access (64 KBPS to 128 KBPS). BRI improves the quality of speech in telephone calls. BRI provides an ideal way to keep in touch through personal videoconferencing. BRI offers improved modem connectivity to non-ISDN systems.
For business customers, ISDN offers cost savings through the integration of voice and data services. PRI provides a great backup solution for leased data lines. PRI offers high-quality video conferencing capabilities. PRI costs about the same as standard "channelized T1" services.
Basic Rate ISDN
Basic Rate ISDN (BRI) consists of three channels. Two channels are used for actual voice or data traffic, with each one operating at a rate of 64 KBPS. These are called "Bearer" channels, or B-channels for short. A third channel is used for call supervision (connecting, disconnecting, etc.). This channel operates at a rate of 16 KBPS and is called the "Delta" channel, or D-channel for short.
Using the S-bus interface of Basic Rate ISDN, it is possible to add multiple BRI devices (up to eight) to the same line.
Primary Rate ISDN
Primary Rate ISDN (PRI) operates at the Plesiochronous Level 1 digital rate. That is to say that it operates at T1 rates in North America and at the E1 rate for International systems.
In North American T1 PRI lines, 23 "Bearer" (B-channels) are available for voice or data traffic. Each B-channel operates at a 64 KBPS rate. There is an additional 64 KBPS channel that is used for call supervision, called the "Delta" channel (D-channel). The D-Channel is usually transported over DS0 #24.
In International E1 PRI lines, 30 "Bearer" (B-channels) are available for use. Again, each B-channel operates at 64 KBPS. A single D-Channel operates at 64 KBPS and is used for call supervision. The D-Channel is usually transported over DS0 #16.
ISDN Interfaces
Overview
The ITU standards that define ISDN also define specific equipment types within the ISDN network, along with specific 2-wire and 4-wire interconnection points, as described below:
The Exchange Termination (ET) and Line Termination (LT) comprise the Central Office interface, called the V-Interface.
The U-Interface is the standard interface in the United States. It terminates the 2-wire 2B1Q-coded line from the Central Office.
The NT-1 is a Network Termination Unit. It converts the 2-wire 2B1Q line code into the 4-wire code used on the S and T interfaces. Most equipment used in North America WILL NOT have an S or T interface connector.
The NT-2 is a Network Termination Unit. It is ISDN capable at both sides of the unit. A good example would be an ISDN PBX or ISDN concentrator.
The S and T interface terms are largely interchangeable. They are both 4-wire interfaces.
A TE1 device is an ISDN-capable Terminal Equipment.
A TE2 device is a non-ISDN capable Terminal Equipment (e.g. standard telephone set, etc.).
The TA is a Terminal Adapter, and is responsible for interfacing non-ISDN equipment to the ISDN line.
BRI Cabling and Pinouts
Basic Rate ISDN lines can use straight ("flat") modular cable, if necessary. However, twisted-pair is most often used.
ISDN U-interface
RJ-45 Pin   Description
---------   -----------
    1       N/C
    2       N/C
    3       N/C
    4       U-interface
    5       U-interface
    6       N/C
    7       Optional -48 VDC Power
    8       Optional -48 VDC Return
The U-Interface is most commonly used in North American Basic Rate ISDN systems. The U-Interface is an 8-conductor, modular, RJ-45 jack. The two center pins (pins 4 and 5) are used for the 2-wire loop. The wiring of pins 4 and 5 is not polarity sensitive (i.e. 4 and 5 may be crossed).
Optional -48 VDC power may be utilized on RJ-45 pins 7 and 8. This can be used to power the NT-1 and TE equipment if sufficient power is available. In most cases, power is not provided by the telephone company.
BRI S-Interface
RJ-45 Pin   Description
---------   -----------
    1       N/C
    2       N/C
    3       Receive +
    4       Transmit +
    5       Transmit -
    6       Receive -
    7       Optional -48 VDC Power
    8       Optional -48 VDC Return
The BRI S-Interface is a 4-wire interface, with separate Transmit and Receive pairs. It can operate in four modes:
1. Point-to-Point Mode allows one logical terminal that may be up to 1 KM from the NT-1
2. Short Passive Bus Mode allows connection of up to 8 terminals in parallel on the S/T bus. The terminals must be within 100 to 200 meters of the NT-1.
3. Extended Passive Bus Mode allows connection to 8 terminals at distances of up to 500 meters from the NT-1.
4. Star Bus Mode allows interconnection of up to 8 terminals to a central NT-1. Each terminal can be up to 1 KM from the NT-1.
o The NT-1 unit has configurable jumpers to apply a 100-ohm termination to the S-Interface signal pairs. Usually, these jumpers will be in. The rules of termination are similar to those of the SCSI bus: the NT-1 will have 100-ohm terminators applied, and the farthest TE1/TA device will also have 100-ohm terminators.
o Optional power to the NT-1 can be applied from pins 7 and 8 of the S-Interface RJ-45 jack. This is usually not used.
o Many NT-1 units provide "phantom-power" (sealing current) between the transmit (4,5) and receive (3,6) leads. In these cases, the Transmit (4,5) leads will be negative with respect to the Receive (3,6) leads. This is often selectable in the NT-1. Whenever possible, it is recommended that this option be disabled. As a general rule-of-thumb, most PC cards, routers, and other data communications equipment do not require power from the S/T interface. ISDN telephones usually do require power from the S/T interface.
o Most NT-1 units will ship with a wall-mount transformer power supply to supply power to the NT-1.
Error detection and Error correction
• Error detection is the ability to detect errors caused by noise or other impairments during transmission from the transmitter to the receiver.
• Error correction has an additional feature that enables identification and correction of the errors.
There are two ways to design the channel code and protocol for an error correcting system.
• Automatic repeat request (ARQ): The transmitter sends the data and also an error detection code, which the receiver uses to check for errors. If it does not find any errors, it sends a message (an ACK, or acknowledgment) back to the transmitter. The transmitter re-transmits any data that was not ACKed.
• Forward error correction (FEC): The transmitter encodes the data with an error-correcting code and sends the coded message. The receiver never sends any messages back to the transmitter. The receiver decodes what it receives into the "most likely" data. The codes are designed so that it would take an "unreasonable" amount of noise to trick the receiver into misinterpreting the data.
Error detection schemes
Several schemes exist to achieve error detection, and are generally quite simple. All error detection codes (which include all error-detection-and-correction codes) transmit more bits than were in the original data. Most codes are "systematic" — the transmitter sends the original data bits, followed by check bits — extra bits (usually referred to as redundancy in the literature) which accompany data bits for the purpose of error detection.
(In a system that uses a "non-systematic" code, such as some raptor codes, data bits are transformed into at least as many code bits, and the transmitter sends only the code bits).
Repetition schemes
Variations on this theme exist. Given a stream of data that is to be sent, the data is broken up into blocks of bits, and in sending, each block is sent some predetermined number of times. For example, if we want to send "1011", we may repeat this block three times.
Suppose we send "1011 1011 1011", and this is received as "1010 1011 1011". As one group is not the same as the other two, we can determine that an error has occurred. This scheme is not very efficient, and can be susceptible to problems if the error occurs in exactly the same place for each group (e.g. "1010 1010 1010" in the example above will be detected as correct in this scheme).
The scheme however is extremely simple, and is in fact used in some transmissions of numbers stations.
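Here is the triple-repetition example from above as a small Python sketch; majority voting per bit position also gives the scheme a crude correction ability:

def encode_repetition(block: str, copies: int = 3) -> list:
    return [block] * copies

def decode_repetition(received: list) -> tuple:
    # Majority vote per bit position; flag an error if the copies disagree.
    error = any(copy != received[0] for copy in received[1:])
    decoded = ""
    for i in range(len(received[0])):
        ones = sum(copy[i] == "1" for copy in received)
        decoded += "1" if ones * 2 > len(received) else "0"
    return decoded, error

print(encode_repetition("1011"))                     # ['1011', '1011', '1011']
print(decode_repetition(["1010", "1011", "1011"]))   # ('1011', True): detected AND corrected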
Parity schemes
The stream of data is broken up into blocks of bits, and the number of 1 bits is counted. Then, a "parity bit" is set (or cleared) if the number of one bits is odd (or even). (This scheme is called even parity; odd parity can also be used.) If the tested blocks overlap, then the parity bits can be used to isolate the error, and even correct it if the error affects a single bit: this is the principle behind the Hamming code.
There is a limitation to parity schemes. A parity bit is only guaranteed to detect an odd number of bit errors (one, three, five, and so on). If an even number of bits (two, four, six and so on) are flipped, the parity bit appears to be correct, even though the data are corrupt.
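A minimal even-parity sketch in Python, demonstrating both the detection of a single-bit error and the blind spot for double-bit errors:

def even_parity_bit(bits: str) -> str:
    # Set the parity bit so the total number of 1 bits is even.
    return "1" if bits.count("1") % 2 else "0"

def parity_ok(word: str) -> bool:
    return word.count("1") % 2 == 0

data = "1011"
sent = data + even_parity_bit(data)     # "10111": four 1 bits in total
print(parity_ok(sent))                  # True  (no errors)
print(parity_ok("00111"))               # False (single-bit flip: detected)
print(parity_ok("01111"))               # True  (double-bit flip: MISSED)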
[edit] Polarity schemes
One less commonly used form of error correction and detection is transmitting a polarity reversed bitstream simultaneously with the bitstream it is meant to correct. This scheme is very weak at detecting bit errors, and marginally useful for byte or word error detection and correction. However, at the physical layer in the OSI model, this scheme can aid in error correction and detection.
Polarity symbol reversal is (probably) the simplest form of Turbo code, but technically not a Turbo code at all.
• Turbo codes DO NOT work at the bit level.
• Turbo codes typically work at the character or symbol level depending on their placement in the OSI model.
• Character here refers to Baudot, ASCII-7, the 8-bit byte or the 16-bit word.
Original transmitted symbol: 1011
• transmit 1011 on carrier wave 1 (CW1)
• transmit 0100 on carrier wave 2 (CW2)
Receiver end:
• check that each bit of CW1 is the opposite polarity of the corresponding bit of CW2
• if any bit of CW1 equals the corresponding bit of CW2, signal a bit error (which triggers a more complex ECC)
This polarity reversal scheme works fairly well at low data rates (below 300 baud) with very redundant data, such as telemetry.
Cyclic redundancy checks
More complex error detection (and correction) methods make use of the properties of finite fields and polynomials over such fields.
The cyclic redundancy check considers a block of data as the coefficients of a polynomial and then divides it by a fixed, predetermined polynomial. The remainder of the division is taken as the redundant data bits, the CRC.
On reception, one can recompute the CRC from the payload bits and compare this with the CRC that was received. A mismatch indicates that an error occurred.
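A bit-level sketch of this divide-and-compare process in Python, using the 8-bit generator polynomial x^8 + x^2 + x + 1 (the CRC-8 used for the ATM HEC; any generator would illustrate the idea):

def crc8(data: bytes, poly: int = 0x107) -> int:
    # Long division of the message bits by the generator, modulo 2;
    # the returned remainder is the CRC.
    rem = 0
    for byte in data:
        rem ^= byte
        for _ in range(8):
            if rem & 0x80:
                rem = ((rem << 1) ^ poly) & 0xFF
            else:
                rem = (rem << 1) & 0xFF
    return rem

msg = b"hello"
crc = crc8(msg)
print(crc8(msg) == crc)        # True:  recomputed CRC matches, no error
print(crc8(b"hellp") == crc)   # False: mismatch indicates an error occurred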
Checksum
A checksum of a message is an arithmetic sum of message code words of a certain word length (for example, byte values), together with their carry value. The sum is negated by means of ones-complement and stored or transferred as an extra code word extending the message.
On the receiver side, a new checksum may be calculated over the extended message. If the new checksum is not 0, an error is detected.
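A small Python sketch of this negate-and-verify convention over 8-bit words (the Internet checksum applies the same idea over 16-bit words):

def ones_complement_sum(words, bits=8):
    mask = (1 << bits) - 1
    total = 0
    for w in words:
        total += w
        total = (total & mask) + (total >> bits)   # fold the carry back in
    return total

def make_checksum(message: bytes) -> int:
    return (~ones_complement_sum(message)) & 0xFF  # ones-complement negation

msg = bytes([0x12, 0x34, 0x56])
cks = make_checksum(msg)
# Receiver recomputes the checksum over the extended message: 0 means no error.
print(make_checksum(msg + bytes([cks])) == 0)      # True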
Hamming distance based checks
If we want to detect d bit errors in an n-bit word, we can map every n-bit word into a bigger (n+d+1)-bit word so that the minimum Hamming distance between valid mappings is d+1. This way, if one receives an (n+d+1)-bit word that doesn't match any word in the mapping, it can successfully be detected as a word received in error. Moreover, d or fewer errors will never transform a valid word into another valid word, because the Hamming distance between valid words is at least d+1; such errors only lead to invalid words, which are detected correctly. Given a stream of m*n bits, we can detect up to d bit errors in every n-bit word using the above method; in fact, we can detect a maximum of m*d errors if every n-bit word is transmitted with at most d errors.
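A toy illustration in Python: the two-codeword code {000, 111} has minimum Hamming distance 3 (so d = 2), and any one or two bit flips of a valid word land on an invalid word and are detected:

def hamming(a: str, b: str) -> int:
    # Number of bit positions in which two equal-length words differ.
    return sum(x != y for x, y in zip(a, b))

valid = {"000", "111"}                  # minimum distance 3 between codewords
print(hamming("000", "111"))            # 3

def error_detected(word: str) -> bool:
    return word not in valid

print(error_detected("010"))    # True:  1 or 2 flips of "000" are detected
print(error_detected("111"))    # False: valid word; 3 flips of "000" slip through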
Error correction
The above methods are sufficient to determine whether some data has been received in error, but often this is not enough. Consider an application such as simplex teletype over radio (SITOR), where a message needs to be received quickly and without error. Merely knowing where the errors occurred does not help: the message is still incomplete. And if the receiver waits for the message to be repeated (the link is simplex), it may have to wait a long time for the repetition to fill the gaps left by the errors.
It would be advantageous if the receiver could somehow determine what the error was and thus correct it. Is this even possible? Yes, consider the NATO phonetic alphabet -- if a sender were to be sending the word "WIKI" with the alphabet by sending "WHISKEY INDIA KILO INDIA" and this was received (with * signifying letters received in error) as "W***KEY I**I* **LO **DI*", it would be possible to correct all the errors here since there is only one word in the NATO phonetic alphabet which starts with "W" and ends in "KEY", and similarly for the other words. This idea is also present in some error correcting codes (ECC).
Automatic repeat request
Automatic Repeat-reQuest (ARQ) is an error control method for data transmission which makes use of error detection codes, acknowledgment and/or negative acknowledgement messages and timeouts to achieve reliable data transmission. An acknowledgment is a message sent by the receiver to the transmitter to indicate that it has correctly received a data frame.
Usually, when the transmitter does not receive the acknowledgment before the timeout occurs (i.e. within a reasonable amount of time after sending the data frame), it retransmits the frame until it is either correctly received or the error persists beyond a predetermined number of retransmissions.
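The control flow of the simplest variant, stop-and-wait, can be sketched in a few lines of Python (the send and wait_for_ack callables here are hypothetical stand-ins for a real channel, not any particular API):

import random

def stop_and_wait(frame, send, wait_for_ack, max_retries=5):
    # Send a frame, wait for its ACK, and retransmit on timeout,
    # giving up after a fixed number of attempts.
    for _ in range(max_retries):
        send(frame)
        if wait_for_ack(timeout=1.0):     # ACK arrived before the timeout
            return True                   # delivered and acknowledged
    return False                          # error persisted; give up

# Quick demo over a simulated channel that loses 30% of frames:
def demo():
    arrived = []
    def send(frame):
        arrived.append(random.random() > 0.3)   # did this copy get through?
    def wait_for_ack(timeout):
        return arrived[-1]                      # ACK iff the frame arrived
    return stop_and_wait(b"frame-1", send, wait_for_ack)

print(demo())                             # almost always True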
A few types of ARQ protocols are Stop-and-wait ARQ, Go-Back-N ARQ and Selective Repeat ARQ.
Hybrid ARQ is a combination of ARQ and forward error correction.
Error-correcting code
An error-correcting code (ECC) is a code in which each data signal conforms to specific rules of construction so that departures from this construction in the received signal can generally be automatically detected and corrected. It is used in computer data storage, for example in dynamic RAM, and in data transmission.
Some codes can correct a certain number of bit errors and only detect further numbers of bit errors. Codes which can correct one error are termed single error correcting (SEC), and those which detect two are termed double error detecting (DED). The simplest error correcting codes can correct single-bit errors and detect double-bit errors. There are codes which can correct and detect more errors than these.
An error-correcting code which corrects all errors of up to n bits correctly is also an error-detecting code which can detect at least all errors of up to 2n bits.
Two main categories are convolutional codes and block codes. Examples of the latter are Hamming code, BCH code, Reed-Solomon code, Reed-Muller code, Binary Golay code, and turbo code.
Shannon's theorem is an important theorem in error correction which describes the maximum attainable efficiency of an error-correcting scheme versus the levels of noise interference expected. In general, these methods put redundant information into the data stream following certain algebraic or geometric relations, so that the decoded stream, if damaged in transmission, can be corrected. The effectiveness of the coding scheme is measured in terms of the code rate, which is the ratio of useful information bits to total transmitted bits, and the coding gain, which is the difference between the SNR levels of the uncoded and coded systems required to reach the same BER levels.
Error-correcting memory
Because soft errors are extremely common in the DRAM of computers used in satellites and space probes, such memory is structured as ECC memory (also called "EDAC protected memory"). Typically every bit of memory is refreshed at least 15 times per second. During this memory refresh, the memory controller reads each word of memory and writes the (corrected) word back. Such memory controllers traditionally use a Hamming code, although some use triple modular redundancy. Even though a single cosmic ray can upset many physically neighboring bits in a DRAM, such memory systems are designed so that neighboring bits belong to different words, so single event upsets (SEUs) cause only a single error in any particular word and can be corrected by a single-bit error correcting code. As long as no more than a single bit in any particular word is hit by an error between refreshes, such a memory system presents the illusion of an error-free memory.
ECC memory provides greater data accuracy and system uptime by protecting against soft errors in computer memory.
Applications
Applications that require low latency (such as telephone conversations) cannot use Automatic Repeat reQuest (ARQ); they must use Forward Error Correction (FEC). By the time an ARQ system discovers an error and retransmits the data, the re-sent data will arrive too late to be any good.
Applications where the transmitter immediately forgets the information as soon as it is sent (such as most television cameras) cannot use ARQ; they must use FEC because when an error occurs, the original data is no longer available. (This is also why FEC is used in data storage systems such as RAID and distributed data store).
Applications that require extremely low error rates (such as digital money transfers) must use ARQ.
The Internet
In a typical TCP/IP stack, error detection is performed at multiple levels:
• Each Ethernet frame carries a CRC-32 checksum. The receiver discards frames if their checksums don't match.
• The IPv4 header contains a header checksum of the contents of the header (excluding the checksum field). Packets with checksums that don't match are discarded.
• The checksum was omitted from the IPv6 header, because most current link layer protocols have error detection.
• UDP has an optional checksum. Packets with wrong checksums are discarded.
• TCP has a checksum of the payload, TCP header (excluding the checksum field) and source and destination addresses of the IP header. Packets found to have incorrect checksums are discarded and eventually get retransmitted when the sender receives a triple duplicate ACK or a timeout occurs.
Deep-space telecommunications
NASA has used many different error correcting codes. For missions between 1969 and 1977 the Mariner spacecraft used a Reed-Muller code. The noise these spacecraft were subject to was well approximated by a "bell-curve" (normal distribution), so the Reed-Muller codes were well suited to the situation.
The Voyager 1 & Voyager 2 spacecraft transmitted color pictures of Jupiter and Saturn in 1979 and 1980.
• Color image transmission required 3 times the amount of data, so the Golay (24,12,8) code was used.
• This Golay code is only 3-error correcting, but it could be transmitted at a much higher data rate.
• Voyager 2 went on to Uranus and Neptune and the code was switched to a concatenated Reed-Solomon code-Convolutional code for its substantially more powerful error correcting capabilities.
• Current DSN error correction is done with dedicated hardware.
• For some NASA deep space craft such as those in the Voyager program, Cassini-Huygens (Saturn), New Horizons (Pluto) and Deep Space 1 -- the use of hardware ECC may not be feasible for the full duration of the mission.
The different kinds of deep space and orbital missions that are conducted suggest that trying to find a "one size fits all" error correction system will be an ongoing problem for some time to come.
• For missions close to the earth the nature of the "noise" is different from that on a spacecraft headed towards the outer planets
• In particular, if a transmitter on a spacecraft far from earth is operating at a low power, the problem of correcting for noise gets larger with distance from the earth
Satellite broadcasting (DVB)
The demand for satellite transponder bandwidth continues to grow, fueled by the desire to deliver television (including new channels and High Definition TV) and IP data. Transponder availability and bandwidth constraints have limited this growth, because transponder capacity is determined by the selected modulation scheme and Forward error correction (FEC) rate.
Overview
• QPSK coupled with traditional Reed Solomon and Viterbi codes have been used for nearly 20 years for the delivery of digital satellite TV.
• Higher order modulation schemes such as 8PSK, 16QAM and 32QAM have enabled the satellite industry to increase transponder efficiency by several orders of magnitude.
• This increase in the information rate in a transponder comes at the expense of an increase in the carrier power to meet the threshold requirement for existing antennas.
• Tests conducted using the latest chipsets demonstrate that the performance achieved by using Turbo Codes may be even lower than the 0.8 dB figure assumed in early designs.
Data storage
Error detection and correction codes are often used to improve the reliability of data storage media.
A "parity track" was present on the first magnetic tape data storage in 1951. The "Optimal Rectangular Code" used in group code recording tapes not only detects but also corrects single-bit errors.
Some file formats, such as ZIP, include a checksum (most often CRC-32) to detect corruption and truncation.
Reed Solomon codes are used in compact discs to correct errors caused by scratches.
Modern hard drives use CRC codes to detect and Reed-Solomon codes to correct minor errors in sector reads, and to recover data from sectors that have "gone bad" and store that data in the spare sectors.
RAID systems use a variety of error correction techniques to correct errors caused when a hard drive completely fails.
Information theory and error detection and correction
Information theory tells us that whatever the probability of error in transmission or storage, it is possible to construct error-correcting codes in which the likelihood of failure is arbitrarily low, although this requires adding increasing amounts of redundant data to the original, which might not be practical when the error probability is very high. Shannon's theorem sets an upper bound to the error correction rate that can be achieved (and thus the level of noise that can be tolerated) using a fixed amount of redundancy, but does not tell us how to construct such an optimal encoder.
Error-correcting codes can be divided into block codes and convolutional codes. Block error-correcting codes, such as Reed-Solomon codes, transform a chunk of bits into a (longer) chunk of bits in such a way that errors up to some threshold in each block can be detected and corrected.
However, in practice errors often occur in bursts rather than at random. This is often compensated for by shuffling (interleaving) the bits in the message after coding. Then any burst of bit-errors is broken up into a set of scattered single-bit errors when the bits of the message are unshuffled (de-interleaved) before being decoded.
List of error-correction, error-detection methods
This list contains methods of error correction (Reed-Solomon, for example, is a method) and practical techniques for error correction (like the check digit, a practical method).
• Berger code
• Chipkill, an application of ECC techniques to volatile system memory.
• Constant-weight code
• Convolutional codes are usually decoded with iterative Viterbi decoding techniques
• Differential space–time codes, related to space–time block codes.
• Dual modular redundancy, subset of N-modular redundancy, related to triple modular redundancy
• Erasure codes are a superset of Fountain codes
• Forward error correction
• Group code
• Golay code, the Binary Golay codes are the most commonly used Golay codes
• Goppa code that is used to create the McEliece cryptosystem
• Hadamard code
• Hagelbarger code
• Hamming code
• Lexicographic code
• Longitudinal redundancy check
• Low-density parity-check code
• LT codes are near optimal rateless erasure correcting codes.
• m of n codes
• Online codes are an example of rateless erasure codes.
• Parity bit
• Raptor codes are high speed (near real time) fountain codes.
• Reed-Solomon error correction
• Reed-Muller code
• Repeat-accumulate code
• Sparse graph code
• Space–time code
• Space–time trellis code
• Tornado codes are optimal Fountain codes
• Triple modular redundancy
• Turbo code
• Viterbi algorithm
• Walsh code used in cellular telephony for its high noise immunity, not just its ECC capabilities
Practical uses of Error Correction methods
• Concatenated error correction codes, the Compact Disc and Voyager Program spacecraft use concatenated error correction technologies
• Check digit, commonly used on UPC barcodes
• Luhn algorithm, the most commonly used base 10 checksum that can perform limited error detection but not error correction
• Luhn mod N algorithm, the above algorithm but implementable in a non base 10 form
• Verhoeff algorithm, a modular based form not related to the Luhn algorithms that can detect most forms of transposition errors in financial cryptographic applications
UART
Transmitting and receiving serial data
Bits have to be moved from one place to another using wires or some other medium. Over many miles, the expense of the wires becomes large. To reduce the expense of long communication links carrying several bits in parallel, the data bits are sent sequentially, one after another, using a UART to convert the transmitted bits between sequential and parallel form at each end of the link. Each UART contains a shift register, which is the fundamental method of conversion between serial and parallel forms.
By convention, teletype-style UARTs send a "start" bit, five to eight data bits, least-significant-bit first, an optional "parity" bit, and then one, one and a half, or two "stop" bits. The start bit is the opposite polarity of the data-line's idle state. The stop bit is the data-line's idle state, and provides a delay before the next character can start. (This is called asynchronous start-stop transmission). In mechanical teletypes, the "stop" bit was often stretched to two bit times to give the mechanism more time to finish printing a character. A stretched "stop" bit also helps resynchronization. The parity bit can either make the number of "one" bits between any start/stop pair odd, or even, or it can be omitted. Odd parity is more reliable because it assures that there will always be at least one data transition, and this permits many UARTs to resynchronize.
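For concreteness, here is the common 8N1 framing (1 start bit, 8 data bits least-significant-first, no parity, 1 stop bit) built up in Python:

def frame_8n1(byte: int) -> list:
    bits = [0]                                   # start bit: space (0)
    bits += [(byte >> i) & 1 for i in range(8)]  # data bits, LSB first
    bits.append(1)                               # stop bit: mark (1), idle level
    return bits

print(frame_8n1(ord("A")))   # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
# 10 bits per character is why an 8N1 line delivers (bit rate / 10) characters/sec.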
The UART usually does not directly generate or receive the external signalling levels (such as voltages on wires) that are used between different equipment. Typically, an interface is used to convert the logic level signals of the UART to the external signalling levels. "Signalling levels" is a very broad term encompassing all the various possible schemes to convey a level from one place to another. Voltage is by far the most common kind of signalling used. Examples of standards for voltage signalling are RS-232, RS-422 and RS-485 from the EIA. Historically, the presence or absence of current (in current loops) was the dominant kind of signalling used. Depending on the limits of the communication channel to which the UART is ultimately connected, communication may be "full duplex" (both send and receive at the same time) or "half duplex" (devices take turns transmitting and receiving). Some signalling schemes do not use electrical wires. Examples of such are optical fiber, infrared, and (wireless) Bluetooth in its Serial Port Profile (SPP). Some signalling schemes use modulation (with or without wires). Examples are modulation of audio signals with phone line modems, RF modulation with data radios, and the DC-LIN for power line communication.
As of 2006, UARTs are commonly used with RS-232 for embedded systems communications. It is useful to communicate between microcontrollers and also with PCs. Many chips provide UART functionality in silicon, and low-cost chips exist to convert logic level signals (such as TTL voltages) to RS-232 level signals (for example, Maxim's MAX232).
Asynchronous receive and transmit
The word "asynchronous" indicates that UARTs recover character timing information from the data stream, using designated "start" and "stop" bits to indicate the framing of each character. In synchronous transmission, the clock data is recovered separately from the data stream and no start/stop bits are used. This improves the efficiency of transmission on suitable channels; more of the bits sent are data. An asynchronous transmission sends nothing over the interconnection when the transmitting device has nothing to send; but a synchronous interface must send "pad" characters to maintain synchronism between the receiver and transmitter. The usual filler is the ASCII "SYN" character. This may be done automatically by the transmitting device.
USART chips have both synchronous and asynchronous modes.
Universal data rates
The UART is a subset of "asynchronous receiver/transmitters" in that UARTs add the ability to receive and transmit serial data using different serial bit rates. (Receive and transmit rates are usually the same in most applications.) For example, better teletypes and computers with early MODEMs might use rates of roughly 110 or 300 bits per second for data telecommunication, while computers might use rates of 9600 to 38400 bits per second internally or locally; one UART would fit these applications universally.
Speeds for UARTs are in bits per second (bit/s or bps), although often incorrectly called the Baud. Standard mechanical teletype rates are 45.5, 110, and 150 bit/s. Computers have used from 110 to 230,400 bit/s. Standard speeds are 110, 300, 1200, 2400, 4800, 9600, 14400, 19200, 28800, 38400, 57600, 76800, 115200, 230400, 460800, 921600, 1382400, 1843200 and 2764800 bit/s.
Serial to Parallel Algorithm
A serial data pulse can only be in one of two states, but there are many names for the two states. When current is flowing (in a current loop), or the line is at logic high, representing a logical one, the pulse is said to be in the "mark" condition. When current is stopped, or the line is at logic low, representing a logical zero, the pulse is said to be in the "space" condition. The line idles in the mark condition; a character code begins when the line drops to the space condition for the start bit. As each bit is sampled, a mark is recorded as a logical one, otherwise a logical zero.
Figure 1 shows this format.
start|<- five to eight data bits ->| stop bit(s)
0 ---- - - - - - - - - - - Space (logic low, low data-wire voltage)
| | | | | | | | | | | |
| S | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | S | S |
| | | | | | | | | | | |
1 - - - - - - - - - - - -------- Mark (logic high, high data-wire voltage)
Figure 1. Asynchronous Code Format.
The right-most bit is always transmitted first. If parity is present,
the parity bit comes after the data bits but before the stop bit(s).
The start bit is always a space (logic low, '0' in Figure 1), the opposite of the idle mark condition. The start bit signals the receiving DTE that a character code is coming. The next five to eight bits, depending on the code set employed, represent the character. In the ASCII code set the eighth data bit may be a parity bit. The next one or two bits are always in the mark (logic high, i.e. '1') condition and are called the stop bit(s). They provide a "rest" interval for the receiving DTE so that it may prepare for the next character, which may follow immediately after the stop bit(s). The rest interval was required by the old mechanical Teletypes, which used a motor-driven camshaft to decode each character. At the end of each character, the motor needed time to strike the character bail (print the character) and reset the camshaft.
There are six basic steps in receiving a serial character code into a parallel register. First, to keep track of time, the receiver employs a clock that "ticks" at 16 times the data rate; a bit interval is thus equal to 16 clock ticks. In this way the receiver can determine the beginning of the start bit and "move over" to the center of each bit time for data sampling. Second, when the idle (mark) line goes into the space state, declare a "looking for start bit" condition and wait one half of the bit interval, or eight clock ticks. Third, sample the line again; if it has not remained in the space condition, consider this to be a spurious voltage change and go back to step one. Fourth, if the line is still in the space state, consider this a valid start bit. Shift the start bit into an eight-bit shift register and wait one bit time, or 16 clock ticks. Fifth, after one bit time, sample the line (the data should have been there for the last eight clock ticks, and should remain for eight more) and shift the sample into the shift register. Sixth, repeat step five seven more times. After the eighth shift, the start bit will "migrate" into a flip-flop indicating a character has been received. Go to step one.
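The same procedure can be sketched in Python (an illustrative model, not production code: the line is represented as a list of 0/1 levels captured at 16x the bit rate, and 8N1 framing is assumed):

def receive_char(samples):
    """samples: line levels (0/1) at 16x the bit rate.
    Returns the received byte, or None if no valid start bit is found."""
    i = 0
    while i < len(samples) and samples[i] == 1:   # wait for the mark -> space edge
        i += 1
    i += 8                                        # half a bit time later...
    if i >= len(samples) or samples[i] != 0:      # ...not still space? a glitch
        return None
    byte = 0
    for bit in range(8):
        i += 16                                   # advance one full bit time
        byte |= samples[i] << bit                 # sample at bit center, LSB first
    return byte

# Feed it an idle line followed by the 8N1 frame for "A" (0x41):
line = [1] * 16
for level in [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]:      # start, 8 data bits, stop
    line += [level] * 16
print(receive_char(line))                         # 65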
Before the transmitter and receiver UARTs will work, they must also agree on the same values of five parameters. First, both sides must agree on the number of bits per character. Second, the speed or Baud of the line must be the same on both sides. Third, both sides must agree to use or not use parity. Fourth, if parity is used, both sides must agree on using odd or even parity. Fifth, the number of stop bits must be agreed upon. Having said all this, most DTEs today employ eight data bits, no parity, and one stop bit. Thus there is a rule-of-thumb that the number of characters per second is equal to the Baud divided by 10 for a typical RS-232 or RS-423 data line.
History
The first UART-like devices were rotating mechanical commutators. These sent 5-bit Baudot codes for mechanical teletypewriters, and replaced Morse code. Later, ASCII used a seven-bit code. When IBM rationalized computers in the early 1960s with 8-bit characters, it became customary to store the ASCII code in 8 bits.
Gordon Bell designed the UART for the PDP series of computers. Western Digital produced the first single-chip UART, the WD1402A, circa 1971; it was an early example of a medium-scale integrated circuit.
An example of an early-1980s UART was the National Semiconductor 8250. In the 1990s, newer UARTs were developed with on-chip buffers, allowing higher transmission speeds without data loss and without requiring such frequent attention from the computer. For example, the popular National Semiconductor 16550 has a 16-byte FIFO, and it spawned many variants, including the 16C550, 16C650, 16C750, and 16C850.
Depending on the manufacturer, different terms are used for devices that perform the UART function. Intel called its 8251 device a "Programmable Communication Interface". The MOS Technology 6551 was known as an "Asynchronous Communications Interface Adapter" (ACIA). The term "Serial Communications Interface" (SCI) was first used at Motorola around 1975 to refer to their start-stop asynchronous serial interface device, which others were calling a UART.
The less-common 5, 6 and 7 bit codes are now sometimes simulated with 8-bit UARTs. The unused high-order bits are set to 1, the value of the stop bit and idle line. This technique cannot send or receive at full speed, but provides some level of compatibility for older equipment.
Some very low-cost home computers and embedded systems dispensed with a UART and used the CPU to sample the state of an input port, or to directly manipulate an output port, for data transmission. These schemes were very CPU-intensive, since the CPU timing was critical, but they avoided the purchase of a costly UART chip. The technique was known as a bit-banged serial port.
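The transmit half of a bit-banged port can be sketched as follows. This is only an outline of the timing discipline: write_pin() is a hypothetical stand-in for the platform's real output-port access, and time.sleep() is far too imprecise for real hardware, where cycle-counted delay loops or timer interrupts were used instead.

import time

BIT_TIME = 1.0 / 9600              # seconds per bit at 9600 baud

def write_pin(level):
    """Hypothetical GPIO write; replace with the platform's real call."""
    pass

def bitbang_send(byte):
    write_pin(0)                   # start bit (space)
    time.sleep(BIT_TIME)
    for bit in range(8):           # eight data bits, LSB first
        write_pin((byte >> bit) & 1)
        time.sleep(BIT_TIME)
    write_pin(1)                   # stop bit (mark); line stays idle
    time.sleep(BIT_TIME)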
Structure
A UART usually contains the following components:
• a clock generator, usually running at a multiple of the bit rate so the receiver can sample in the middle of each bit period
• input and output shift registers
• transmit/receive control
• read/write control logic
• transmit/receive buffers (optional)
• a parallel data bus buffer (optional)
• FIFO buffer memory (optional)
Special Receiver Conditions
Overrun Error
An "overrun error" occurs when the UART cannot process the byte that just came in before the next one arrives. Various UART devices have differing amounts of buffer space to hold received characters. The CPU must service the UART in order to remove characters from the buffer. If the CPU does not service the UART and the buffer becomes full, Overrun Error will occur.
Framing Error
A "Framing Error" occurs when the designated "start" and "stop" bits are not valid. As the "start" bit is used to identify the beginning of an incoming character, it acts as a reference for the remaining bits. If the data line is not in the expected idle state when the "stop" bit is expected, a Framing Error will occur.
Parity Error
A "Parity Error" occurs when the number of "active" bits does not agree with the specified parity configuration of the UART, producing a Parity Error. Because the "parity" bit is optional, this error will not occur if parity has been disabled. Parity error is set when the parity of an incoming data character does not match the expected value.
Break Condition
A "Break Condition" occurs when the receiver input is in at the "break" level for longer than some duration of time, typically, for more than a character time. This is not necessarily an error, but appears to the receiver as a zero byte with a framing error.
Some equipment will deliberately transmit the "break" level for longer than a character time as an out-of-band signal. When signalling rates are mismatched, no meaningful characters can be sent, but a long "break" signal is a useful way to get the attention of a mismatched receiver and make it do something (such as resetting itself to some predefined baud rate). UNIX and UNIX-like systems such as Linux can usually use a long "break" as a request to change the signalling rate; DOS and Microsoft Windows typically cannot.
Telephone network modem history
News wire services in the 1920s used multiplex equipment that met the definition of a modem, but the modem function was incidental to the multiplexing function, so such equipment is not commonly included in the history of modems. George Stibitz connected a New Hampshire teletype to a computer in New York City over phone lines in 1940, but the word "modem" had not yet been coined, so this too is usually left out.
Modems in the United States were part of the SAGE air-defense system in the 1950s, connecting terminals at various airbases, radar sites, and command-and-control centers to the SAGE director centers scattered around the U.S. and Canada. SAGE ran on dedicated communications lines, but the devices at each end were otherwise similar in concept to today's modems. IBM was the primary contractor for both the computers and the modems used in the SAGE system.
A few years later, a chance meeting between the CEO of American Airlines and a regional manager of IBM led to development of a "mini-SAGE" as an automated airline ticketing system. The terminals were at ticketing offices, tied to a central computer that managed availability and scheduling. The system, known as SABRE, is the ancestor of today's Sabre system.
[edit] AT&T monopoly in the United States
For many years, AT&T maintained a monopoly in the United States on the use of its phone lines, allowing only AT&T-supplied devices to be attached to its network. For the growing group of computer users, AT&T introduced two digital sub-sets in 1958. One is the wideband device shown in the picture to the left. The other was a low-speed modem, which ran at 200 baud.
In the summer of 1960, the name Data-Phone was introduced to replace the earlier term digital subset. The 202 Data-Phone was a half-duplex asynchronous service that was marketed extensively in late 1960. In 1962, the 201A and 201B Data-Phones were introduced. They were synchronous modems using two-bit-per-baud phase-shift keying (PSK). The 201A operated half-duplex at 2000 bit/s over normal phone lines, while the 201B provided full duplex 2400 bit/s service on four-wire leased lines, the send and receive channels running on their own set of two wires each.
The famous 103A was also introduced in 1962. It provided full-duplex service at up to 300 baud over normal phone lines. Frequency-shift keying (FSK) was used with the call originator transmitting at 1070 or 1270 Hz and the answering modem transmitting at 2025 or 2225 Hz. The readily available 103A2 gave an important boost to the use of remote low-speed terminals such as the KSR33, the ASR33, and the IBM 2741. AT&T reduced modem costs by introducing the originate-only 113D and the answer-only 113B/C modems.
The Carterfone decision
Before 1968, AT&T maintained a monopoly on what devices could be electrically connected to its phone lines. This led to a market for 103A-compatible modems that were mechanically connected to the phone, through the handset, known as acoustically coupled modems. Particularly common models from the 1970s were the Novation CAT (shown in the image) and the Anderson-Jacobson, spun off from an in-house project at the Lawrence Livermore National Laboratory.
In 1968, the FCC broke AT&T's monopoly on the lines in the landmark Carterfone decision. Now the lines were open to anyone, as long as the attached devices passed a stringent set of AT&T-designed tests. AT&T made these tests complex and expensive, so acoustically coupled modems remained common into the early 1980s.
In December 1972, Vadic introduced the VA3400. This device was remarkable because it provided full-duplex operation at 1200 bit/s over the dial network, using methods similar to those of the 103A in that it used different frequency bands for transmit and receive. In November 1976, AT&T introduced the 212A modem to compete with Vadic. It was similar in design to Vadic's model, but used the lower frequency set for transmitting from the originating modem. It was also possible to use the 212A with a 103A modem at 300 bit/s. According to Vadic, the change in frequency assignments made the 212 intentionally incompatible with acoustic coupling, thereby locking out many potential modem manufacturers.
In 1977, Vadic responded with the VA3467 triple modem, an answer-only modem sold to computer center operators that supported Vadic's 1200-bit/s mode, AT&T's 212A mode, and 103A operation.
The Smartmodem
The next major advance in modems was the Smartmodem, introduced in 1981 by Hayes Communications. The Smartmodem was an otherwise standard 103A 300-bit/s modem attached to a small controller that let the computer send commands to it to operate the phone line. The command set included instructions for picking up and hanging up the phone, dialing numbers, and answering calls. The basic Hayes command set remains the basis for computer control of most modern modems.
Prior to the Hayes Smartmodem, modems almost universally required a two-step process to activate a connection: first, the user manually dialed the remote number on a standard phone handset, and then plugged the handset into an acoustic coupler. Hardware add-ons, known simply as dialers, were used in special circumstances, and generally operated by emulating someone dialing a handset.
With the Smartmodem, the computer could dial the phone directly by sending the modem a command, thus eliminating the need for an associated phone for dialing and the need for an acoustic coupler. The Smartmodem instead plugged directly into the phone line. This greatly simplified setup and operation. Terminal programs that maintained lists of phone numbers and sent the dialing commands became common.
The Smartmodem and its clones also aided the spread of bulletin-board systems (BBSs). Modems had previously been typically either the call-only, acoustically coupled models used on the client side, or the much more expensive, answer-only models used on the server side. The Smartmodem could operate in either mode depending on the commands sent from the computer. There was now a low-cost server-side modem on the market, and the BBSs flourished.
Increasing speeds (V.21, V.22, V.22bis)
Voiceband modems generally remained at 300 and 1200 bit/s (V.21 and V.22) into the mid 1980s, although, over this period, the acoustic coupler disappeared, seemingly overnight, as Smartmodem-compatible modems flooded the market.
A V.22bis 2400-bit/s system similar in concept to the 1200-bit/s Bell 212 signalling was introduced in the U.S., and a slightly different, and incompatible, one in Europe. By the late 1980s, most modems could support all of these standards, and 2400-bit/s operation was becoming common.
Many other standards were also introduced for special purposes, commonly using a high-speed channel for receiving, and a lower-speed channel for sending. One typical example was used in the French Minitel system, in which the user's terminals spent the majority of their time receiving information. The modem in the Minitel terminal thus operated at 1200 bit/s for reception, and 75 bit/s for sending commands back to the servers. Such solutions were useful in many circumstances in which one side would be sending more data than the other. In addition to a number of "medium-speed" standards, like Minitel, four U.S. companies became famous for high-speed versions of the same concept.
Telebit introduced its Trailblazer modem in 1984, which used a large number of low-speed channels to send data one-way at rates up to 18,400 bit/s. A single additional channel in the reverse direction allowed the two modems to communicate how much data was waiting at either end of the link, and the modems could switch which side had the high-speed channels on the fly. The Trailblazer modems also supported a feature that allowed them to "spoof" the UUCP "g" protocol, commonly used on Unix systems to send e-mail, and thereby speed UUCP up by a tremendous amount. Trailblazers thus became extremely common on Unix systems, and maintained their dominance in this market well into the 1990s.
U.S. Robotics (USR) introduced a similar system, known as HST, although this supplied only 9600 bit/s (in early versions at least) and provided for a larger backchannel. Rather than offer spoofing, USR instead created a large market among Fidonet users by offering its modems to BBS sysops at a much lower price, resulting in sales to end users who wanted faster file transfers.
Hayes was forced to compete, and introduced its own 9600-bit/s standard, Express 96 (also known as "Ping-Pong"), which was generally similar to Telebit's PEP. Hayes, however, offered neither protocol spoofing nor sysop discounts, and its high-speed modems remained rare.
Error Correction and Compression
Operation at these speeds pushed the limits of the phone lines, resulting in high error rates. This led to the introduction of error-correction systems built into the modems, made most famous by Microcom's MNP protocols. A string of MNP standards came out in the 1980s, each reducing the protocol overhead: from about 25% of the effective data rate in MNP 1 down to 5% in MNP 4. MNP 5 took this a step further, adding data compression to the system and thereby actually increasing the data rate: generally, the user could expect an MNP modem to transfer at about 1.3 times the normal data rate of the modem. MNP was later "opened" and became popular on a series of 2400-bit/s modems, although it never became universal.
Another common feature of these high-speed modems was the concept of fallback, allowing them to talk to less-capable modems. During the call initiation the modem would play a series of signals into the line and wait for the remote modem to "answer" them. They would start at high speeds and progressively get slower and slower until they heard an answer. Thus, two USR modems would be able to connect at 9600 bit/s, but, when a user with a 2400-bit/s modem called in, the USR would "fall back" to the common 2400-bit/s speed. Without such a system, the operator would be forced to have multiple phone lines for high- and low-speed use.
V.32
Echo cancellation was the next major advance in modem design. Local telephone lines use the same wires to send and receive, while longer distances use separate wires for the two directions. A small amount of the outgoing signal bounces back. This signal can confuse the modem: is the signal it is "hearing" from the remote modem, or its own transmission bouncing back? This was why earlier modems split the signal frequencies into answer and originate; each modem simply didn't listen to its own transmitting frequencies. Even with improvements to the phone system allowing higher speeds, this splitting of available phone signal bandwidth still imposed a half-speed limit on modems.
Echo cancellation got around this problem. Measuring the echo delays and magnitudes allowed the modem to tell if the received signal was from itself or the remote modem, and create an equal and opposite signal to cancel its own. Modems were then able to send at "full speed" in both directions at the same time, leading to the development of the 9600 bit/s V.32 standard.
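The principle can be sketched with a standard least-mean-squares (LMS) adaptive filter. This is a generic textbook technique, not the circuit of any particular modem: the filter learns a model of the echo path from the transmitted samples and subtracts the predicted echo from what is received.

import numpy as np

def lms_echo_cancel(tx, rx, taps=32, mu=0.01):
    """Remove an estimate of tx's echo from rx, sample by sample."""
    w = np.zeros(taps)                  # adaptive model of the echo path
    out = np.zeros(len(rx))
    for n in range(taps, len(rx)):
        x = tx[n - taps:n][::-1]        # most recent transmitted samples
        out[n] = rx[n] - w @ x          # received minus estimated echo
        w += mu * out[n] * x            # LMS weight update
    return out                          # estimate of the remote signal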
Starting in the late 1980s a number of companies started introducing V.32 modems, most of them also using the newly opened MNP standards for error correction and compression. These earlier systems were not very popular due to their price, but by the early 1990s the prices started falling.
The "tipping point" occurred with the introduction of the SupraFax 14400 in 1991. Rockwell had introduced a new chipset supporting not only V.32 and MNP, but the newer 14,400 bit/s V.32bis and the higher-compression V.42bis as well, and even included 9600 bit/s fax capability. Supra, then known primarily for their hard drive systems for the Atari ST, used this chip set to build a low-priced 14,400 bit/s modem which cost the same as a 2400 bit/s modem from a year or two earlier (about 300 USD). The product was a runaway best-seller, and it was months before the company could keep up with demand.
The SupraFax was so successful that a huge number of companies joined the fray, and by the next year 14.4 modems from a wide variety of companies were available. The Rockwell chipset, while not terribly reliable, became extremely common, but Texas Instruments and AT&T Paradyne quickly responded with similar chipsets of their own.
V.32bis was so successful that the older high-speed standards had little to recommend them. USR fought back with a 16,800 bit/s version of HST, but this small increase in performance did little to keep HST interesting. AT&T introduced a one-off 19,200 bit/s "standard" they referred to as V.32ter (also known as V.32 terbo), but this also did little to increase demand, and typically this mode came into use only when two users with AT&T-based modems just happened to call each other. Motorola also introduced another, incompatible, 19.2 standard, but charged very high prices for their modems, which they had previously sold into commercial settings only.
V.34/V.34bis
Any interest in these systems was destroyed during the lengthy introduction of the 28,800 bit/s V.34 standard. While waiting, several companies decided to "jump the gun" and introduced modems they referred to as "V.FAST". In order to guarantee compatibility with V.34 modems once the standard was ratified (which happened in 1994), the manufacturers were forced to use more "flexible" parts, generally a DSP and microcontroller, as opposed to purpose-designed "modem chips".
A good example of this was USR, which changed its modems to use a DSP from Texas Instruments and introduced a top-of-the-line Courier product, the V.everything. As the name implied, the new model supported practically every standard on the market, including all of the HST modes, V.32bis, V.FAST and, later, V.34. Rockwell also introduced a V.FAST chipset in late 1993, which it referred to as V.FC (for "Fast Class"). Rapid commoditization in 1994 forced almost all vendors out of the market; Motorola gave up and left the modem business, and AT&T threw in the towel soon after. Their attempts to introduce their own standards were failures in both the technical and the business sense.
Eventually the V.34 standard was expanded into the 33.6 kbit/s V.34bis standard, which was, and still is, the fastest speed achievable with purely analog technology. Later modems achieved faster rates by using digital encoding (PCM).
V.70
In 1995, the first DSVD (Digital Simultaneous Voice and Data) modems became available to consumers, and the standard was ratified as V.70 by the ITU in 1996.
Two DSVD modems can establish a completely digital link between each other over standard phone lines. Sometimes referred to as "the poor man's ISDN", and employing a similar technology, V.70 compatible modems allow for a maximum speed of 33.6 kbit/s between peers. By using a majority of the bandwidth for data and reserving part for voice transmission, DSVD modems allow users (often while playing video games) to pick up a telephone handset interfaced with the modem, and initiate a call to the other peer.
Advocates of DSVD envisioned whiteboard sharing and other practical applications for the standard; however, with the advent of cheaper 56 kbit/s analog modems intended for Internet connectivity, peer-to-peer data transmission over the PSTN quickly became irrelevant.
Today, Multi-Tech is the only known company to continue to support a V.70 compatible modem. While their device also offers V.92 at 56kbit/s, it remains significantly more expensive than comparable modems without V.70 support.
V.90
With the rapid introduction of all-digital phone systems in the 1990s, it became possible to use much greater bandwidth on the assumption that users would generally be on digital lines, if not immediately then in the near future. Digital lines are based on a standard using 8 bits of data for every voice sample, sampled 8000 times a second, for a total data rate of 64 kbit/s. However, many systems use in-band signaling for command data, "robbing" the least-significant bit of every sixth sample; to be safe, every sample is treated as carrying only 7 usable bits, reducing the real throughput to 56 kbit/s. In 1996, modems came to market that took advantage of the widespread use of digital phone systems at ISPs to provide download speeds of up to 56 kbit/s. Originally there were two competing protocols for achieving such speeds: K56flex, designed and promoted by Rockwell, and X2, designed and promoted by U.S. Robotics. The already widespread use of the Rockwell chipset made K56flex more popular. A standardization effort started around 1996, working towards a single 56k standard to replace K56flex and X2. Originally known as V.pcm (PCM referring to the pulse-code modulation used in digital telephony), it became the V.90 protocol when finalized in 1998.
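The arithmetic behind these rates is simple enough to spell out; this is just the calculation from the paragraph above, written as a worked illustration:

SAMPLES_PER_SECOND = 8000
BITS_PER_SAMPLE = 8

print(SAMPLES_PER_SECOND * BITS_PER_SAMPLE)        # 64000: full DS0 rate
print(SAMPLES_PER_SECOND * (BITS_PER_SAMPLE - 1))  # 56000: one bit per sample
                                                   # reserved against robbed-bit
                                                   # signaling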
There are certain special requirements and restrictions associated with V.90 modems. For users to obtain up to 56k download speeds from their ISP, the telephone line had to be completely digital between the ISP and the user's telephone company central office (CO). From there the signal could be converted from digital to analog, but only at that single point; if there was more than one digital-to-analog conversion anywhere along the line, 56k speeds were impossible. The quality of the user's telephone line could also affect the speed of the 56k connection, with line noise causing slowdowns, sometimes to the point of being only marginally faster than a 33.6 kbit/s connection.
An important restriction of V.90 is that while V.90 modems can obtain download speeds of up to 56 kbit/s, they are limited to 33.6 kbit/s upload speeds. Only in the rarest of circumstances would a user actually see 56 kbit/s, with 48 kbit/s to 50 kbit/s being most likely on a clean line. In addition, the FCC has limited the power output from modems to prevent crosstalk between bundled phone lines; even on a clean line, 56k modems are therefore limited to approximately 53.3 kbit/s to stay within the regulation.
Prior to the adoption of the V.90 protocol in 1998, users were slow to adopt K56flex- and X2-based 56k modems; many simply waited for V.90 to arrive. Some modem manufacturers promised and later offered firmware or driver updates so that users could add V.90 functionality. As ISPs slowly adopted either the K56flex or X2 protocol (or both) on their dial-up lines, users would typically choose an ISP based on which modem type they had. However, when the V.90 specification was finally ratified and V.90 modems started to appear, they were backward compatible with K56flex and X2, allowing ISPs and users to continue to connect to K56flex or X2 dial-in lines even if the ISP did not support V.90 directly.
Following the adoption of V.90, there was an attempt to define a standard for all-digital communications (i.e., where both the ISP and the user had digital connections to the telephone network). It was to be known as V.91, but the process essentially died with the rapid introduction of short-haul high-speed solutions such as ADSL and cable modems, which offer much higher speeds from the user's local machine onto the Internet. With the exception of rural areas, the need for point-to-point modem calls has largely disappeared as a result, as the bandwidth and responsiveness of the Internet have improved greatly. It appears that V.90 will be the last analog modem standard to see widespread use.
V.92
V.92 is the standard that followed V.90. While it provides no increase in download bit rate (56 kbit/s appears to be the maximum for voiceband modems), it has slightly better compression and allows upload speeds of up to 48 kbit/s, provided both the ISP and the caller have V.92-compatible modems. It also adds two features. The first is the ability for users who have call waiting to put their dial-up Internet connection on hold for extended periods while they answer a call. The second is "quick connect": the modem remembers key information about the telephone line being used and applies this saved information to speed up subsequent calls from that line to the ISP.
ISPs have been slow to adopt V.92 due to the high cost of upgrading their equipment and the lack of demand from their customers. With the rise in broadband take-up that has led to declining numbers of dial-up users, some ISPs have decided not to bother ever upgrading to V.92.
Long haul modems
In the 1960s, Bell began to digitize the telephone system, and developed early high-speed radio modems for this purpose. Once digital long-haul networks were in place, they were leased for every other purpose.
Optical fiber manufacturing was mastered in the 1980s, and optical modems were first invented for those early systems. The first systems simply used light-emitting diodes and PIN diodes; faster modulation techniques were quickly adopted for long-haul networks. In the 1990s, optical modems using multiple wavelengths were adopted as well.
Narrowband/Various Encoding Methods
A standard modem of today is what would have been called a "smart modem" in the 1980s. It contains two functional parts: an analog section for generating the signals and operating the phone, and a digital section for setup and control. This functionality is usually incorporated into a single chip, but the conceptual division remains.
In operation, the modem can be in one of two "modes": data mode, in which data is sent to and from the computer over the phone line, and command mode, in which the modem listens to the data from the computer for commands and carries them out. A typical session consists of powering up the modem (often inside the computer itself), which automatically assumes command mode, then sending it the command to dial a number. After the connection is established with the remote modem, the modem automatically goes into data mode and the user can send and receive data. When the user is finished, the escape sequence, "+++" preceded and followed by a pause of about a second, is sent to the modem to return it to command mode, and then the command to hang up the phone is sent. One problem with this method of operation is that it is not really possible for the modem to know whether a string is a command or data; when the modem misinterprets a string, odd things generally happen.
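Such a session can be illustrated with the pyserial library. This is a hedged sketch: it assumes a Hayes-compatible modem on /dev/ttyS0, and the phone number is made up for illustration; the guard-time pauses around "+++" are the detail described above.

import time
import serial                      # pyserial

port = serial.Serial("/dev/ttyS0", 115200, timeout=2)

port.write(b"ATDT5551234\r")       # command mode: dial (number is illustrative)
print(port.readline())             # expect "CONNECT ...", then data mode

# ... exchange data with the remote system here ...

time.sleep(1)                      # guard time before the escape sequence
port.write(b"+++")                 # escape back to command mode
time.sleep(1)                      # guard time after it
port.write(b"ATH\r")               # hang up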
The commands themselves are typically from the Hayes command set, although that term is somewhat misleading. The original Hayes commands were useful only for 300 bit/s operation, and were then extended for the company's 1200 bit/s modems. Hayes was much slower in upgrading to faster speeds, however, leading to a proliferation of command sets in the early 1990s as each of the high-speed vendors introduced its own command styles. Things became considerably more standardized in the second half of the 1990s, when most modems were built from one of a very small number of chipsets, invariably supporting a rapidly converging command set. We call this the Hayes command set even today, although it has three or four times the number of commands in the actual standard.
The 300 bit/s modems used frequency-shift keying to send data. In this system the stream of 1s and 0s in computer data is translated into tones that can be easily sent over the phone lines. In the Bell 103 system, the originating modem sends 0s by playing a 1070 Hz tone and 1s at 1270 Hz, while the answering modem puts its 0s at 2025 Hz and 1s at 2225 Hz. These frequencies were chosen carefully: they are in the range that suffers minimum distortion on the phone system, and they are not harmonics of each other. For the 103F leased-line version, internal strapping selected originate or answer operation. For dial models, the selection was determined by which modem originated the call. Modulation was so slow and simple that some people were able to learn how to whistle short bits of data into the phone with some accuracy.
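Generating the Bell 103 tones is straightforward, as this Python sketch shows. It is a simplified illustration: real modems keep the carrier phase continuous across bit boundaries, while this version restarts the sinusoid for each bit.

import numpy as np

SAMPLE_RATE = 9600                 # samples per second; 32 per bit at 300 baud
BAUD = 300
SPACE, MARK = 1070.0, 1270.0       # originate-side 0 and 1 frequencies

def fsk_modulate(bits):
    samples_per_bit = SAMPLE_RATE // BAUD
    t = np.arange(samples_per_bit) / SAMPLE_RATE
    return np.concatenate(
        [np.sin(2 * np.pi * (MARK if b else SPACE) * t) for b in bits])

waveform = fsk_modulate([1, 0, 1, 1, 0])   # ready to play into the line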
The 1200 bit/s and faster systems used phase-shift keying. In this system a single carrier tone is sent for each direction of the connection, at frequencies similar to those of the 300 bit/s systems, and information is carried by shifts in the carrier's phase. By comparing the phase of the signal in successive symbol intervals, the receiver can recover pairs of bits: for instance, a 90-degree phase shift might represent the digits "1,0", and a 180-degree shift "1,1". In this way each signalling interval carries two bits instead of one; 1200 bit/s modems were, in effect, 600 baud modems with "tricky" signalling.
It was at this point that the difference between baud and bits per second became real. Baud refers to the signalling rate of a system; in a 300 bit/s modem each signal carried one bit, so the data rate and signalling rate were the same. In the 1200 bit/s systems this was no longer true, since the modems actually ran at 600 baud. This distinction led to a series of flame wars on the BBSes of the 1980s.
Increases in speed since then have used increasingly sophisticated communications theory. The Milgo 4500 introduced 8-phase shift keying (8-PSK), which could transmit three bits per signalling instance (baud). The next major advance was introduced by the Codex Corporation in the late 1960s. Here the bits were encoded into a combination of amplitude and phase, known as Quadrature Amplitude Modulation (QAM). Best visualized as a two-dimensional constellation diagram, the bits are mapped onto points on a graph with the x (real) and y (quadrature) coordinates transmitted over a single carrier. This technique became very effective and was incorporated into the international V.29 standard by the CCITT (now the ITU-T). The standard transmitted 4 bits per baud at a rate of 2400 baud, giving an effective bit rate of 9600 bits per second (9.6k). The carrier frequency was 1650 Hz.
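The mapping can be sketched for a 16-point QAM constellation. This is a simplified illustration: four bits choose one of sixteen amplitude/phase points on a 4-by-4 grid. Real standards also Gray-code the levels so that neighboring points differ by one bit, which this sketch omits.

# Two bits select the in-phase (x) level, two the quadrature (y) level.
LEVELS = [-3, -1, 1, 3]

def qam16_map(nibble):
    i = LEVELS[(nibble >> 2) & 0b11]    # real (in-phase) coordinate
    q = LEVELS[nibble & 0b11]           # quadrature coordinate
    return complex(i, q)

symbols = [qam16_map(n) for n in range(16)]
# At 2400 baud with 4 bits per symbol: 2400 * 4 = 9600 bit/s.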
For many years, most engineers considered this rate to be the limit of data communications over telephone networks.
Breaking the 9.6k Barrier
In 1980, Gottfried Ungerboeck of IBM applied powerful channel coding techniques to the search for new ways to increase the speed of modems. His results were astonishing but at first conveyed only to a few colleagues. Finally, in 1982, he agreed to publish what is now a landmark paper in the theory of information coding. By applying parity-check coding to the bits in each symbol and mapping the encoded bits onto a two-dimensional constellation, Ungerboeck showed that it was possible to double the speed with the same error rate. The new technique was called mapping by set partitions (now known as trellis modulation). This new view was an extension of the "penny packing" problem and the related, more general problem of how to pack points into an N-dimensional sphere so that each is far from its neighbors: when bit sequences differ greatly, one faulty bit can still be corrected.
The industry was galvanized into new research and development. More powerful coding techniques were developed, commercial firms rolled out new product lines, and the standards organizations rapidly adopted the new technology. Today the ITU V.34 standard represents the culmination of these joint efforts. It employs the most powerful coding techniques, including channel encoding and shape encoding. From a mere 4 bits per symbol (9.6 kbit/s), the new standards used the equivalent of 6 to 10 bits per symbol, plus very sophisticated algorithms, to create 14.4, 28.8, and 33.6 kbit/s modems. These rates are near the theoretical Shannon limit: the Shannon capacity of a narrowband line is bandwidth × log2(1 + S/N), where S/N is the signal-to-noise ratio. Narrowband phone lines have a bandwidth of roughly 300 to 3100 Hz (about 2800 Hz), so with an S/N of 10,000 (40 dB) the capacity is approximately 37 kbit/s.
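The capacity figure is easy to reproduce; the lines below simply evaluate the formula from the preceding paragraph:

import math

def shannon_capacity(bandwidth_hz, snr_linear):
    return bandwidth_hz * math.log2(1 + snr_linear)

bw = 3100 - 300                           # ~2800 Hz of usable bandwidth
print(shannon_capacity(bw, 10_000))       # ~37,200 bit/s at 40 dB SNR
print(shannon_capacity(bw, 100_000))      # ~46,500 bit/s at 50 dB SNR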
Using Digital Lines and PCM
In the late 1990s, Rockwell and U.S. Robotics introduced new technology based upon the digital transmission used in modern telephony networks. The standard digital transmission in modern networks is 64 kbit/s, but some networks use part of that bandwidth for remote-office signaling (e.g., to indicate that the phone has been hung up), limiting the effective rate of a DS0 channel to 56 kbit/s. This new technology was adopted into the ITU V.90 standard and is common in modern computers.
The 56 kbit/s rate is only possible from the central office to the user site (downlink), and in the United States, government regulation of transmit power limits the effective maximum to 53.3 kbit/s. The uplink (from the user to the central office) still uses V.34 technology at 33.6 kbit/s. Later, in V.92, the PCM technique was applied to increase the upload speed to a maximum of 48 kbit/s. If it ever becomes possible to approach the full DS0 rate of 64 kbit/s, it will likely owe more to improvements in the underlying phone system (an increasing signal-to-noise ratio) than to the technology of the modems themselves.
Using Compression to Exceed 56k
Software is as important to the operation of the modem today as the hardware. Even with the improvements in the performance of the phone system, modems still lose a considerable amount of data due to noise on the line. The MNP standards were originally created to automatically fix these errors, and later expanded to compress the data at the same time. Today's V.42 and V.42bis fill these roles in the vast majority of modems, and although later MNP standards were released, they are not common.
With such systems it is possible for the modem to transmit data faster than its basic rate would imply. For instance, a 56 kbit/s connection with V.42bis can transmit up to 53.3 × 3 ≈ 160 kbit/s. In reality the data rate rarely exceeds 2:1 compression. One problem is that the compression ratio varies over time, due to noise on the line or to the transfer of already-compressed files (ZIP archives, JPEG images, MP3 audio, MPEG video). At one moment the modem might be sending already-compressed files at 53 kbit/s, at another ordinary files at 110 kbit/s, and pure text at 160 kbit/s, or any value in between.
In such situations a small amount of memory in the modem, a buffer, is used to hold the data while it is being compressed and sent across the phone line, but to prevent the buffer from overflowing it is sometimes necessary to tell the computer to pause the data stream. This is accomplished through hardware flow control, which uses extra pins on the modem-computer connection. The computer is then set to supply the modem at some higher rate, such as 230 kbit/s, and the modem tells the computer when to start or stop sending data.
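With pyserial, for example, hardware flow control is a single option on the port; this sketch assumes a modem on /dev/ttyS0:

import serial                      # pyserial

# With rtscts=True the driver pauses transmission whenever the modem
# deasserts CTS, so the modem's compression buffer cannot overflow.
port = serial.Serial("/dev/ttyS0", 230400, rtscts=True)
port.write(b"example payload " * 1000)    # safe even if the buffer fills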
Winmodem
A Winmodem or softmodem is a stripped-down modem for Windows that replaces tasks traditionally handled in hardware with software. In this case the modem is a simple digital signal processor designed to create sounds, or voltage variations, on the telephone line. Computers of the turn of the century often included a very simple card slot, the Communications and Networking Riser (CNR) slot, to lower the cost of connecting such a modem; the CNR slot carries pins for sound, power, and basic signaling in place of the more expensive PCI slot normally used. However, the already low cost and almost universal applicability of PCI winmodems (unlike CNR units, which were restricted to machines that had such a slot) meant the CNR slot was little used. Winmodems are cheaper than traditional modems, since they have fewer hardware components. One downside is that the software generating the modem tones is not simple, and the performance of the computer as a whole often suffers while it runs; for online gaming this can be a real concern. Another problem is lack of flexibility, owing to the strong tie to the underlying operating system: a given Winmodem might not be supported by other operating systems (such as Linux), because its manufacturer may neither support the other operating system nor provide enough technical data for an equivalent driver to be written. A Winmodem might not even work, or work well, with a later version of Microsoft Windows if its driver turns out to be incompatible with that version of the operating system.
Apple's GeoPort modems from the second half of the 1990s were similar, and are generally regarded as having been a bad move. Although a clever idea in theory, enabling the creation of more-powerful telephony applications, in practice the only programs created were simple answering-machine and fax software, hardly more advanced than their physical-world counterparts, and certainly more error-prone and cumbersome. The software was finicky and ate up significant processor time, and no longer functions in current operating system versions.
Almost all modern modems also do double duty as a fax machine. Digital faxes, introduced in the 1980s, are simply a particular image format sent over a high-speed (9600/1200 bit/s) modem. Software running on the host computer can convert any image into fax format, which can then be sent using the modem. Such software was at one time an add-on, but it has since become largely universal.
Bonding Modems
A bonding modem, also known as an inverse-multiplexing modem, takes two or more phone lines and combines them to allow dial-up speeds of 2x or greater. For example, a dual-56k bonding modem allows speeds of up to 112k using two phone lines. With most bonding modems, the download speed of each individual file is limited to the maximum speed of a single channel, so a 56k bonding modem will download each file at 56k only. The advantage of a bonding modem is that it can spread the downloading of multiple files (such as those making up a website) over its multiple channels, making the download of websites faster. Likewise, using a download accelerator can allow large files to be downloaded faster by spreading the download of different segments of the file over several channels.
For bonding modems to be used in multi-channel mode with one's ISP, the ISP must support multiple simultaneous connections from one user. Bonding modems first appeared in the mid-1990s. Due to lack of support by many ISPs and the introduction of affordable broadband internet connections, bonding modems have never been very popular among home users.
Modern voiceband modems (ITU-T V.92 standard) closely approach the Shannon capacity of the PSTN telephone channels they use. They are plug-and-play fax/data/voice modems.
Radio modems
Direct broadcast satellite, WiFi, and mobile phones all use modems to communicate, as do most other wireless services today. Modern telecommunications and data networks also make extensive use of radio modems where long distance data links are required. Such systems are an important part of the PSTN, and are also in common use for high-speed computer network links to outlying areas where fibre is not economical.
Even where a cable is installed, it is often possible to get better performance, or to simplify other parts of the system, by using radio frequencies and modulation techniques over the cable. Coaxial cable has a very large bandwidth, but signal attenuation becomes a major problem at high data rates if a baseband digital signal is used. By using a modem, a much larger amount of digital data can be transmitted through a single wire. Digital cable television and cable Internet services use radio-frequency modems to provide for the increasing bandwidth needs of modern households. Using a modem also allows for frequency-division multiple access, making full-duplex digital communication with many users possible over a single wire.
Wireless modems come in a variety of types, bandwidths, and speeds, and are often referred to as transparent or smart. They transmit information modulated onto a carrier frequency, which allows many wireless communication links to work simultaneously on different frequencies.
Transparent modems operate in a manner similar to their phone-line modem cousins. Typically they are half duplex, meaning that they cannot send and receive data at the same time. Transparent modems are usually polled in a round-robin manner to collect small amounts of data from scattered locations that lack easy access to wired infrastructure, and are most commonly used by utility companies for data collection.
Smart modems come with a media access controller inside, which prevents random data from colliding and resends data that is not received correctly. Smart modems typically require more bandwidth than transparent modems, and typically achieve higher data rates. The IEEE 802.11 standard defines a short-range modulation scheme that is used on a large scale throughout the world.
WiFi and WiMax
Wireless data modems are used in the WiFi and WiMax standards, operating at microwave frequencies.
WiFi is principally used in laptops for Internet connections, via wireless access points, and for wireless application protocol (WAP) services.
Mobile modems & routers
External modems for mobile phone networks (GPRS and UMTS) are also known as datacards and cellular routers. A datacard is a PC card with the phone hardware included, whereas a cellular router may or may not have an external datacard; most do, one exception being the WAAV CM3 mobile broadband cellular router.
Nowadays there are also USB modems with an integrated SIM card holder (e.g., the Mobidata and Vodafone Mobile Connect USB modems); that is, only a USB port and a modem are needed to connect to the Internet.
Broadband
ADSL modems, a more recent development, are not limited to the telephone's "voiceband" audio frequencies. Some ADSL modems use discrete multitone (DMT) modulation, a form of coded orthogonal frequency-division multiplexing.
Cable modems use a range of frequencies originally intended to carry RF television channels. Multiple cable modems attached to a single cable can use the same frequency band, using a low-level media access protocol to allow them to work together within the same channel. Typically, 'up' and 'down' signals are kept separate using frequency division multiple access.
New types of broadband modems are beginning to appear, such as two-way satellite and powerline modems.
Broadband modems should still be classed as modems, since they use complex waveforms to carry digital data. They are more advanced devices than traditional dial-up modems as they are capable of modulating/demodulating hundreds of channels simultaneously.
Many broadband modems include the functions of a router (with Ethernet and WiFi ports) and other features such as DHCP, NAT and firewall features.
When broadband technology was introduced, networking and routers were unfamiliar to consumers. However, many people knew what a modem was as most internet access was through dialup. Due to this familiarity, companies started selling broadband modems using the familiar term "modem" rather than vaguer ones like "adapter" or "transceiver".
Many broadband modems must be configured properly before a separate router can be used with them; this configuration is known as bridge mode.
Deep Space Telecommunications
Many modern modems have their origin in deep space telecommunications systems of the 1960s.
Differences between deep-space telecom modems and landline modems:
• digital modulation formats with high Doppler immunity are typically used
• waveform complexity tends to be low, typically binary phase-shift keying
• error correction varies from mission to mission, but is typically much stronger than in most landline modems
Voice modem
Voice modems are regular modems that are capable of recording or playing audio over the telephone line. They are used for telephony applications. See Voice modem command set for more details on voice modems.