Saturday 5 March 2011

Transmission Media

Transmission media are the physical pathways that connect computers, other devices, and people on a network—the highways and byways that comprise the information superhighway. Each transmission medium requires specialized network hardware that has to be compatible with that medium. You have probably heard terms such as Layer 1, Layer 2, and so on. These refer to the OSI reference model, which defines network hardware and services in terms of the functions they perform. (The OSI reference model is discussed in detail in Chapter 5, "Data Communications Basics.") Transmission media operate at Layer 1 of the OSI model: They encompass the physical entity and describe the types of highways on which voice and data can travel.

It would be convenient to construct a network of only one medium. But that is impractical for anything but an extremely small network. In general, networks use combinations of media types. There are three main categories of media types:
  • Copper cable—Types of cable include unshielded twisted-pair (UTP), shielded twisted-pair (STP), and coaxial cable. Copper-based cables are inexpensive and easy to work with compared to fiber-optic cables, but as you'll learn when we get into the specifics, a major disadvantage of cable is that it offers a rather limited spectrum that cannot handle the advanced applications of the future, such as teleimmersion and virtual reality.
  • Wireless—Wireless media include radio frequencies, microwave, satellite, and infrared. Deployment of wireless media is faster and less costly than deployment of cable, particularly where there is little or no existing infrastructure (e.g., Africa, Asia-Pacific, Latin America, eastern and central Europe). Wireless is also useful where environmental circumstances make it impossible or cost-prohibitive to use cable (e.g., in the Amazon, in the Empty Quarter in Saudi Arabia, on oil rigs).
    There are a few disadvantages associated with wireless, however. Historically, wireless solutions support much lower data rates than do wired solutions, although with new developments in wireless broadband, that is becoming less of an issue (see Part IV, "Wireless Communications"). Wireless is also greatly affected by external impairments, such as the impact of adverse weather, so reliability can be difficult to guarantee. However, new developments in laser-based communications—such as virtual fiber—can improve this situation. (Virtual fiber is discussed in Chapter 15, "WMANs, WLANs, and WPANs.") Of course, one of the biggest concerns with wireless is security: Data must be secured in order to ensure privacy.
  • Fiber optics—Fiber offers enormous bandwidth, immunity to many types of interference and noise, and improved security. Therefore, fiber provides very clear communications and a relatively noise-free environment. The downside of fiber is that it is costly to purchase and deploy because it requires specialized equipment and techniques.
This chapter focuses on the five traditional transmission media formats: twisted-pair copper used for analog voice telephony, coaxial cable, microwave and satellite in the context of traditional carrier and enterprise applications, and fiber optics. (Contemporary transmission solutions are discussed in subsequent chapters, including Chapter 11, "Optical Networking," and Chapter 16, "Emerging Wireless Applications.") Table 2.1 provides a quick comparison of some of the important characteristics of these five media types. Note that recent developments in broadband alternatives, including twisted-pair options such as DSL and wireless broadband, constitute a new categorization of media.

Table 2.1 Transmission Media Characteristics

Media Type                                  Bandwidth    Performance: Typical Error Rate
Twisted-pair for analog voice applications  1MHz         Poor to fair (10^-5)
Coaxial cable                               1GHz         Good (10^-7 to 10^-9)
Microwave                                   100GHz       Good (10^-9)
Satellite                                   100GHz       Good (10^-9)
Fiber                                       75THz        Great (10^-11 to 10^-13)
The frequency spectrum in which a medium operates directly relates to the bit rate that can be obtained with that medium. You can see in Table 2.1 that traditional twisted-pair affords the lowest bandwidth (i.e., the difference between the highest and lowest frequencies supported), a maximum of 1MHz, whereas fiber optics affords the greatest bandwidth, some 75THz.
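To make the relationship between bandwidth and bit rate concrete, the Shannon-Hartley theorem gives the theoretical capacity of a channel as C = B log2(1 + S/N). The short Python sketch below is purely illustrative; the 30 dB signal-to-noise ratio is an assumed figure for the comparison, not a value taken from Table 2.1:

    import math

    def shannon_capacity_bps(bandwidth_hz, snr_db):
        """Upper bound on error-free bit rate for a channel
        (Shannon-Hartley): C = B * log2(1 + S/N)."""
        snr_linear = 10 ** (snr_db / 10)  # convert dB to a power ratio
        return bandwidth_hz * math.log2(1 + snr_linear)

    # Assumed 30 dB SNR, purely for illustration.
    for name, bw in [("Twisted-pair (1MHz)", 1e6), ("Fiber (75THz)", 75e12)]:
        print(f"{name}: ~{shannon_capacity_bps(bw, 30):.3e} bps")

Even with identical noise conditions, the 75THz medium comes out roughly 75 million times ahead, which is why the spectrum a medium supports matters so much.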
Another important characteristic is a medium's susceptibility to noise and the subsequent error rate. Again, twisted-pair suffers from many impairments. Coax and fiber have fewer impairments than twisted-pair because of how the cable is constructed, and fiber suffers the least because it is not affected by electrical interference. The error rate of wireless depends on the prevailing conditions, especially weather and the presence of obstacles, such as foliage and buildings.
Yet another characteristic you need to evaluate is the distance required between repeaters. This is a major cost issue for those constructing and operating networks. In the case of twisted-pair deployed as an analog telephone channel, the distance between amplifiers is roughly 1.1 miles (1.8 km). When twisted-pair is used in digital mode, the repeater spacing drops to about 1,800 feet (550 m). With twisted-pair, a great many network elements must be installed and subsequently maintained over their lifetime, and they can be potential sources of trouble in the network. Coax offers about a 25% increase in the distance between repeaters over twisted-pair. With microwave and satellite, the distance between repeaters depends on the frequency bands in which you're operating and the orbits in which the satellites travel. In the area of fiber, new innovations appear every three to four months, and, as discussed later in this chapter, some new developments promise distances as great as 4,000 miles (6,400 km) between repeaters or amplifiers in the network.
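Because every in-line element must be purchased, installed, and maintained, repeater spacing translates directly into cost. A rough, illustrative Python calculation using the spacing figures quoted above (the 100-mile route length is an arbitrary example, not from the text):

    # Approximate number of in-line elements needed for a 100-mile route,
    # using the repeater/amplifier spacings quoted above.
    route_miles = 100  # arbitrary example route
    spacings_miles = {
        "Twisted-pair, analog (amplifiers)": 1.1,
        "Twisted-pair, digital (repeaters)": 1800 / 5280,  # 1,800 ft in miles
        "Fiber, newest systems": 4000.0,
    }
    for medium, spacing in spacings_miles.items():
        count = int(route_miles // spacing)
        print(f"{medium}: about {count} in-line elements")

The digital twisted-pair case needs nearly 300 repeaters over that route, while the newest fiber systems would need none at all.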
Security is another important characteristic. There is no such thing as complete security, and no transmission medium in and of itself can provide security. But using encryption and authentication helps ensure security. Also, different media types have different characteristics that enable rapid intrusion as well as characteristics that enable better detection of intrusion. For example, with fiber, an optical time domain reflectometer (OTDR) can be used to detect the position of splices that could be the result of unwanted intrusion. (Some techniques allow you to tap into a fiber cable without splices, but they are extremely costly and largely available only to government security agencies.)
Finally, you need to consider three types of costs associated with the media types: acquisition cost (e.g., the costs of the cable per foot [meter], of the transceiver and laser diode, and of the microwave tower), installation and maintenance costs (e.g., the costs of parts as a result of wear and tear and environmental conditions), and internal premises costs for enterprises (e.g., the costs of moves, adds, and changes, and of relocating workers as they change office spaces).
The following sections examine these five media types—twisted-pair, coaxial cable, microwave, satellite, and fiber optics—in detail.

Twisted-Pair

The historical foundation of the public switched telephone network (PSTN) lies in twisted-pair, and even today, most people who have access to networks access them through a local loop built on twisted-pair. Although twisted-pair has contributed a great deal to the evolution of communications, advanced applications on the horizon require larger amounts of bandwidth than twisted-pair can deliver, so its role in the network is diminishing.

Topology

The physical topology of a network refers to the configuration of cables, computers, and other peripherals. Physical topology should not be confused with logical topology, which is the method used to pass information between workstations. Logical topology is discussed in the Protocols section.

Types of Physical Topologies:-

This section covers the physical topologies used in networks and other related topics, in five parts:
  1. Linear Bus
  2. Star
  3. Tree (Expanded Star)
  4. Considerations When Choosing a Topology
  5. Summary Chart.

1. Linear Bus:-

A linear bus topology consists of a main run of cable with a terminator at each end. All nodes (file server, workstations, and peripherals) are connected to the linear cable.

Advantages of a Linear Bus Topology:-


  • Easy to connect a computer or peripheral to a linear bus.

  • Requires less cable length than a star topology.


Disadvantages of a Linear Bus Topology:-

    • Entire network shuts down if there is a break in the main cable.
    • Terminators are required at both ends of the backbone cable.
    • Difficult to identify the problem if the entire network shuts down.
    • Not meant to be used as a stand-alone solution in a large building.

    2. Star:-

    A star topology is designed with each node (file server, workstations, and peripherals) connected directly to a central network hub, switch, or concentrator.
    Data on a star network passes through the hub, switch, or concentrator before continuing to its destination. The hub, switch, or concentrator manages and controls all functions of the network. It also acts as a repeater for the data flow. This configuration is common with twisted pair cable; however, it can also be used with coaxial cable or fiber optic cable.

    Advantages of a Star Topology:-

    • Easy to install and wire.
    • No disruptions to the network when connecting or removing devices.
    • Easy to detect faults and to remove parts.

    Disadvantages of a Star Topology:-

    • Requires more cable length than a linear topology.
    • If the hub, switch, or concentrator fails, nodes attached are disabled.
    • More expensive than linear bus topologies because of the cost of the hubs, etc.

    3. Tree or Expanded Star:-

    A tree topology combines characteristics of linear bus and star topologies. It consists of groups of star-configured workstations connected to a linear bus backbone cable. Tree topologies allow for the expansion of an existing network, and enable schools to configure a network to meet their needs.

    Advantages of a Tree Topology:-

    • Point-to-point wiring for individual segments.
    • Supported by several hardware and software vendors.

    Disadvantages of a Tree Topology:-

    • Overall length of each segment is limited by the type of cabling used.
    • If the backbone line breaks, the entire segment goes down.
    • More difficult to configure and wire than other topologies.

    5-4-3 Rule:-

    A consideration in setting up a tree topology using Ethernet protocol is the 5-4-3 rule. One aspect of the Ethernet protocol requires that a signal sent out on the network cable reach every part of the network within a specified length of time. Each concentrator or repeater that a signal goes through adds a small amount of time. This leads to the rule that between any two nodes on the network there can only be a maximum of 5 segments, connected through 4 repeaters/concentrators. In addition, only 3 of the segments may be populated (trunk) segments if they are made of coaxial cable. A populated segment is one that has one or more nodes attached to it. For example, a network in which the two furthest nodes are separated by 4 segments and 3 repeaters/concentrators adheres to the 5-4-3 rule.
    This rule does not apply to other network protocols or Ethernet networks where all fiber optic cabling or a combination of a fiber backbone with UTP cabling is used. If there is a combination of fiber optic backbone and UTP cabling, the rule is simply translated to a 7-6-5 rule.
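    The arithmetic of the rule is simple enough to check in code. The Python sketch below is illustrative only; the (cable_type, populated) path representation is invented for this example:

        def check_5_4_3(segments):
            """Check a worst-case path against the Ethernet 5-4-3 rule.
            `segments` is a list of (cable_type, populated) tuples describing
            the segments between the two furthest nodes (a made-up format)."""
            num_segments = len(segments)
            num_repeaters = num_segments - 1  # repeaters sit between segments
            populated_coax = sum(1 for cable, populated in segments
                                 if cable == "coax" and populated)
            return (num_segments <= 5 and num_repeaters <= 4
                    and populated_coax <= 3)

        # The example from the text: 4 segments and 3 repeaters between
        # the two furthest nodes, 3 of the coax segments populated.
        path = [("coax", True), ("coax", False), ("coax", True), ("coax", True)]
        print(check_5_4_3(path))  # True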

    Considerations When Choosing a Topology:-

    • Money. A linear bus network may be the least expensive way to install a network; you do not have to purchase concentrators.
    • Length of cable needed. The linear bus network uses shorter lengths of cable.
    • Future growth. With a star topology, expanding a network is easily done by adding another concentrator.
    • Cable type. The most common cable in schools is unshielded twisted pair, which is most often used with star topologies.
    Summary Chart:-

    Physical Topology    Common Cable                    Common Protocol
    Linear Bus           Twisted pair, coaxial, fiber    Ethernet
    Star                 Twisted pair, fiber             Ethernet
    Tree                 Twisted pair, coaxial, fiber    Ethernet

    NIC Cards - (Network Interface Card)

    An NIC (network interface card) is an expansion card that provides connectivity between a PC and a network such as a LAN.

    Network Interface Cards are also referred to as Ethernet adapters, network adapters, LAN cards, LAN adapters, or NICs (Network Interface Controllers).

    Internal network interface cards (NICs) can either be built into the system mainboard or plugged into an expansion slot inside the device.

    One specification is the transfer rate, which is specified in Mbps (Megabits per second) or Gbps (Gigabits per second). Most modern network interface cards support up to 100Mbps, while the more expensive Gigabit Ethernet cards support up to 1000Mbps (1 Gbps).

    The Ethernet family of technologies includes:

    • 10BASE5 (also known as ThickNet)
      • The original Ethernet standard which used a single coaxial cable to transfer up to 10Mbit/sec. The 5 in the name refers to the maximum segment length of 500 metres.
    • 10BASE2 (also known as ThinNet)
      • This standard used a thinner coaxial cable than its 10BASE5 counterpart and was very common at one time. It could transfer 10Mbit/sec but had a shorter segment length than 10BASE5: although the 2 in its name suggests a 200 metre segment length, it was actually 185 metres. Each machine connected to the coaxial cable by means of a T-adaptor, and the ends of the cables required a terminating device.
    • 10BASE-T
      • This standard was the first to use twisted pair cabling, hence the T in the standard's name. It provided 10Mbit/sec over two twisted pairs and used either a hub or a switch to network the devices, similar to the configurations in use today.
    • 100BASE-T (Fast Ethernet)
      • This describes up to 3 different standards, namely, 100BASE-TX, 100BASE-T4 and 100BASE-T2. It provides up to 100Mbit/sec over twisted pair cabling, with each standard using a different category of cable. 100BASE-TX, the most dominant standard in use today, uses two pairs of a Category-5 cable. 100BASE-T4 uses all 4 pairs of a Category-3 cable, and is limited to half-duplex mode. 100BASE-T2, although it never had any devices manufactured to support it, utilised two pairs of Category-3 cable and provided full-duplex support.
    • Gigabit Ethernet
      • 1000BASE-T (IEEE 802.3ab) - Provides 1000Mbit/sec over twisted pair Category 5, or Category 5e (recommended) copper cables.
      • 1000BASE-SX - Provides 1000Mbit/sec over short-range multi-mode fiber cables.
      • 1000BASE-LX - Provides 1000Mbit/sec over long-range single-mode fiber cables.
      • 1000BASE-CX - Provides 1000Mbit/sec over copper cables but limited to 25 metres (now obsolete).
    • 10 Gigabit Ethernet (10GE / 10GbE / 10 GigE)
      • This standard provides 10Gbit/sec data transfer using single-mode fibre (long haul), multi-mode fibre (up to 300 metres), copper backplane (up to 1 metre), or copper twisted pair (up to 100 metres).
    • 40 Gigabit Ethernet (40GbE)
      • Standardized, together with 100 Gigabit Ethernet, as IEEE 802.3ba in June 2010.
    • 100 Gigabit Ethernet (100GbE)
      • Also covered by IEEE 802.3ba, ratified in June 2010.

    Hardware Ports Information

    Switches and hubs do the same basic job: connecting the systems on a LAN and mediating the data that passes between them.

    What is the difference between a switch and an Ethernet hub?

    Although hubs and switches both glue the PCs in a network together, a switch is more expensive and a network built with switches is generally considered faster than one built with hubs.

    When a hub receives a packet (chunk) of data (a frame in Ethernet lingo) at one of its ports from a PC on the network, it transmits (repeats) the packet to all of its ports and, thus, to all of the other PCs on the network. If two or more PCs on the network try to send packets at the same time, a collision is said to occur. When that happens, all of the PCs have to go through a routine to resolve the conflict. The process is prescribed in the Ethernet Carrier Sense Multiple Access with Collision Detection (CSMA/CD) protocol. Each Ethernet adapter has both a receiver and a transmitter. If the adapters didn't have to listen with their receivers for collisions, they would be able to send data at the same time they are receiving it (full duplex). Because they have to operate at half duplex (data flows one way at a time) and a hub retransmits data from one PC to all of the PCs, the maximum bandwidth is 100 Mbps, and that bandwidth is shared by all of the PCs connected to the hub. The result is that when a person using a computer on a hub downloads a large file or group of files from another computer, the network becomes congested. In a 10 Mbps 10BASE-T network the effect is to slow the network to nearly a crawl. The effect on a small, 100 Mbps (million bits per second), 5-port network is not as significant.

    Two computers can be connected directly together in an Ethernet with a crossover cable. A crossover cable doesn't have a collision problem. It hardwires the Ethernet transmitter on one computer to the receiver on the other. Most 100BASE-TX Ethernet adapters can detect when listening for collisions is not required, using a process known as auto-negotiation, and will operate in full-duplex mode when it is permitted. The result is that a crossover cable doesn't have delays caused by collisions; data can be sent in both directions simultaneously; the maximum available bandwidth is 200 Mbps, 100 Mbps each way; and there are no other PCs with which the bandwidth must be shared.

    An Ethernet switch automatically divides the network into multiple segments, acts as a high-speed, selective bridge between the segments, and supports simultaneous connections of multiple pairs of computers which don't compete with other pairs of computers for network bandwidth.  It accomplishes this by maintaining a table of each destination address and its port.  When the switch receives a packet, it reads the destination address from the header information in the packet, establishes a temporary connection between the source and destination ports, sends the packet on its way, and then terminates the connection.

    Picture a switch as making multiple temporary crossover cable connections between pairs of computers (the cables are actually straight-thru cables; the crossover function is done inside the switch). High-speed electronics in the switch automatically connect the end of one cable (source port) from a sending computer to the end of another cable (destination port) going to the receiving computer on a per-packet basis. Multiple connections like this can occur simultaneously. It's as simple as that. And like a crossover cable between two PCs, PCs on an Ethernet switch do not share the transmission media, do not experience collisions or have to listen for them, can operate in a full-duplex mode, have bandwidth as high as 200 Mbps (100 Mbps each way), and do not share this bandwidth with other PCs on the switch. In short, a switch is "more better."
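    The behavioral difference between the two devices can be modeled in a few lines of Python. This is a deliberately simplified sketch, not real switch firmware; frames are reduced to bare source/destination MAC pairs:

        class Hub:
            def forward(self, in_port, frame, num_ports):
                # A hub repeats the frame to every port except the one it
                # arrived on, regardless of the destination.
                return [p for p in range(num_ports) if p != in_port]

        class LearningSwitch:
            def __init__(self):
                self.mac_table = {}  # MAC address -> port

            def forward(self, in_port, frame, num_ports):
                src, dst = frame["src"], frame["dst"]
                self.mac_table[src] = in_port      # learn where the sender is
                if dst in self.mac_table:          # known destination: one port
                    return [self.mac_table[dst]]
                # Unknown destination: flood, just like a hub would.
                return [p for p in range(num_ports) if p != in_port]

        sw = LearningSwitch()
        print(sw.forward(0, {"src": "AA", "dst": "BB"}, 4))  # floods: [1, 2, 3]
        print(sw.forward(1, {"src": "BB", "dst": "AA"}, 4))  # learned: [0]

    Once the switch has seen traffic from both machines, each subsequent frame goes out exactly one port, which is why switched segments don't contend for shared bandwidth.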

    Questions & Answers

    1) What are the three main types of LAN architecture? What are their primary characteristics?
    Ans:- The three network architectures are bus, ring, and hub. There are others, but these three describe the vast majority of all LANs.
    A bus network is a length of cable that has a connector for each device directly attached to it. Both ends of the network cable are terminated. A ring network has a central control unit called a Media Access Unit to which all devices are attached by cables. A hub network has a backplane with connectors leading through another cable to the devices.
    2) What are the seven OSI layers and their responsibilities?
    Ans:- The OSI layers (from the bottom up) are as follows:
    Physical: Transmits data
    Data Link: Corrects transmission errors
    Network: Provides the physical routing information
    Transport: Verifies that data is correctly transmitted
    Session: Synchronizes data exchange between upper and lower layers
    Presentation: Converts network data to application-specific formats
    Application: End-user interface
    3) What is the difference between segmentation and reassembly, and concatenation and separation?
    Ans:- Segmentation is the breaking apart of a large N-service data unit (N-SDU) into several smaller N-protocol data units (N-PDUs), whereas reassembly is the reverse.
    Concatenation is the combination of several N-PDUs from the next higher layer into one SDU. Separation is the reverse.
    4) Define multiplexing and demultiplexing. How are they useful?
    Ans:- Multiplexing is when several connections are supported by a single connection. According to the formal definition, this applies to layers (so that three presentation service connections could be multiplexed into a single session connection). However, it is a term generally used for all kinds of connections, such as putting four modem calls down a single modem line. Demultiplexing is the reverse of multiplexing, in which one connection is split into several connections.
    Multiplexing is a key to supporting many connections at once with limited resources. A typical example is a remote office with twenty terminals, each of which is connected to the main office by a telephone line. Instead of requiring twenty lines, they can all be multiplexed into three or four. The amount of multiplexing possible depends on the maximum capacity of each physical line.
    5) How many protocol headers are added by the time an OSI-based e-mail application (in the application layer) has sent a message to the physical layer for transmission?
    Ans:- Seven, one for each OSI layer. More protocol headers can be added by the actual physical network system. As a general rule, each layer adds its own protocol information.

    Protocols

    Diplomats follow rules when they conduct business between nations, which you see referred to in the media as protocol. Diplomatic protocol requires that you don't insult your hosts and that you do respect local customs (even if that means you have to eat some unappetizing dinners!). Most embassies and commissions have specialists in protocol, whose function is to ensure that everything proceeds smoothly when communications are taking place. The protocol is a set of rules that must be followed in order to "play the game," as career diplomats are fond of saying. Without the protocols, one side of the conversation might not really understand what the other is saying.
    Similarly, computer protocols define the manner in which communications take place. If one computer is sending information to another and they both follow the protocol properly, the message gets through, regardless of what types of machines they are and what operating systems they run (the basis for open systems). As long as the machines have software that can manage the protocol, communications are possible. Essentially, a computer protocol is a set of rules that coordinates the exchange of information.
    Protocols have developed from very simple processes ("I'll send you one character, you send it back, and I'll make sure the two match") to elaborate, complex mechanisms that cover all possible problems and transfer conditions. A task such as sending a message from one coast to another can be very complex when you consider the manner in which it moves. A single protocol to cover all aspects of the transfer would be too large, unwieldy, and overly specialized. Therefore, several protocols have been developed, each handling a specific task.
    Combining several protocols, each with their own dedicated purposes, would be a nightmare if the interactions between the protocols were not clearly defined. The concept of a layered structure was developed to help keep each protocol in its place and to define the manner of interaction between each protocol (essentially, a protocol for communications between protocols!).
    As you saw earlier, the ISO has developed a layered protocol system called OSI. OSI defines a protocol as "a set of rules and formats (semantic and syntactic), which determines the communication behavior of N-entities in the performance of N-functions." You might remember that N represents a layer, and an entity is a service component of a layer.
    When machines communicate, the rules are formally defined and account for possible interruptions or faults in the flow of information, especially when the flow is connectionless (no formal connection between the two machines exists). In such a system, the ability to properly route and verify each packet of data (datagram) is vitally important. As discussed earlier, the data sent between layers is called a service data unit (SDU), so OSI defines the analogous data between two machines as a protocol data unit (PDU).
    The flow of information is controlled by a set of actions that define the state machine for the protocol. OSI defines these actions as protocol control information (PCI).

    Breaking Data Apart



    It is necessary to introduce a few more terms commonly used in OSI and TCP/IP, but luckily they are readily understood because of their real-world connotations. These terms are necessary because data doesn't usually exist in manageable chunks. The data might have to be broken down into smaller sections, or several small sections can be combined into a large section for more efficient transfer. The basic terms are as follows:
    Segmentation is the process of breaking an N-service data unit (N-SDU) into several N-protocol data units (N-PDUs).
    Reassembly is the process of combining several N-PDUs into an N-SDU (the reverse of segmentation).
    Blocking is the combination of several SDUs (which might be from different services) into a larger PDU within the layer in which the SDUs originated.
    Unblocking is the breaking up of a PDU into several SDUs in the same layer.
    Concatenation is the process of one layer combining several N-PDUs from the next higher layer into one SDU (like blocking except occurring across a layer boundary).
    Separation is the reverse of concatenation, so that a layer breaks a single SDU into several PDUs for the next layer higher (like unblocking except across a layer boundary).
    Finally, here is one last set of definitions that deal with connections:
    Multiplexing is when several connections are supported by a single connection in the next lower layer (so three presentation service connections could be multiplexed into a single session connection).
    Demultiplexing is the reverse of multiplexing, in which one connection is split into several connections for the layer above it.
    Splitting is when a single connection is supported by several connections in the layer below (so the data link layer might have three connections to support one network layer connection).
    Recombining is the reverse of splitting, so that several connections are combined into a single one for the layer above.
    Multiplexing and splitting (and their reverses, demultiplexing and recombining) are different in the manner in which the lines are split. With multiplexing, several connections combine into one in the layer below. With splitting, however, one connection can be split into several in the layer below. As you might expect, each has its importance within TCP and OSI.
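    The first pair of terms maps directly onto code. A minimal Python sketch of segmentation and reassembly (the 4-byte PDU size is chosen arbitrarily for the demo, and the sequencing headers a real protocol would add are omitted):

        def segment(sdu: bytes, pdu_size: int) -> list[bytes]:
            """Segmentation: break one N-SDU into several N-PDUs."""
            return [sdu[i:i + pdu_size] for i in range(0, len(sdu), pdu_size)]

        def reassemble(pdus: list[bytes]) -> bytes:
            """Reassembly: recombine the N-PDUs into the original N-SDU."""
            return b"".join(pdus)

        sdu = b"HELLO, NETWORK"
        pdus = segment(sdu, 4)          # 4-byte PDUs, arbitrary for the demo
        print(pdus)                     # [b'HELL', b'O, N', b'ETWO', b'RK']
        assert reassemble(pdus) == sdu  # reassembly is the exact reverse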

    Protocol Headers



    Protocol control information is information about the datagram to which it is attached. This information is usually assembled into a block that is attached to the front of the data it accompanies and is called a header or protocol header. Protocol headers are used for transferring information between layers as well as between machines. As mentioned earlier, the protocol headers are developed according to rules laid down in the ISO's ASN.1 document set.
    When a protocol header is passed to the layer beneath, the datagram including the layer's header is treated as the entire datagram for that receiving layer, which adds its own protocol header to the front. Thus, if a datagram started at the application layer, by the time it reached the physical layer, it would have seven sets of protocol headers on it. These layer protocol headers are used when moving back up the layer structure; they are stripped off as the datagram moves up. 
    Think of the process of adding each layer's protocol header to user data as the layers of an onion. The inside is the data that is to be sent. As it passes through each layer of the OSI model, another layer of onion skin is added. When it has finished moving through the layers, several protocol headers enclose the data. When the datagram is passed back up the layers (probably on another machine), each layer peels off the protocol header that corresponds to that layer. When it reaches the destination layer, only the data is left.
    This process makes sense, because each layer of the OSI model requires different information from the datagram. By using a dedicated protocol header for each layer of the datagram, it is a relatively simple task to remove the protocol header, decode its instructions, and pass the rest of the message on. The alternative would be to have a single large header that contained all the information, but this would take longer to process. The exact contents of the protocol header are not important right now, but I examine them later when looking at the TCP protocol.
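    The onion layering can be expressed in a few lines of Python. The bracketed text headers below are invented purely for illustration; real protocol headers are binary structures defined per layer:

        LAYERS = ["application", "presentation", "session", "transport",
                  "network", "data link", "physical"]

        def encapsulate(data: str) -> str:
            # Moving down the stack: each layer prepends its own header.
            for layer in LAYERS:
                data = f"[{layer}-hdr]{data}"
            return data

        def decapsulate(datagram: str) -> str:
            # Moving up the stack: each layer peels off its own header.
            for layer in reversed(LAYERS):
                datagram = datagram.removeprefix(f"[{layer}-hdr]")
            return datagram

        wire = encapsulate("mail message")
        print(wire)               # seven nested headers around the data
        print(decapsulate(wire))  # "mail message"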
    As usual, OSI has a formal description for all this, which states that the N-user data to be transferred is prepended with N-protocol control information (N-PCI) to form an N-protocol data unit (N-PDU). The N-PDUs are passed across an N-service access point (N-SAP) as one of a set of service parameters comprising an N-service data unit (N-SDU). The service parameters comprising the N-SDU are called N-service user data (N-SUD), which is prepended to the (N–1)PCI to form another (N–1)PDU.
    For every service in a layer, there is a protocol for it to communicate to the layer below it (remember that applications communicate through the layer below, not directly). The protocol exchanges for each service are defined by the system, and to a lesser extent by the application developer, who should be following the rules of the system.
    Protocols and headers might sound a little complex or overly complicated for the task that must be accomplished, but considering the original goals of the OSI model, it is generally acknowledged that this is the best way to go. (Many a sarcastic comment has been made about OSI and TCP claiming that the header information is much more important than the data contents. In some ways this is true, because without the header the data would never get to its destination.)

    Summary



    Today's text has thrown a lot of terminology at you, most of which you will see frequently in the following chapters. In most cases, a gentle reminder of the definition accompanies the first occurrence of the term. To understand the relationships between the different terms, though, you might have to refer back to today's material.
    You now have the basic knowledge to relate TCP/IP to the OSI's layered model, which will help you understand what TCP/IP does (and how it goes about doing it). The next chapter looks at the history of TCP/IP and the growth of the Internet.

    Standards

    People don't question the need for rules in a board game. If you didn't have rules, each player could be happily playing as it suits them, regardless of whether their play was consistent with that of other players. The existence of rules ensures that each player plays the game in the same way, which might not be as much fun as a free-for-all. However, when a fight over a player's actions arises, the written rules clearly indicate who is right. The rules are a set of standards by which a game is played.
    Standards prevent a situation arising where two seemingly compatible systems really are not. For example, 10 years ago when CP/M was the dominant operating system, the 5.25-inch floppy was used by most systems. But the floppy from a Kaypro II couldn't be read by an Osborne 1 because the tracks were laid out in a different manner. A utility program could convert between the two, but that extra step was a major annoyance for machine users.
    When the IBM PC became the platform of choice, the 5.25-inch format used by the IBM PC was adopted by other companies to ensure disk compatibility. The IBM format became a de facto standard, one adopted because of market pressures and customer demand.

    Setting Standards



    Creating a standard in today's world is not a simple matter. Several organizations are dedicated to developing the standards in a complete, unambiguous manner. The most important of these is the International Organization for Standardization, or ISO (often called the International Standardization Organization to fit their acronym, although this is incorrect). ISO consists of standards organizations from many countries who try to agree on international criteria. The American National Standards Institute (ANSI), British Standards Institute (BSI), Deutsches Institut für Normung (DIN), and Association Française de Normalisation (AFNOR) are all member groups. The ISO developed the Open Systems Interconnection (OSI) standard that is discussed throughout this book.
    Each nation's standards organization can create a standard for that country, of course. The goal of ISO, however, is to agree on worldwide standards. Otherwise, incompatibilities could exist that wouldn't allow one country's system to be used in another. (An example of this is with television signals: the US relies on NTSC, whereas Europe uses PAL—systems that are incompatible with each other.)
    Curiously, the language used for most international standards is English, even though the majority of participants in a standards committee are not from English-speaking countries. This can cause quite a bit of confusion, especially because most standards are worded awkwardly to begin with.
    The reason most standards involve awkward language is that to describe something unambiguously can be very difficult, sometimes necessitating the creation of new terms that the standard defines. Not only must the concepts be clearly defined, but the absolute behavior is necessary too. With most things that standards apply to, this means using numbers and physical terms to provide a concrete definition. Defining a 2x4 piece of lumber necessitates the use of a measurement of some sort, and similarly defining computer terms requires mathematics.
    Simply defining a method of communications, such as TCP/IP, would be fairly straightforward if it weren't for the complication of defining it for open systems. The use of an open system adds another difficulty because all aspects of the standard must be machine-independent. Imagine trying to define a 2x4 without using a measurement you are familiar with, such as inches, or if inches are adopted, it would be difficult to define inches in an unambiguous way (which indeed is what happens, because most units of length are defined with respect to the wavelength of a particular kind of coherent light).
    Computers communicate through bits of data, but those bits can represent characters, numbers, or something else. Numbers could be integers, fractions, or octal representations. Again, you must define the units. You can see that the complications mount, one on top of the other.
    To help define a standard, an abstract approach is usually used. In the case of OSI, the meaning (called the semantics) of the data transferred (the abstract syntax) is first dealt with, and the exact representation of the data in the machine (the concrete syntax) and the means by which it is transferred (transfer syntax) are handled separately. The separation of the abstract lets the data be represented as an entity, without concern for what it really means. It's a little like treating your car as a unit instead of an engine, transmission, steering wheel, and so on. The abstraction of the details to a simpler whole makes it easier to convey information. ("My car is broken" is abstract, whereas "the power steering fluid has all leaked out" is concrete.)
    To describe systems abstractly, it is necessary to have a language that meets the purpose. Most standards bodies have developed such a system. The most commonly used is ISO's Abstract Syntax Notation One, frequently shortened to ASN.1. It is suited especially for describing open systems networking. Thus, it's not surprising to find it used extensively in the OSI and TCP descriptions. Indeed, ASN.1 was developed concurrently with the OSI standards when it became necessary to describe upper-layer functions.
    The primary concept of ASN.1 is that all types of data, regardless of type, size, origin, or purpose, can be represented by an object that is independent of the hardware, operating system software, or application. The ASN.1 system defines the contents of a datagram protocol header—the chunk of information at the beginning of an object that describes the contents to the system. (Headers are discussed in more detail in the section titled "Protocol Headers" earlier in this post.)
    Part of ASN.1 describes the language used to describe objects and data types (such as a data description language in database terminology). Another part defines the basic encoding rules that deal with moving the data objects between systems. ASN.1 defines data types that are used in the construction of data packets (datagrams). It provides for both structured and unstructured data types, with a list of 28 supported types.
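    To give a feel for encoding rules of this kind, here is a toy tag-length-value (TLV) encoder in Python. It is only in the spirit of ASN.1's basic encoding rules; real BER supports multi-byte tags, long-form lengths, and constructed types, none of which are handled here:

        def tlv_encode(tag: int, value: bytes) -> bytes:
            """Toy tag-length-value encoding in the spirit of ASN.1 BER.
            Real BER is far richer; this handles only short forms."""
            assert len(value) < 128, "long-form lengths are out of scope here"
            return bytes([tag, len(value)]) + value

        def tlv_decode(data: bytes):
            tag, length = data[0], data[1]
            return tag, data[2:2 + length]

        # Encode a string under tag 0x13 (PrintableString in real ASN.1).
        encoded = tlv_encode(0x13, b"open systems")
        print(encoded.hex())
        print(tlv_decode(encoded))  # (19, b'open systems')

    Because the tag and length travel with every value, a receiver can parse the data without knowing the sender's hardware, byte order, or application, which is exactly the machine-independence the standard is after.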

    Internet Standards



    In 1980, the Defense Advanced Research Projects Agency (DARPA) formed a group to develop a set of standards for the Internet. The group, called the Internet Configuration Control Board (ICCB), was reorganized into the Internet Activities Board (IAB) in 1983, whose task was to design, engineer, and manage the Internet.
    In 1986, the IAB turned over the task of developing the Internet standards to the Internet Engineering Task Force (IETF), and the long-term research was assigned to the Internet Research Task Force (IRTF). The IAB retained final authorization over anything proposed by the two task forces.
    The last step in this saga was the formation of the Internet Society in 1992, when the IAB was renamed the Internet Architecture Board. This group is still responsible for existing and future standards, reporting to the board of the Internet Society.
    After all that, what happened during the shuffling? Almost from the beginning, the Internet was defined as "a loosely organized international collaboration of autonomous, interconnected networks," which supported host-to-host communications "through voluntary adherence to open protocols and procedures" defined in "The Internet Standards Process" (RFC 1310). That definition is still used today.
    The IETF continues to work on refining the standards used for communications over the Internet through a number of working groups, each one dedicated to a specific aspect of the overall Internet protocol suite. There are working groups dedicated to network management, security, user services, routing, and many more things. It is interesting that the IETF's groups are considerably more flexible and efficient than those of, say, the ISO, whose working groups can take years to agree on a standard. In many cases, the IETF's groups can form, create a recommendation, and disband within a year or so. This helps continuously refine the Internet standards to reflect changing hardware and software capabilities.
    Creating a new Internet standard (which happened with TCP/IP) follows a well-defined process. It begins with a Request for Comments (RFC). This is usually a document containing a specific proposal, sometimes new and sometimes a modification of an existing standard. RFCs are widely distributed, both on the network itself and to interested parties as printed documents. Important RFCs and instructions for retrieving them are included in the appendixes at the end of this book.
    The RFC is usually discussed for a while on the network itself, where anyone can express their opinion, as well as in formal IETF working group meetings. After a suitable amount of revision and continued discussion, an Internet draft is created and distributed. This draft is close to final form, providing a consolidation of all the comments the RFC generated.
    The next step is usually a proposed standard, which remains as such for at least six months. During this time, the Internet Society requires at least two independent and interoperable implementations to be written and tested. Any problems arising from the actual tests can then be addressed. (In practice, it is usual for many implementations to be written and given a thorough testing.)
    After that testing and refinement process is completed, a draft standard is written, which remains for at least four months, during which time many more implementations are developed and tested. The last step—after many months—is the adoption of the standard, at which point it is implemented by all sites that require it.