Networking Essay

Example #1 – Computer Networking
A network is a group of two or more computer systems sharing services and interacting in some manner. In most cases, this interaction is accomplished through a shared communication link, with the shared component being data. Put simply, a network is a collection of machines that have been linked both physically and through software components to ease communication and the sharing of information.
To make communications between two or more computers work, several things need to be in place. First, some type of physical connection mechanism has to exist between the computers involved. Normally, this mechanism is a wire or cable of some kind, or a transceiver attached to or built into your computer that can both transmit and receive information. The idea of computer networking is new to some people and almost always seen as a highly technical and rapidly evolving process.
Every day, computer professionals are called upon by their employers to evaluate, judge, and implement the technologies necessary for the rapid communication of dissimilar groups in order to enhance productivity or lessen complexity within the organization’s processes. Most see the task as a formidable one, and many feel they are not qualified or fully prepared to drive the creation of a Local Area Network (LAN) or Wide Area Network (WAN). The type of network you can create is often determined by the network operating system you use.
Like a regular operating system for your PC, a network operating system coordinates how all the individual software applications on a network work and how the network interacts with the hardware attached to it. Sharing data is made much easier when a network is involved. People are more productive because several people can enter data at the same time and can also evaluate and process the shared data.
The effective use of networks can turn a company into an agile, powerful, and creative organization, giving it a long-term competitive advantage. Networks can be used to share hardware, programs, and databases across the organization. They can transmit and receive information to improve organizational effectiveness and efficiency. They enable geographically separated workgroups to share documents and opinions, which fosters teamwork, innovative ideas, and new business strategies (Stair, Reynolds 269).
Getting computers connected and speaking the same language may be somewhat interesting for some people, but it’s really just a necessary evil to get to the good stuff. In order to take advantage of your network, you need to configure network services, such as file, print, and application services. A server in a network is dedicated to performing specific tasks in support of other computers on the network. In a server-based network, not all servers are alike.
Networks are about sharing three things: files, resources, and programs. File servers offer services that allow network users to share files. File services include storing, retrieving, and moving data. File and print servers do not do processing for client computers (Lammle 7). Print servers manage and control a single printer or a group of printers on a network.
The print server controls the queue or spooler. Clients send print jobs to the print server, and the print server uses the spooler to hold the job until the printer is ready. Application servers allow client PCs to access and use extra computing power and expensive software applications that reside on a shared computer. Application servers offload work from the client by running programs for the client and sending the results back to the client.
For example, when a client asks a Microsoft SQL Server server to find a record, SQL Server does all the processing to find the answer, and then sends the results back to the client. File, print, and application services are the main services that servers provide. Although you can dedicate a server to a particular service, such as having a computer that serves only as a print server, you do not need a different server for each type of service. One server can function as a file, print, and application server.
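To make the offloading idea concrete, here is a minimal Python sketch (not from the original essay; the record set and class names are hypothetical) contrasting an application server, which runs the query itself, with a file server, which simply hands the whole data set over for the client to process:

```python
# a "record store" standing in for a shared database held on the server
RECORDS = [{"id": i, "name": f"customer-{i}"} for i in range(100_000)]

class ApplicationServer:
    """The server runs the query itself and returns only the matching rows."""
    def find(self, name: str) -> list[dict]:
        return [row for row in RECORDS if row["name"] == name]

class FileServer:
    """A file server just hands the whole data set back; the client must filter it."""
    def fetch_all(self) -> list[dict]:
        return RECORDS

app_server = ApplicationServer()
result = app_server.find("customer-31415")        # only the answer crosses the network
everything = FileServer().fetch_all()              # the client does the work (and the I/O)
matches = [row for row in everything if row["name"] == "customer-31415"]
assert result == matches
```

Only the matching rows leave the application server, while the file server ships everything, which is one way to see why application servers need fast processors and file servers need large disks.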
To compare the three, the file and print servers offer a storage location for clients. They benefit greatly from large hard drives. Although Random Access Memory (RAM) is important, the processor is not as important. An application server, on the other hand, requires a fast processor to run the application and get the results to the client. When to network and when not to network is the question. The rapid growth in the number of networked computers over the last decade or so has been dramatic.
One factor in this growth is the number of Internet host computers, which is now in excess of six million. This acceleration in growth rate is because when two networks are connected, both are expanded and enhanced. Connecting thousands of LANs made the combined resources of the Internet so vast that it eventually became unrealistic for network planners to attempt to rival it; better to connect to it, take advantage of it, and at the same time, contribute to it.
The only time you would not use a network is when you do not need any type of shared resource that requires a connection, whether through a LAN, a modem, or some other link between your computer and that resource. The growth of networking may not continue indefinitely, but by the time it begins to slow, it is likely that networking involving Internet access will be as commonplace as cable television. When deciding which type of network to implement, several factors must be considered: number of computers, cost, security requirements, and administrative requirements.
PC networks generally fall into two categories: peer-to-peer networks, also called workgroups, and server-based networks, also called domains. 1. Peer-to-Peer Networking is the simplest form. In a peer-to-peer network, each workstation acts as both a client and a server. There is no central repository for information and no central server to maintain. Data and resources are distributed throughout the network, and each user is responsible for sharing data and resources connected to their system.
Advantages of peer-to-peer networking: While peer-to-peer networks may not always be the best choice, they do have their place and advantages. Small, inexpensive networks can easily be set up using peer-to-peer systems. The peer-to-peer network model works well for small office networks. Once your network has reached about ten clients, it can become too hard to maintain. All that is needed to connect several individual systems and create a peer-to-peer network are network adapters, cable or other transmission media, and the operating system.
Disadvantages of peer-to-peer networking: The general rule is to stop using peer-to-peer networking once your total number of clients reaches about ten. If you have more than ten clients, before long you would have people with different revisions of documents on different client computers, and keeping the network organized would become a problem. If the network had a central server, you would only need to get information from one source.
Training is also difficult when you have a large number of clients. If you use peer-to-peer networking, your users need to be trained on how to share resources (14). Security in peer-to-peer networking becomes difficult to maintain. Users need to know how to secure their own resources. Because there is no central administration, it is the users’ responsibility to ensure that only authorized users can access their data. Most peer-to-peer security consists of a single password for each resource; this is known as share-level security.
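As a rough illustration of share-level security, the following Python sketch (with hypothetical share names and passwords) grants access to anyone who knows a resource's password, with no notion of per-user accounts:

```python
# each shared resource carries its own password; there are no per-user accounts
shares = {
    r"\\KITCHEN-PC\reports": "lemonade",
    r"\\FRONTDESK\printer": "tulip42",
}

def open_share(share_name: str, supplied_password: str) -> bool:
    """Share-level security: access is granted to anyone who knows the password."""
    return shares.get(share_name) == supplied_password

assert open_share(r"\\KITCHEN-PC\reports", "lemonade")
assert not open_share(r"\\FRONTDESK\printer", "wrong-guess")
```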
Peer-to-peer networking works in small environments. If you grow beyond approximately 10 machines, the administrative overhead of establishing the shares, coupled with the lack of tight security, creates a nightmare. 2. Server-Based Networks – In a server-based network, you have one computer, usually larger than the clients, which is dedicated to handing out files and/or information to clients. The server controls the data, as well as printers and other resources the clients need to access.
The server is not only a faster computer with a better processor, but it also requires much more storage space to contain all the data that needs to be shared with the clients. Having these tasks handled by the server allows the clients to be less powerful because they only request resources (16). Since the server is dedicated to handing out files and/or information, it cannot be used as a workstation. Its purpose is strictly to provide services to other computers, not to request services.
Advantages of server-based networks: If your network has more than ten to fifteen clients, you should really consider a server-based network. With a server-based network, you only need to have your clients connect to one or a few servers to get the resources they need. Security is also much easier to manage in a server-based network. Since you only need to create and maintain accounts on the server instead of every workstation, you can assign rights to resources easily.
Access to resources can be granted to user accounts. Since the server on the network acts as the central repository for almost all your information, you only need to perform backups to the server (19). This type of network can also be quite cost-efficient. With the server storing almost all of the information on your network, you do not need large hard drives on the client computer. Disadvantages of the server-based network: The two main disadvantages are the requirement of a server and a dedicated administrator.
Servers can be expensive when compared to a normal workstation, but they also usually have features to help them handle client requests better. With a server-based network’s centralized administration, you can add all users at one location, control login scripts and backups, and so on. With centralized authentication, you can identify a user to your entire network based on his login name and password, not based on each share he attempts to access.
Each type of network has a physical topology, which is the wiring, and a logical topology, which is the path the data follows. A network’s topology is the physical layout of computers, cables, printers, and other network-related equipment. The topology is not a map of the network. It is a theoretical construct that graphically conveys the shape and structure of the Local Area Network (LAN). A logical topology describes the possible connections between pairs of networked endpoints that can communicate.
This is useful in describing which endpoints can communicate with which other endpoints, and whether those pairs capable of communicating have a direct physical connection to each other. Topological variation can be an important way to optimize network performance for each of the various functional areas of a LAN.
There are four types of topologies: 1. Star topology requires a hub, which is a central piece of equipment that connects separate computers or network segments together. In a basic star topology, each computer has its own cable length that connects to the central hub. The benefits of star topology include the following: A. A star topology is more fault-tolerant than other topologies because a cable break does not bring down the entire segment (Donald 201). B. It is easy to reconfigure the network or add nodes because each node connects to the central hub independent of other nodes.
C. It is easy to isolate cable failures because each node connects independently to the central hub (201). The drawbacks of star topology include the following: A. If the central hub fails, the network becomes unavailable (201). B. This topology uses more network cable than other network models (201). 2. Ring topologies allow network cable to run from one network interface card to the next with no free ends, which means the network cable makes a complete, closed circle.
Ring topologies offer the advantage of equal access to the network media through token passing. The main disadvantage of a ring topology is the same as a bus: a single node’s failure can disrupt the entire network (Moncur 36). 3. Bus topology is the simplest topology and requires the least amount of cable to implement. Bus topologies are created by simply running a single cable length and connecting computers to it with T-shaped connectors that have three ends.
The chief disadvantage of a bus topology is that a break at any point in the bus will bring the network down (36). 4. Mesh topology provides fault tolerance through redundant links. In this system, each node is connected to every other node with separate cables. The main advantage of this system is a high degree of reliability. The disadvantage is that mesh topologies require a large amount of cable, making them very expensive to install and expand.
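A quick way to see why mesh topologies are so expensive is to count the point-to-point cable runs each topology needs. The short Python sketch below is an illustrative approximation, not from the cited sources:

```python
def links_required(nodes: int) -> dict[str, int]:
    """Number of point-to-point links (cable runs) for each basic topology."""
    return {
        "star (one run per node to the hub)": nodes,
        "ring (each node to the next)": nodes,
        "bus (single shared backbone)": 1,
        "mesh (every node to every other)": nodes * (nodes - 1) // 2,
    }

print(links_required(10))
# a 10-node mesh already needs 45 separate cables, which is why mesh is costly to expand
```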
Depending on the physical distance between nodes on a network and the communication and services provided by the network, networks can be classified as local area networks (LANs), metropolitan area networks (MANs), and wide area networks (WANs). A network that connects computer systems and devices within the same geographic area is a local area network (LAN). A local area network can be a ring, star, hierarchical, or hybrid network.
Typically, local area networks are wired into office buildings and factories. They can be built around powerful personal computers, minicomputers, or mainframe computers (Stair, Reynolds 272). Another basic LAN is a simple peer-to-peer network that might be used for a very small business to allow the sharing of files and hardware devices such as printers. These types of networks have no server. Instead, each computer uses special network operating systems, a network interface card, and cabling to connect it to the next machine.
The performance is usually slower because one computer is actually sharing the resources of another computer (273). It has been estimated that over 70 percent of business PCs in the United States are connected to one or more local area networks. As the capabilities and power of LANs increase, the demand for local networks is expected to soar, doubling in just the next three to four years.
LANs provide excellent support for businesses whose main communications are internal or encompass only a small region (273). A metropolitan area network, or MAN, is a group of LANs located in a city. For example, if a college has campuses spread over the majority of a city, each with its own network, those networks could be connected to create a MAN. MANs are slower than LANs but usually have few errors on the network. Since special equipment is needed to connect the different LANs together, they have a high price (Nash 26).
The largest network size is a wide area network or WAN. WANs can interconnect any number of LANs and WANs. They connect networks across cities, states, countries, or even the world. WANs normally use connections that travel all over the country or world. For this reason, they are usually slower than MANs and LANs, and more prone to errors.
They also require a lot of specialized equipment, so their price is high (26). Several technologies are used to build WAN connections:
- PSTN (Public Switched Telephone Network) is the telephone system used throughout the U.S. and many other countries. A modem is used to interface between a computer and the telephone system.
- RAS (Remote Access Service) is used under Windows NT, with dial-up networking serving the same role under Windows 95.
- DDS (Digital Data Service) is a type of dedicated digital line provided by phone carriers.
- ISDN (Integrated Services Digital Network) was developed as an alternative to the standard PSTN telephone system for voice communications.
- ATM (Asynchronous Transfer Mode) is a high-speed packet switching format that supports speeds up to 622 Mbps. ATM uses a virtual circuit between connection points for high reliability over high-speed links. Technological advances in networking hardware and software have led to greater throughput on all scales and to an increasingly tighter integration of networking with all aspects of computing.
In keeping with these advances, the idea of networking has entered the common consciousness to an extent that would have been unimaginable a few short years ago. This shift in perception has led to an expansion of networking beyond the workplace, which is already beginning to shape developments in networking technology.
Example #2 – Satellite Atm Networks
Increasing demand for high-speed reliable network transmission has motivated the design and implementation of new technologies capable of meeting lofty performance standards. A relatively new form of network transmission known as Asynchronous Transfer Mode (ATM) has emerged as a potential technological solution to the ever-increasing bandwidth demand imposed upon networks.
ATM technology can be a good choice of network medium for many transmission tasks such as voice, data, video, and multimedia. Coupling ATM network implementations with the benefits associated with satellite networks may prove to be a fruitful merger in the pursuit of faster, more reliable, far-reaching networks.
Using ATM network technology over a satellite network medium can provide many advantages over the typical terrestrial-based network system. Some of these advantages include remote area coverage, immunity from earth-bound disasters, and transmission distance insensitivity. Additionally, these networks can offer bandwidth-on-demand, broadband links, and easy network user addition and deletion.
Adapting ATM technology for use over a satellite network is a new concept still in its infancy. Numerous problems such as signal adaptation, network congestion, and error control will have to be overcome in order to produce a fully functioning, viable system resulting from the marriage of these two distinct technologies.
Although ATM over satellite networks are not yet commonplace, many military, research, and business entities are expressing interest in their development. It is a promising idea that may be one of the future solutions to the expanding problem of network performance needs.
SATELLITE NETWORKS
A satellite network is a highly specialized wireless type of transmission and reception system. Satellites send and receive signals to and from the earth in order to facilitate data transfer from various points on the planet. A typical satellite is rocket-launched and placed in a specific type of orbit around the globe. In 1945, scientist and author Arthur C.
Clarke first proposed that satellites in orbit around the earth could be used for communication purposes. (The Geosynchronous Earth Orbit described below is sometimes called the Clarke orbit in honor of the author’s suggestion.) The first satellite successfully launched and implemented in space was a Russian artificial satellite, roughly the size of a basketball.
This satellite, launched in the late 1950s, simply transmitted a short Morse code signal repeatedly. Today, there are hundreds of satellites circling the globe serving many diverse purposes such as communications, weather tracking and reporting, military functions, photo and video imaging, and global positioning information.
Satellites communicate with the earth by transmitting radio waves between the satellites and earth-bound reception stations. The wavelengths of the radio wave frequencies are determined by the location of the satellite in space as described below. The signals transmitted between the earth and satellites are sent and received by various sized antennas known as satellite dishes, typically located near the earth-station receivers.
Signals being sent from Earth to the satellites are referred to as uplinks, whereas the signals emanating from the satellites are called downlinks. The range of coverage area on earth that the frequencies can reach is known as the satellite’s footprint. (See Figure 1 below.) Radio waves transmitted to (and from) satellites usually involve traveling over large distances which creates relatively long propagation delays since the speed of transmission is limited by the speed of light.
Figure 1. [Satellite footprint diagram.]

Three commonly used satellite frequency bands are C, Ku, and Ka, with C and Ku being the most frequently used in today’s satellite systems. C-band transmissions occupy the 4 to 8 GHz frequency range, whereas the Ku and Ka bands exist in the 11 to 17 GHz and 20 to 30 GHz frequency ranges, respectively. There is a relationship between transmission frequency, wavelength, and antenna or dish size.
The higher the frequency, the smaller the wavelength, and accordingly, the smaller the dish. Conversely, a lower frequency corresponds to a larger wavelength which in turn requires a larger dish. C-band antenna size is generally 2-3 meters in diameter whereas the Ku-band antenna can be as small as 18 inches in diameter. Ku is the band of choice for many home DSS systems in use today.
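The frequency-to-wavelength relationship described above is simply wavelength = c / f. A small Python sketch (sample frequencies chosen from the band ranges given above) shows how the wavelength, and with it the dish size, shrinks as the frequency rises:

```python
SPEED_OF_LIGHT_M_S = 299_792_458

def wavelength_cm(frequency_ghz: float) -> float:
    """wavelength = c / f, reported in centimetres."""
    return SPEED_OF_LIGHT_M_S / (frequency_ghz * 1e9) * 100

for band, freq_ghz in [("C (4 GHz)", 4), ("Ku (12 GHz)", 12), ("Ka (30 GHz)", 30)]:
    print(f"{band:<12} wavelength ~ {wavelength_cm(freq_ghz):.1f} cm")
# C ~ 7.5 cm, Ku ~ 2.5 cm, Ka ~ 1.0 cm: higher frequency, smaller wavelength and dish
```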
The majority of satellites presently circling the globe today are in Geosynchronous Earth Orbit (GEO). These satellites are positioned at a point 22,238 miles above the earth’s surface. As the Geosynchronous title implies, these satellites circle the globe once every 24 hours, completing one orbit for every earth rotation. In relative view from the earth, the satellites appear to be stationary, remaining in a fixed position. It is for this reason that these satellites are also occasionally called Geostationary Satellites. (There is a difference between the two terms, however.
Geosynchronous orbits can be circular or elliptical, whereas Geostationary orbits must be circular and located above the earth’s equator.) This GEO orbit allows the satellite dishes located on the earth to be aimed at the orbiting satellite once, without requiring continual repositioning. A satellite in this type of orbit can provide a coverage footprint equal to 40% of the earth’s surface. Therefore, three evenly spaced Geosynchronous Earth Orbit satellites (120 angular degrees apart) can provide complete transmission coverage for the entire civilized world.
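Using the 22,238-mile altitude quoted above and the speed of light, a short calculation (an approximation that ignores the extra slant distance to stations away from the sub-satellite point) shows why GEO links suffer noticeable propagation delay:

```python
SPEED_OF_LIGHT_M_S = 299_792_458
GEO_ALTITUDE_MILES = 22_238
METERS_PER_MILE = 1609.344

altitude_m = GEO_ALTITUDE_MILES * METERS_PER_MILE
one_way_s = altitude_m / SPEED_OF_LIGHT_M_S        # ground station -> satellite
hop_s = 2 * one_way_s                              # up to the satellite and back down

print(f"one-way leg : {one_way_s * 1000:.0f} ms")  # roughly 119 ms
print(f"ground hop  : {hop_s * 1000:.0f} ms")      # roughly 239 ms per satellite hop
```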
In recent years, technological innovations have paved the way for new types of satellite orbits and designs. One of these new orbits is called Medium Earth Orbit (MEO). Satellites in this type of orbit are located at an altitude of approximately 8,000 miles above the earth. Placing satellites at this level allows for shorter transmission lengths, thereby increasing the strength of the signal and decreasing the transmission delay. This in turn means that the receiving equipment on earth can be smaller, more lightweight, and less expensive. The downside to these altitudes is the smaller footprints provided by MEO satellites as opposed to their GEO counterparts.
Another relatively new category of satellite orbits is the Low Earth Orbit (LEO). There are three categories of LEO satellites: Little LEO, Big LEO, and Mega LEO. LEO satellites typically orbit at a distance of only 500 to 1000 miles above the earth. As with the MEO satellites described above, LEO satellites further reduce the transmission delay and equipment expense, maintain strong signal strength, and project a smaller footprint.
ATM NETWORKS
A network technology that has gained considerable popularity in the recent past is Asynchronous Transfer Mode (ATM). This network switching technology, also known as Cell Switching, has been embraced by various factions of the network transmission community such as telephone companies, scientific research firms, and the military. ATM was designed to operate on transmission media at speeds of 155 Mbps or more, giving it the advantage of good performance.
ATM is connection-oriented, offering a high Quality of Service (QoS) level. The ATM network technology was originally envisioned as a way to create large public networks for the transmission of data, voice, and video. Additionally, ATM has subsequently been embraced by the LAN community to compete with Ethernet and Gigabit Ethernet.
In its simplest form, an ATM network is a switched network that creates a connection and path from a sender to one or more receivers for the transmission of fixed-size frames known as cells. Cell transport is accomplished using a statistical multiplexing algorithm for transmission decisions, and a technique called Cell Segmentation and Reassembly.
Since the majority of frames passed to an ATM network are not the required byte size, the ATM protocol must segment the frames into the proper cell size prior to transmission and subsequently reassemble the ATM cells back into their original state at the receiver.
ATM cells are fixed-size data packets of 53 bytes each. The ATM cell size is always 5 bytes of header and 48 bytes of payload. Therefore, this technology has the beneficial side effects of simpler switch hardware, improved frame queueing, and uniform switching activities that all take the same amount of time to accomplish.
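A minimal Python sketch of segmentation and reassembly, assuming simple zero-padding rather than a full ATM Adaptation Layer trailer, illustrates how an arbitrary frame is carried in 48-byte cell payloads:

```python
CELL_PAYLOAD = 48

def segment(frame: bytes) -> list[bytes]:
    """Split a frame into 48-byte cell payloads, zero-padding the last one."""
    cells = []
    for i in range(0, len(frame), CELL_PAYLOAD):
        chunk = frame[i:i + CELL_PAYLOAD]
        cells.append(chunk.ljust(CELL_PAYLOAD, b"\x00"))
    return cells

def reassemble(cells: list[bytes], original_length: int) -> bytes:
    """Concatenate payloads and strip the padding added during segmentation."""
    return b"".join(cells)[:original_length]

frame = b"A" * 130                 # a frame that is not a multiple of 48 bytes
cells = segment(frame)
assert len(cells) == 3             # 130 bytes -> 3 cells (48 + 48 + 34 padded)
assert reassemble(cells, len(frame)) == frame
```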
ATM cells have two possible formats dependent upon where the cell happens to be in the network. The names for these two formats are User-Network Interface (UNI) for the user to the network interface, and Network-Network Interface (NNI) for the network to the network interface.
Figure 2 below shows the format for a UNI version of the ATM cell. The NNI version of the cell differs only in that the Generic Flow Control (GFC) of the UNI version is replaced by 4 additional bits for the Virtual Path Identifier (VPI).
As can be seen from Figure 2, the cell starts with the GFC, which is intended to be used to arbitrate access to a link in the event that the local site uses a shared medium to attach to an ATM configuration. Following the GFC are the VPI and Virtual Circuit Identifier (VCI) bits used to identify the path (channel) created for this transmission. Next, we see the Type bits, which are used to facilitate management functions and to indicate whether or not user data is contained in the cell. The final two fields prior to the 48-byte payload are the Cell Loss Priority (CLP) bits and the Header Error Check (HEC) bits.
The CLP establishes a priority value in case the network becomes congested and needs to drop one or more frames. (See below for a discussion on traffic management.) The HEC is used for transmission error checking and incorporates the CRC-8 polynomial. (CRC-8 is one of several commonly used polynomials for Cyclic Redundancy Checking within network protocols.)
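The 5-byte UNI header described above can be sketched in Python as follows. The field packing follows the layout given in the text, the HEC is computed with the CRC-8 generator x^8 + x^2 + x + 1, and the example field values are arbitrary:

```python
def crc8(data: bytes, poly: int = 0x07) -> int:
    """Bitwise CRC-8 with generator x^8 + x^2 + x + 1 (0x07), MSB first."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def uni_header(gfc: int, vpi: int, vci: int, pt: int, clp: int) -> bytes:
    """Pack the 4-bit GFC, 8-bit VPI, 16-bit VCI, 3-bit PT and 1-bit CLP into
    4 bytes, then append the HEC computed over those bytes."""
    word = ((gfc & 0xF) << 28 | (vpi & 0xFF) << 20 |
            (vci & 0xFFFF) << 4 | (pt & 0x7) << 1 | (clp & 0x1))
    first4 = word.to_bytes(4, "big")
    hec = crc8(first4) ^ 0x55       # ITU-T I.432 adds the coset 0x55 to the CRC
    return first4 + bytes([hec])

cell_header = uni_header(gfc=0, vpi=1, vci=5, pt=0, clp=0)
assert len(cell_header) == 5        # 5-byte header, leaving 48 bytes of payload
```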
ATM can be run over several different physical media and physical-layer protocols including SONET and FDDI. To adapt ATM to these various media layers and accomplish the necessary segmentation and reassembly of the ATM cells, a protocol layer called AAL (ATM Adaptation Layer) is inserted into the network protocol stack between the ATM layers and other protocol layers desiring ATM use. (See Figure 3 for a typical ATM protocol stack.)
ATM OVER SATELLITE
To take full advantage of the benefits of both satellite and ATM networks, a network architecture and protocol stack must be implemented that allows communication between these very different technologies. The solution to this problem comes in the form of the key component of this system known as the ATM Satellite Internetworking Unit (ASIU).
(This component is also referred to by some as the ATM Adaptation Unit.) The ASIU is the essential piece of equipment for the properly-functioning interface between the satellite and ATM systems. It is responsible for handling many complex issues of the system interface such as management and control of system resources, real-time bandwidth allocation, network access control, and call monitoring. In addition, the ASIU must take care of system timing and synchronization control, error control, traffic control, and overall system administrative functions.
Figure 4. [The ASIU bridging the ATM network and the satellite system.]

The ASIU is the single interface that acts as a bridge between the ATM and satellite systems, allowing the desired data to be exchanged back and forth across these two distinctly different types of transmission media. As can be seen on both sides of the typical ATM to satellite protocol stack of Figure 5, the ASIU is placed between the last leg of the ATM network and the front of the satellite system equipment. Therefore, the ASIU can perform all of its necessary functions, such as data segmentation and reassembly, as well as the features described above.
As mentioned above, the ASIU performs many functions, all contributing to the proper operation of the ATM to satellite interface. Five of the most interesting and important functions of this component are the Cell Transport Method, the Satellite Link Layer, error control, traffic management, and bandwidth management. These five issues will be explored in the following discussions.
Figure 5. [Typical ATM over satellite protocol stack.]

Cell transport across an ATM over satellite network can make use of an existing digital cell transport format. The three existing cell transport protocols that have been considered for potential use with this type of system are Plesiochronous Digital Hierarchy (PDH), Synchronous Digital Hierarchy (SDH), and Physical Layer Convergence Protocol (PLCP). The most feasible and promising protocol for cell transport within this system turns out to be SDH, for the reasons delineated below.
The PDH transmission system was originally developed to carry digitized voice efficiently in major urban areas. (PDH however, is being replaced by other transport methods such as SONET and SDH.) In this scheme, the multiplexer at the sending end has access to multiple tributaries for transport that can have varying clock speeds.
The multiplexer reads each tributary at the highest allowed clock speed but will also check to see if there are no bits in the input buffer. If this is the case, the multiplexer will use bit-stuffing to bring the signal up to the higher common clock speed. The multiplexer subsequently notifies the receiving demultiplexer that the data contains stuffed bits so that the bits can be properly deleted on the receiving end.
The downside to the PDH system is the added overhead and complexity created by performing the redundant ADD and DROP operations required by the bit stuffing. In addition, PDH has difficulty recovering and rerouting signals following a network failure. SDH, on the other hand, is more suited for the ATM to satellite interface since it was originally designed to take advantage of a completely synchronized network.
The fiber optic transmission signal typically used for an ATM network transfers a very accurate clock rate throughout the network. The key ingredient of the SDH protocol is the inclusion of pointer bytes that indicate the beginning of the cell payload. This helps avoid data loss due to bit slippage caused by slight phase and/or frequency variations. SDH has other advantages over PDH since it can handle higher data rates, supports easier and less expensive multiplexing and demultiplexing, and has increased provisions for network management.
The PLCP transport method was originally designed to carry ATM cells over existing DS3 facilities. The PLCP format consists of 12 ATM cells in a sequential group, with each cell being preceded by 4 overhead bytes. A frame trailer of either 13 or 14 nibbles is appended to the end of the group of 12 cells to facilitate nibble stuffing. Each 12-cell group and its overhead require a 125-microsecond interval for transmission. Unfortunately, PLCP is susceptible to corruption caused by burst errors that can affect the perceived number of nibbles required for stuffing, resulting in frame misalignment.
SDH appears to be the logical choice for cell transport in this type of system. However, an important point to consider when using SDH is the possibility of an incorrect payload pointer. This situation may produce faulty payload extraction, causing previously received cells to be corrupted and necessitating their dismissal. It is imperative for the correct functioning of an SDH-based system to employ techniques capable of spreading out errors and performing enhanced error monitoring activities. (See the discussion on error control below.)
Satellite Link Access
Access methods typically seen in Local and Metropolitan Area Networks are not suited for use with satellite systems due to the high propagation delays created by the long distances to the satellites. LAN and MAN performance is dependent upon short transmission times, whereas satellite systems are effective when utilized at maximum capacity. Therefore, an access method must be used in this system that “keeps the pipe full”.
There are presently three basic access methods used in satellite systems. Unfortunately, none of these schemes are optimized for use with ATM technology. These three methods, Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), and Demand Assignment Multiple Access (DAMA) can be modified from their present form to a configuration more suited for use in an ATM over satellite implementation.
The FDMA access method divides the total available satellite bandwidth into equally sized portions. Each portion is assigned to one earth station for exclusive use by that station. This scheme thus eliminates errors and collisions since there is no signal interference between individual earth stations. In addition, FDMA can be used with smaller antennas. Unfortunately, however, FDMA requires guard bands for signal separation, which works against the goal of maximum capacity usage in the system. (FDMA is also considered to be rather inflexible.)
Unlike the subchannel frequency division of FDMA, the conventional TDMA access method divides the bandwidth into time slots. These time slots are usually equal-sized; however, variable time slots or on-demand allocation configurations are also possible. Using a round-robin scheme, earth stations each receive the use of the entire bandwidth for a small period of time.
This turns out to be a suitably flexible setup for packet traffic. TDMA, unfortunately, requires a large antenna, and since time-slot synchronization adds complexity to the system, the earth-bound hardware cost is increased.
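Conventional TDMA's round-robin slot assignment can be sketched in a few lines of Python (the station names and slot count are hypothetical):

```python
from itertools import cycle

def tdma_schedule(stations: list[str], slots_per_frame: int) -> list[str]:
    """Assign equal-sized time slots to earth stations in round-robin order;
    each station gets the entire bandwidth during its slot."""
    rr = cycle(stations)
    return [next(rr) for _ in range(slots_per_frame)]

frame = tdma_schedule(["station-A", "station-B", "station-C"], slots_per_frame=6)
# -> ['station-A', 'station-B', 'station-C', 'station-A', 'station-B', 'station-C']
```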
A slight variation of the TDMA access method is the Code-Division Multiple Access (CDMA) technique, also known as spread-spectrum systems. In this scheme, transmissions from the earth stations are spread over the time slots using a unique code identifier. This helps to combat signal jamming. For this reason, this scheme is used frequently by the military.
Another variation of the TDMA access method that is projected to be used in most future satellite systems is the Multifrequency Time Division Multiple Access (MF-TDMA). This method extends the single frequency scheme used by conventional TDMA into the use of multiple frequencies that can be shared by all earth stations. MF-TDMA, therefore, increases bandwidth and reduces antenna size.
The third existing satellite link access method is Demand-Assignment Multiple Access (DAMA). This technology allows dynamic allocation of bandwidth based on the needs of the network user. DAMA is suitable for use when communication between satellites is not required to be continuous. This permits the alternation of channels by which the ATM cells are transmitted as opposed to establishing a single channel and maintaining this connection continuously.
DAMA can be combined with other access methods such as MF-TDMA or SCPC (Single Channel Per Carrier). Using these separate technologies together will allow the system to take advantage of the benefits of both. For example, as mentioned above, DAMA is suited for non-continuous transmissions. Coupling this with SCPC, which is suited for continuous connections, can help to achieve greater efficiency in the ATM over satellite network configuration.
Error Control
A well-known problem facing satellite transmission systems is their susceptibility to burst errors. This characteristic is created by the variations in satellite link attenuation and the use of convolutional coding to compensate for channel noise. ATM systems, moreover, are designed to handle random errors rather than burst errors. Multiple burst errors in an ATM over satellite system may therefore cause many ATM cells to be discarded during transmission. In order to alleviate this problem, an efficient error-checking and/or error-correcting mechanism should be in place when implementing this type of system.
Implementing an automatic repeat request (ARQ) technique at the link layer of the protocol stack can help to reduce the high error ratio levels created by burst errors. There are three common versions of ARQ used in this situation: stop-and-wait, Go-back-N, and selective-repeat. Most existing ATM over satellite networks use the Go-back-N scheme. Stop-and-wait is simple but not effective in the satellite environment due to the long propagation delay. Selective-repeat has the benefits of good throughput and error performance but suffers from the disadvantages of sender and receiver complexity and the potential for out-of-order packet reception.
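Since most existing ATM over satellite networks use Go-back-N, a simplified Python sketch of the sender side (a toy model in which a callback stands in for the satellite link and its acknowledgements) may help show why a single lost acknowledgement forces the whole window to be resent over the long satellite delay:

```python
def go_back_n_send(packets, window, ack_of):
    """Go-back-N sender: keep at most `window` unacknowledged packets in flight;
    when an acknowledgement is missing, retransmit everything from that packet on.
    `ack_of(seq)` models the link and returns True if packet `seq` is acknowledged."""
    base = 0                      # oldest unacknowledged sequence number
    next_seq = 0
    sent = []                     # transmission log, including retransmissions
    while base < len(packets):
        while next_seq < base + window and next_seq < len(packets):
            sent.append(next_seq) # fill the sending window
            next_seq += 1
        if ack_of(base):
            base += 1             # cumulative ack slides the window forward
        else:
            next_seq = base       # timeout: go back and resend the whole window
    return sent

failures = {2}                    # pretend the ack for packet 2 is lost exactly once

def ack_of(seq):
    if seq in failures:
        failures.remove(seq)      # fail only on the first attempt
        return False
    return True

log = go_back_n_send(list(range(5)), window=3, ack_of=ack_of)
# log == [0, 1, 2, 3, 4, 2, 3, 4]: packets 2, 3 and 4 are all resent after one lost ack
```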
Traffic Management
Traffic and congestion control are very important issues facing the designers of ATM over satellite networks who wish to maintain a high level of Quality of Service (QoS). The long propagation delays of satellite systems coupled with their limited bandwidths (as opposed to the bandwidths of optical fiber links typical of ATM land-based systems) make the efficient implementation of these control functions imperative. Poor overall system performance caused by the neglect of proper traffic and congestion control mechanisms can serve to make the ATM over satellite network unusable.
Three common traffic control techniques used with land-based ATM systems are traffic shaping, connection admission control (CAC), and deliberate (selective) cell dropping. Although these methods work well for land-based systems, they need to be modified for acceptable use with an ATM over satellite network in order to maintain the appropriate QoS.
Traffic shaping changes the characteristics of cell streams to improve performance. Some examples of traffic shaping are peak cell rate reduction, burst length limiting, and reduction of Cell Delay Variation (CDV) by positioning cells in time and queue service schemes. The limitation of this scheme in relation to the ATM satellite environment is the inability to dynamically change the traffic parameters during network congestion.
CAC is an effective traffic control mechanism when used with systems that experience occasional congestion. This method is a set of actions that the system can take to allow or disallow a network ATM connection to be established based on the amount of network congestion at the present moment. In the ATM satellite system, however, this scheme seems to be effective during the ATM connection phase only.
The long propagation delay of the satellite portion of the system precludes this technique from being useful during transmission. If the system faces more than occasional congestion, the performance suffers from the inability to establish ATM connections.
The deliberate or selective cell dropping technique is based on the idea of potentially dropping a cell when the network becomes congested. The determining factor concerning which cells are to be dropped is the Cell Loss Priority (CLP) bit contained in the cell. (See Figure 2 for the location of the CLP bit in the ATM cell.) This scheme is not suited for the ATM satellite environment since it can cause many dropped cell retransmissions over long propagation delays thereby hindering overall performance.
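A toy Python sketch of selective cell dropping (the queue size and traffic pattern are made up for illustration) shows how the CLP bit decides which cells a congested switch sacrifices first:

```python
from collections import deque, namedtuple

Cell = namedtuple("Cell", ["vci", "clp"])   # clp=1 marks a cell eligible for discard

QUEUE_LIMIT = 4

def enqueue(queue: deque, cell: Cell) -> None:
    """Admit a cell to a congested switch queue, preferring to drop CLP=1 cells."""
    if len(queue) < QUEUE_LIMIT:
        queue.append(cell)
        return
    if cell.clp == 1:
        return                                # queue full: discard the low-priority arrival
    # queue full but the arrival is high priority: evict a queued CLP=1 cell if one exists
    for i, queued in enumerate(queue):
        if queued.clp == 1:
            del queue[i]
            queue.append(cell)
            return
    # otherwise the high-priority arrival is dropped as well

q = deque()
for cell in [Cell(5, 0), Cell(5, 1), Cell(5, 0), Cell(5, 0), Cell(5, 1), Cell(5, 0)]:
    enqueue(q, cell)
# the queue ends up holding four CLP=0 cells; the CLP=1 cells were discarded first
```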
Two additional schemes proposed for use with ATM over satellite networks are Explicit Forward Congestion Indication (EFCI), also known as Forward Explicit Congestion Notification (FECN), and Backward Explicit Congestion Notification (BECN). EFCI is a technique used to convey congestion notification information from the destination to the source via a communication to its peer in the higher protocol layers. The source can therefore take appropriate action to reduce additional traffic through the present channel.
The problem with this method is that at least a one-way propagation delay is required to notify the source of the congestion. BECN is a faster mechanism than EFCI since a congested network can use this technique to send congestion information in the reverse direction of the network flow to indicate the problem without requiring peer notification. However, like the EFCI technique, BECN is also subject to long propagation delays if the congestion is occurring at the destination.
Bandwidth Management
Since bandwidth for satellites is limited, proper bandwidth management in the ATM over satellite system is critical. Substantial degradation of the overall performance of the combined terrestrial and satellite system will severely inhibit its usefulness. Applications requiring high bandwidth allocations are particularly affected by this issue.
Bandwidth management is a difficult matter to handle within a satellite network. One possible way to help with this problem is to allocate bandwidth for the channel at the connection setup phase by using a Burst Time Plan (BTP). This traffic assignment scheme is a mapping tool that indicates the position and lengths of bursts in the transmission frame.
The BTP restricts the number of ATM cells in bursts or sub-bursts that each earth station can transmit. The number of Virtual Paths (VP) and Virtual Channels (VC) of the ATM connection can also be restricted by the BTP to help with bandwidth management.
SUMMARY
The usefulness of a combination of ATM and satellite networks will be determined by its ability to maintain the QoS of terrestrial-based ATM systems. This will require seamless integration of the two systems without producing serious performance degradations and/or error increases. The challenge facing designers desiring the combined benefits of the distance advantages of satellites and the speed and reliability of ATM is formidable.
Many of the problems and concerns discussed in this document remain unresolved, precluding the worldwide implementation of an ATM over satellite system. As network technology advances, perhaps new schemes and techniques will be developed enabling some of the limiting factors of complexity, cost, and delay to be alleviated. These issues will have to be resolved if ATM technology over a satellite network is to play a significant role in the rapidly evolving information infrastructure.
Example #3 – Future Of Wireless Networking
For each of the past five years, industry pundits have been convinced it would be the year of wireless data in the wide-area environment. And every year it was for a different reason: support for IP protocols; phones with microbrowsers; support from industry giants, such as Microsoft and IBM; and new platforms, such as handheld computers. This year's reasons include WAP (Wireless Application Protocol) and the forthcoming GPRS (General Packet Radio Service) (Internet, "Wide-Area Wireless Computing"). Meanwhile, the latest buzz is about third-generation cellular, and data throughputs of 2 Mbps are just around the corner.
Network managers are faced with a difficult situation. The people they support are increasingly demanding wireless access to the corporate network. These people have experienced the advantages of wireless networks for voice communications. Now they want those advantages for data communications, especially as massive promotional efforts by major operators have significantly increased awareness of these services.
Meanwhile, millions of palm-sized devices are begging to have wireless connections for applications such as e-mail and schedule synchronization (Internet, "A World Without Wires"). And who wouldn’t want to be able to access a Microsoft Exchange or Lotus Notes server when stuck at an airport gate? Wireless networking promises both greater work productivity and increased flexibility in our lifestyles.
By next year, cellular operators will be offering IP packet-data services with rates as high as 144 Kbps, though 56 Kbps will be more typical of downlink speeds. This is in sharp contrast with today's services, which are limited to about 9.6 Kbps (Internet, "A World Without Wires"). However, the industry is fragmented into multiple wireless technology camps, each with its own data strategy. Understanding how these data services will evolve will let you take better advantage of these developments.
To obtain some fundamental insight into this industry, you should realize there are huge market forces at play that will determine how and when next-generation data services will be deployed. Unfortunately, these forces are not all in alignment. Understanding this interplay will help you make your own predictions. One force driving the broad deployment of new data services is the huge success of wireless voice, leading the industry to view data as a vast new source of potential revenue. Data is also now an integral part of next-generation cellular systems, not just an afterthought.
Complementary industry developments, including new handheld platforms and new delivery methods, such as WAP, also help. Finally, the massive alliances in the wireless industry will make it easier for operators to offer services on a nationwide if not a global scale (Internet, "Wide-Area Wireless Computing").
There are some key attributes of wireless networking services, though. First, they are slow. They weren't when they were designed, at the beginning of the 1990s, but today's typical rates of 9.6 Kbps to 14.4 Kbps simply do not stand up to the demands of rich Web pages and heavy-duty productivity applications like Microsoft Exchange and Lotus Notes.
Second is the emphasis on circuit-switched connections. With data as an afterthought for current digital cellular systems, a dial-up model for data is easier to deploy than a packet-switched architecture. But dial-up means connection delays, an inability to push data to mobile users, and having to pay for connect time even when sessions are idle (Internet, "Wireless World Spawns New Data Device Breed").
Nevertheless, today's wireless networks can be used productively as long as only small amounts of data are involved. In many instances, customers need to employ wireless middleware solutions that account for the limitations of today's wireless networks. Internet portals are also now targeting this industry by developing mobile content that can be accessed by new microbrowser-equipped cell phones, giving users access to their e-mail, calendar, travel, entertainment, and restaurant information; sports results; package tracking; horoscopes; and so on.
Despite some of these new consumer-oriented services, wireless networks today are used mostly for messaging applications or in vertical markets (field service, for example) (Internet, "A World Without Wires"). Not only is there a tremendous amount of new wireless technology in development, but a good chunk of it is about to be deployed. The best way to get a clear picture is to examine the data strategies for the three principal digital cellular technologies.
In the U.S., the two dominant cellular technologies are TIA/EIA-136, a TDMA (time-division multiple access) technology, and IS-95, a CDMA (code-division multiple access) technology. TDMA, the oldest U.S. digital technology, divides radio channels into three time slots, with each user receiving a distinct slot. This method enables three users on each radio channel to communicate without interference.
CDMA, a newer U.S. digital technology, uses spread-spectrum technology, in which many users share the same radio channel simultaneously but are distinguished by unique pseudo-random codes. Both are considered second-generation cellular technologies, with analog cellular being first-generation (Internet, "Wireless World Spawns New Data Device Breed").
The largest TDMA carriers are AT&T Wireless Services and SBC Communications, while the largest CDMA carriers are Sprint PCS and Verizon. GSM (Global System for Mobile Communications) technology is a distant third in the U.S. but dominates worldwide. In the U.S., the dominant GSM carrier is VoiceStream. AT&T Wireless Services, Sprint PCS, and VoiceStream all offer nationwide coverage (Internet, "Wireless World Spawns New Data Device Breed").
Cellular technology gets the lion's share of attention, but network managers should keep their eyes open to other wireless providers. One is Metricom, which is rolling out a wireless data service called Ricochet2, with rates of 128 Kbps. With strong backing from MCI WorldCom, Metricom expects to have services deployed in more than 35 metropolitan areas by this year, reaching 100 million potential subscribers (Internet, "Wide-Area Wireless Computing"). If cellular carriers execute their data plans aggressively, Metricom might offer too little too late, but if cellular carriers stumble, Metricom could scoop up cellular-crazed mobile customers.
Wireless LANs promise productivity enhancements and flexibility; operators are broadly promoting new services, while new platforms, such as handheld computers and phones with microbrowsers, enable new applications. However, the widespread use of wireless networking has so far proved elusive because of slow speeds, high costs, and complex deployments that often require middleware (Internet, "Wireless LANs Work Their Magic").
But high-speed data technology could increase the utility of wireless services. Although it is targeted initially at private deployments, there is increasing interest in public deployments, such as airports, shopping malls, and hotels. Some cellular operators may eventually offer a blend of access options, with cellular in wide areas and WLAN technologies in high-density environments (Internet, "Wireless LANs Work Their Magic").
The wireless industry is driven by the desire for new revenue sources and has designed data services as a core function of next-generation networks. For customers, which technology prevails may not be a huge factor. Most of the technologies discussed here will offer high-speed wireless IP networking, and customers will easily be able to port applications from one network to the other. And multimode devices in the future may even make it possible to roam between these networks (Internet, "Wide-Area Wireless Computing"). In the meantime, this will be a grand phenomenon to observe.
Example #4
The Internet is, literally, a network of networks. It is made of thousands of interconnected networks spanning the globe. The computers that form the Internet range from huge mainframes in research establishments to humble PCs in people’s homes and offices. Despite the recent publicity, the Internet is not a new thing. Its roots lie in a collection of computers that were linked together in the 1970s to form the US Department of Defense’s communications systems.
Because its designers feared the consequences of a nuclear attack, there was no central computer holding vast amounts of data; instead, the information was dispersed across thousands of machines. A protocol known as TCP/IP was developed to allow different devices to work together. The original network has long since been upgraded and expanded, and TCP/IP is now an overall standard.
The Internet has gone on to fulfill a great deal more than its intended purpose and has definitely brought more good than bad. Millions of people worldwide are using the Internet to share information, make new associations, and communicate. Individuals and businesses, from students and journalists to consultants, programmers, and corporate giants, are all harnessing the power of the Internet. For many businesses, the Internet is becoming integral to their operations.
Imagine the ability to send and receive data, messages, notes, letters, documents, pictures, video, sound, and just about any form of communication, as effortlessly as making a phone call. It is easy to understand why the Internet is rapidly becoming a corporate communications medium. Using the mouse on your computer, the familiar point-and-click functionality gives you access to electronic mail for sending and receiving data, and file transfer for copying files from one computer to another.
This flood of information is a beautiful thing and it can only open the minds of society. With the explosion of the World Wide Web, anyone could publish his or her ideas to the world. Before, in order to be heard one would have to go through publishers who were willing to invest in his ideas to get something put into print. With the advent of the Internet, anyone who has something to say can be heard by the world. By letting everyone speak their mind, this opens up all-new ways of thinking to anyone who is willing to listen.
A very important disadvantage is that the Internet is addictive. One of the first people to take the phenomenon seriously was Kimberly S. Young, Ph.D., a professor of psychology at the University of Pittsburgh. She takes it so seriously, in fact, that she founded the Center for Online Addiction, an organization that provides consultation for educational institutions, mental health clinics, and corporations dealing with Internet misuse problems.
Psychologists now recognize Internet Addiction Syndrome (IAS) as a new illness that could ruin hundreds of lives. Internet addicts are people reported to stay online for six, eight, ten, or more hours a day, every day. They use the Internet as a way of escaping problems or relieving distressing moods. Their usage can cause problems in their family, work, and social lives. They feel anxious and irritable when offline and crave getting back online.
Despite the consequences, they continue using the Internet regardless of what their friends and family say. Special help groups have been set up to give out advice and offer links with other addicts. Internets Anonymous and Webaholics are two of the sites offering help, but only through logging onto the Internet. The effects of IAS include headaches, lack of concentration, and tiredness. Robert Kraut, a doctoral psychologist, says of the subject: “We have evidence that people who are online for long periods of time show negative changes in how much they talk to people in their family and how many friends and acquaintances they say they keep in contact with.
They also report small but increased amounts of loneliness, stress, and depression. What we do not know is exactly why. Being online takes up time, and it may be taking time away from sleep, social contact, or even eating. Our negative results are understandable if people’s interactions on the net are not as socially valuable as their other activities.”
Another considerable drawback of the Internet is that it is susceptible to hackers. Hackers are people with tremendous knowledge of the subject who use it to steal, cheat, or misuse confidential or classified information for fun or profit. As the world increases its dependence on computer systems, we become more vulnerable to terrorists who use computer technology as a weapon. It is called cyber-terrorism, and research groups within the CIA and FBI say cyber-warfare has become one of the main threats to global security.
One notorious hacker is American Kevin Mitnick, a 31-year-old computer junkie arrested by the FBI in February for allegedly stealing more than $1 million worth of data and 20,000 credit-card numbers through the Internet. Network hacking is presenting fresh problems for companies, universities, and law-enforcement officials in every industrial country. But what can be done about hacking?
There are ways for corporations to safeguard against hackers and the demand for safety has led to a booming industry in data security. Security measures range from user IDs and passwords to thumbprint, voiceprint, or retinal scan technologies. Another approach is public-key encryption. An information system girded with firewalls and gates where suspicion is the standard and nothing can be trusted will probably reduce the risk of information warfare.
A committee of the Canadian Association of Chiefs of Police has made several recommendations to stop hacking. One would make it illegal to possess computer hacking programs, those used to break into computer systems. Another would make the use of computer networks and telephone lines in the commission of a crime an offense in itself.
The committee also recommends agreements with the United States that would allow police officials in both countries to search computer data banks. The problem with regulating the Internet is that no one owns it and no one controls it. Messages are passed from computer system to computer system in milliseconds. Government officials are hoping that Internet service providers, such as AOL, can police the Net themselves.
There is another problem that circulates through the Internet itself: viruses. They can move stealthily and strike without warning, yet they have no real life of their own and go virtually unnoticed until they find a suitable host. Computer viruses are tiny bits of programming code capable of destroying vast amounts of stored data, and they bear an uncannily close relationship to real viruses.
Like real viruses, they are constantly changing, making them more and more difficult to detect. It is estimated that two or three new varieties are written each day. Most experts believe that a virus is created by an immature, disenchanted computer whiz, frequently called a “cracker.” The effects of a virus may be insignificant, such as those of the famous “Stoned” virus, which merely displays a message calling for the legalization of marijuana.
Other viruses, however, can program files to perform constant duplications that may cause a computer’s microchips to fail. The rapid growth of computer networks, with their millions of users exchanging vast amounts of information, has only made things worse. With word-processing macros embedded in the text, opening an e-mail can now unleash a virus onto a network or a hard disk. Web browsers can also download running code, some of it possibly harmful.
Many companies offer antiviral programs capable of detecting viruses before they have the chance to spread. Such programs find the majority of viruses, but virus detection is likely to remain a serious problem because of the cleverness of crackers. One type of virus, known as a polymorphic virus, evades discovery by changing slightly each time it replicates itself.
The Internet offers a new way of doing business: a virtual marketplace where customers can, at the push of a button, select goods, place an order, and pay using a secure electronic transaction. Businesses are discovering the Internet as the most powerful and cost-effective tool in history. The Net provides a faster, more efficient way to work with colleagues, customers, vendors, and business partners. Businesses making the transition to “e-business” are prospering; those that do not will most certainly suffer the consequences.
One of the most commonly asked questions is, “Will the Net help me sell more product?” The answer is yes, but in ways you might not expect. The Internet is a communication “tool” first, not an advertising medium. Unlike print or broadcast media, the Internet is interactive; and unlike the telephone, it is both visual and content-rich. A Web site is an excellent way to reduce costs, improve customer service, disseminate information, and even sell to your market.
A very important fact is that the Internet supports online education. Online education introduces unprecedented options for teaching, learning, and knowledge building. Today access to a microcomputer, modem, telephone line, and communication program offers students and teachers the possibility of interactions that overcome the restrictions of time and space.
There are many school-based networks that link learners to discuss, share, and examine specific subjects such as environmental concerns, science, local and global issues, or to enhance written communication skills. The introduction of online education opens unique opportunities for educational interactivity. Students may learn independently, at their own pace, in a convenient location, at a convenient time about a greater variety of subjects, from a greater variety of teachers.
The most important facts about the Internet are that it contains a wealth of information that can be sent across the world almost instantly, and that it can connect people in very different locations as if they were next to each other. Most claims for the importance of the Internet in today’s society are based upon these very facts. People of like minds and interests can share information with one another through electronic mail and chat rooms.
E-mail is enabling radically new forms of worldwide human collaboration. Approximately 225 million people can send and receive it, and together they represent a network of potentially cooperating individuals that dwarfs anything even the mightiest corporation or government can assemble. Mailing-list discussion groups and online conferencing allow us to gather together to work on a multitude of projects that are interesting or helpful to us.
Chat rooms and mailing lists can connect groups of users to discuss a topic and share ideas. Materials from users can be added to a Web site to share with others and can be updated quickly and easily at any time. Today the Internet is a highly effective tool for communicating, gathering information, and cooperating across distant locations, and it is being developed and improved continuously.
Many businesses are discovering new ways to reach their customers, new ways to improve efficiency, and new products and services to sell. In the next ten years, somebody will figure out how to charge for information on the Net, so you won’t necessarily get things for free. That will have several good effects, including a way to pay authors for their work. And because of the economic incentive, it will become easier to filter the good from the bad. The Web is like a library that many people consult because it is so easy to access.
Arguments can be made for the advantages and disadvantages of the Internet, but most people will agree that the Internet is, on balance, a fortunate development in technology. It is not a question of whether the advantages of the Internet outweigh the disadvantages; it is about understanding the risks and implications of using this type of technology when working to achieve goals.
Once the security problems are handled, the costs are streamlined, and the searching algorithms are perfected, the possibilities are endless. However, governmental action can’t really make much difference, because the Internet is already too far out of any government’s control.
Example #5
For my independent study, I have created a network in my house. A network by definition is more than one computer that is linked together electronically via a protocol (common language) so the computers can communicate and share resources. This network improves day-to-day life by adding value and usefulness to the computers.
The processes and ideas that I have learned through this experience can be applied directly to today’s rich electronic business environment. Identifying the needs of the user is the first step in building a well-designed network. Professional installation was needed to maintain the aesthetics of the rental house. Most of the wires are run in the attic and then down plastic conduit attached to the wall.
The conduit is run all the way to the wall boxes where the Ethernet ports are located. Every wire is clearly labeled and included in an easy-to-read schematic of the house, so future tenants will be able to use the network. Next, every room needed to have access to the network. In order to minimize the overall use of wires, hubs were placed in strategic locations.
An 8-port 10/100-megabit auto-sensing hub is located in the computer room and a 5-port 10-megabit hub in the sound room. Docking stations were also needed so that laptop users or visiting computers could easily plug into the network and use the pre-existing monitor, keyboard, and mouse. These are the basic needs that were built into the design of the network.
Each computer setup is unique, with certain strengths and weaknesses. The network takes advantage of the strengths of each individual computer and makes them available to all users. A network essentially expands the capabilities of each computer by increasing functionality through resource sharing. In the house, there are a total of four computers and two laptops. Processing speed and an abundance of RAM are not essential for a server with such low traffic.
Example #6
Nowadays, it seems that everyone has a computer and is discovering that communication technologies are necessary. E-mail, the Internet, and file transferring have become a part of the modern world. Networks allow people to connect their computers together and to share resources. They allow people to communicate and interact with each other. The days of the lone PC are diminishing. At the same time, computers are getting faster than ever.
The most powerful PC of five years ago couldn’t be sold for half its original price today. This poses some problems for the consumer, because new technologies can’t be driven by older technology. The modem, a popular communications device, typically has throughput rates from 9600 baud (See Glossary) to 28800 baud and uses an RS-232, or serial, port connection.
One solution to this transmission-speed problem is the parallel, or Centronics, port. This port transmits eight bits at a time, whereas a serial port transmits one, which is how the two got their names. Because it sends eight bits at a time, a parallel port can achieve transfer rates of up to 40,000 bits, or 5,000 characters, a second (Seyer 64). Parallel ports are commonly used to connect to printers and external storage devices.
The main problem with modems and parallel ports is that only two computers can be connected at one time. Networks were invented to solve this problem. They allow a group of computers to communicate and to share resources such as hard disk space and printers. Different types of networks have different throughput rates, but all are higher than those of either serial or parallel ports.
The type of network used at Banks High School is an Ethernet network with software from Novell. This network has the capability of supporting transfer rates up to ten megabits per second (Bennett 1). Although this seems like a lot, remember that all of the computers on the network are using this connection at the same time and each computer has to share with every other computer. There are a few ways to speed up a slow Ethernet network, however.
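To put these throughput figures in perspective, here is a rough back-of-the-envelope sketch (added for illustration; the 1 MB file size and the 20-computer share are assumptions, not figures from the essay) comparing how long the same file would take to move over a modem, a parallel port, and a shared Ethernet segment.

```python
# Idealized transfer-time comparison for a 1 MB file; real links lose
# additional capacity to protocol overhead, collisions, and sharing.

FILE_BITS = 1 * 1024 * 1024 * 8   # one megabyte expressed in bits

links = {
    "28,800-baud modem":                 28_800,          # bits per second
    "Parallel (Centronics) port":        40_000,          # ~5,000 characters/s
    "10 Mbps Ethernet, sole user":       10_000_000,
    "10 Mbps Ethernet shared by 20 PCs": 10_000_000 / 20,
}

for name, bits_per_second in links.items():
    seconds = FILE_BITS / bits_per_second
    print(f"{name:35} ~{seconds:7.1f} seconds")
```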
Example #7
Networking is the process of connecting printers, computers, and other nodes in a way that allows individuals to use computers to communicate with one another easily. In this age of growing technology, individuals need to communicate with one another from different locations, and this is why networks are developed. Networking allows individuals to communicate, transfer files, and share printers, fax machines, and many other devices on a network.
A network is a set of computers connected so that they can share resources and communicate and interact with one another much as people do. What Is A Network? “In its simplest form, a network is at least two computers — desktops, laptops, or one of each — connected together with wireless or wired technologies.
That’s it.” (Motorola, Inc., 2009) This means that any two computers connected together constitute a network. The connection, of course, should allow the computers to communicate and share each other’s resources effectively. This is the basic concept of a network, and it is why networks are built in so many companies and businesses.
A network allows businesses to save on resources by letting a number of people share a single device. For example, a company may have one computer per employee, but all of those employees can be connected to a single printer, which lowers costs.
Why Do We Need A Network? “Proponents view this trend toward in-network computing as the ultimate form of convergence: the network function is more holistically integrated with the application, particularly where QoS requirements exist. Others see this trend as a mechanism to move to a service-oriented architecture, where applications are expressed in terms of functional catalogs, and network functions are seen as addressable, executable blocks.
” (Minoli, 2006) A network is needed for effective and efficient communication, which is achieved only when employees and individuals work together in a way that lets quality work be completed; this is why networks are important for any company that wants to satisfy its customers. The only way a customer can be satisfied is by being served faster and with correct information. One network can be distinguished from another by three aspects:
1. Topology: the geometric arrangement of the computer system. Common topologies include bus, star, and ring.
2. Protocol: the common set of rules and signals that computers on the network use to communicate. One of the most popular protocols for LANs is Ethernet; another popular LAN protocol for PCs is the IBM token-ring network.
3. Architecture: networks can be broadly classified as using either a peer-to-peer or a client/server architecture.
What Are The Components Of A Network? The components of a network are easiest to understand from a diagram of a typical installation (RUPP, 2008).
The basic components of any network include computers, switches, hubs, routers, and servers. Every network has computers connected to a centralized location, the server, which handles computing and message handling of various kinds, including a company’s daily or heavy-load transactions. Switches and hubs act as connectors within a network, and a router does the same between networks; these connectors allow computers to communicate across networks.
What Are The Basic Terminologies Of A Network? There are many networking terms, but the basic ones are:
1. Ping – Ping is an acronym for Packet Internet Groper and is used to check whether a host exists on a TCP/IP network. When you ping a host, you send out a message asking for a reply; a successful reply from the host is acknowledged on the sending computer.
2. Ipconfig – Ipconfig is short for IP Configuration and is primarily used to display the local TCP/IP configuration of a client machine. It can also release or renew an address obtained from a DHCP server.
3. Tracert – Tracert is short for Trace Route. It lets you trace the path a packet takes from the source computer to the destination computer.
4. Nslookup – “Nslookup is the name of a program that lets an Internet server administrator or any computer user enter a hostname and find out the corresponding IP address. It will also do reverse name lookup and find the hostname for an IP address you specify.
” (Whatis, 2009) Demonstrate and Explain the Concepts within Nslookup. The basic idea of a lookup is that a computer on any network can find a computer on another network by typing in its domain name; the result is the corresponding IP address. What distinguishes nslookup is that it first tries to resolve the domain name within the user’s internal network; only if no result is found is the search carried out on the external network.
The results from the external network are returned as a non-authoritative answer, while the internal lookup appears before it with the server and address fields; a sketch of how such a lookup works is given below. Do “Nslookup” On DWC’s And Etisalat’s Network. What Are The Results? What Is “Reverse IP Lookup”? Just as a lookup lets the user find the IP address of another computer from its domain name, the same can be done in reverse. This is known as a reverse IP lookup, and it lets the user discover which domain is currently using a particular IP address.
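As a minimal sketch of the forward and reverse lookups described above (added for illustration, not part of the original essay; the hostname example.com is only a placeholder), the same two operations can be performed with Python’s standard socket library:

```python
# Forward and reverse DNS lookups, roughly what nslookup does by default.
import socket

def forward_lookup(hostname: str) -> str:
    """Resolve a domain name to an IP address."""
    return socket.gethostbyname(hostname)

def reverse_lookup(ip_address: str) -> str:
    """Resolve an IP address back to a hostname (reverse IP lookup)."""
    hostname, _aliases, _addresses = socket.gethostbyaddr(ip_address)
    return hostname

if __name__ == "__main__":
    ip = forward_lookup("example.com")
    print("example.com resolves to", ip)
    try:
        print(ip, "reverse-resolves to", reverse_lookup(ip))
    except socket.herror:
        # Many addresses have no PTR record, so reverse lookups can fail.
        print("No reverse DNS entry found for", ip)
```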
Example #8
The following essay is on professional networks and the role that they play in one’s career path. The essay will look into the professional career path of a healthcare administrator and the professional networks that healthcare administrators need to join.
The essay will discuss the role of professional bodies and the reasons for joining professional networks. The other part of this essay will look into the ways in which professional networks retain their members, the kinds of activities they run for members, and the advantages that accrue from members’ participation.
Professional networks play an important part in the career development of their members in many ways. They provide a platform for members to interact with each other. Professional networks make it easier for professionals in that network to identify new trends in their field and new employment opportunities with better rewards. For a professional who is starting out, they provide an entry point into the field (Tulenko, 2009).
Other than networking and informing members of new opportunities in their field, professional bodies provide continuing education for their members. This is important to ensure that members stay up to date in a dynamic field. Opportunities for continuing education keep members abreast of new trends and the educational requirements of their careers.
At a time when downsizing due to the integration of technology in workplaces is the order of the day, professional networks provide training opportunities so that their members keep pace with technological change (Turban, 2008).
The certifications offered by professional bodies give the individuals who hold them an added advantage. For many job opportunities, membership in a professional body is a requirement (Weare, 2007).
Professional bodies act as internal regulators of professional practice. They stipulate and enforce professional ethics, formulating a code of ethics to which members must subscribe. They maintain disciplinary panels and take action against those who do not follow the stipulated standards in their practice. This enables them to protect their members from external interference by other members of society in the course of their work (Weare, 2007).
As a healthcare administrator, there are professional bodies that one may join to gain a network of like-minded professionals. The first is the Association of Medical Practitioners. Because a healthcare administrator’s work is managerial, joining a professional body that deals with healthcare management keeps the administrator current. The need to create a well-oriented network is therefore imperative for the development of one’s career.
The professional body for healthcare administration gives the healthcare administrator knowledge of current trends in the field. It also provides training opportunities for healthcare professionals to keep up to date with ever-evolving technology. Professional networks also give administrators opportunities to recruit new people into their organizations and inform professionals of the code of ethics to follow in their practice.
The Association of Medical Practitioners engages in many activities to promote the organization and provide an opportunity for members to join the network. The first common activity that professional bodies engage in is that of providing conferences to the members where they bring together professionals in the field to participate in discussions and networking activities.
This enables members to increase their professional contacts. Professional conferences also act as training opportunities; at ordinary times such training is expensive, but professional bodies offer it at subsidized cost (Tulenko, 2009).
The professional body in healthcare management should also take part in corporate social responsibility activities, such as sponsoring healthcare-related charity work. This gives the association publicity and provides an opportunity for healthcare professionals to network as they participate in the organization’s corporate social responsibility efforts.
The healthcare professional body should also have online community networks for the members to share their ideas and challenges through social networks such as Twitter and LinkedIn. Participating in discussions in those forums keeps the members informed. It also offers an opportunity for identifying the challenges that healthcare professionals go through and methods of tackling them.
The healthcare professional body must actively take part in public discourse on healthcare administration and on the professional issues affecting its practice. The body must participate in the formulation of policies that affect healthcare, and it must award certifications to members of the healthcare administration body after they complete training and other professional activities of the network.
Participation in a healthcare professional network is imperative in ensuring that the members receive positive results. The contacts offered to the network members, the training, and the certifications offered by the professional network enable the members to thrive in their careers.
Example #9
When was the last time you listened to an audio sample of your favorite music artist on your personal computer? If you answered any date at all you have Real Networks to thank. Real Networks was created by Rob Glaser in 1994 as Progressive Networks and was based out of Seattle. Since the introduction of Real Audio, Real Networks technology has grown to be the most popular format for delivering streaming multimedia over the Internet.
Real Networks’ initial goal was to develop and market software and services that would let the average person send and receive real-time audio from any computer source. Real Networks provides downloadable software for new users to register and then install on their computers. With this software, an individual can access more than 85% of all streaming-media-enabled web pages. This electronic media company is still a thriving leader in its field, and with competition at a minimum, its possibilities are endless.
Real Networks technology has become a must-have for any computer user. Thousands of Internet broadcasters stream more than 300,000 hours of Real Audio and Real Video of live sports, music, news, and entertainment per week, not to mention the hundreds of thousands of sites that broadcast pre-recorded media.
As of November 1998, more than 80 million Real Audio players had been downloaded from the Real Networks homepages, at a rate of about 150,000 copies per day (56). Real Networks technology has drawn many big-name corporations, such as CNN, ABC, Buick, Gap, and Intel, to advertise to consumers and attract new buyers. “Real Audio was the pioneer in streaming internet audio and is still the best,” says PC Magazine (12). With the advantages of Real Networks, companies and consumers have opened a new window of marketing on the Internet.
Real Networks offers two main products: RealPlayer and RealJukebox. RealJukebox is exactly what the name says: once the program is downloaded and installed, it allows a person to download music from the Internet and store it on a personal computer. These sound files, called MP3s, can be searched for and downloaded for free.
The next product from Real Networks is its most popular one, the RealPlayer. The RealPlayer has two components built in, one for audio and another for video. It is also free to download and also plays MP3s. The advantage Real Networks has over its competitors is that all of its software, in its simplest form, is free to consumers. This free distribution attracts companies and businesses to advertise on and support Real Networks.
RealPlayer also offers near-CD quality and provides consumers with a SmartEQ that automatically optimizes the computer’s audio settings. These two products have revolutionized streaming media for computer users and created a useful advertising market for businesses.
In conclusion, Real Networks is a successful company because it is a pioneer in its field. The Internet had very little audio content before Real Networks introduced a capable and affordable streaming-audio tool. Since the company’s start, innovation and progress in this area have come from Real Networks rather than from its competitors.
Real Networks has pioneered its way through the market, establishing its name along with its products. Since the invention of these two products, the markets for audio and video on the Internet have exploded. Real Networks has found products and services that are affordable and convenient for consumers’ wants and needs, and that is the purpose of existence for any business.
Example #10 – interesting ideas
“Social Networking” essay hook statement?
I’m doing an essay on social networking sites and online socializing, how they became popular, and what their advantages and disadvantages are, so I need a hook statement to start the essay because I really can’t think of any. You can give me several and I’ll see which one fits into my essay, or just one, whatever you want. Thanks very much.
Answer: Social networking has come a long way since the dawn of the 21st century. Nowadays, anyone can perform a multitude of tasks thought to be impossible merely a decade ago. For example, you can do [this] and [that]. However, [this] and [that] come with a few setbacks. Many people pervert social media and use it to hurt others. ‘Follow’ me as I detail the pros and cons of social media, and hopefully you will ‘like’ it!
Teenagers will freely give up personal information to join social networks on the Internet. Afterwards, they are surprised when their parents read their journals. Communities are outraged by the personal information posted by young people online and colleges keep track of student activities on and off-campus. The posting of personal information by teens and students has consequences. I will discuss the uproar over privacy issues in social networks by describing a privacy paradox.
Social networking is successful because of its viral nature. The key to a social network’s rapid growth is that people who join need to expand their own networks, so they invite their friends, and those friends, in turn, invite their friends. The viral nature of a new social network is an important part of making it succeed.
Social Networks
In spite of the advantages afforded by social networking sites, they can have potentially harmful effects. It is undeniable that social networks have many advantages; still, they also have several disadvantages. It is common knowledge that many people, especially young people, have heard of or belong to at least one social networking site. Social networks’ users rely on these sites to maintain their social relations and to entertain themselves.
It seems that these networking sites offer many advantages for their users at both a social and a personal level, so it is normal for users to spend long hours with the wide variety of applications the sites offer. However, this does not mean that spending so much time logged in will not have adverse effects on the users’ social and personal well-being. Many youngsters are not aware of the negative effects that overexposure to these sites may cause.