

3.1.7

Explain why protocols are necessary.

 

Teaching Note:

Including data integrity, flow control, deadlock, congestion, error checking.

 

Sample Question:


JSR Notes:


General Rationale for the Existence of Protocols:

Protocols, in general, exist to provide standard ways of communicating and interacting. Recently I heard a funny piece by Lucy Calloway of the BBC World Service bemoaning the fact that there are no strict international protocols for greeting each other in a business situation. So, for example, at an international gathering, some will try to kiss others three times on the cheek, others will bow, and others will shake hands, with handshakes of varying durations, causing nervousness, misunderstanding, and wasted emotional energy when everyone should be more focused on establishing the business relationship.

Protocols are needed in computer networks primarily because the networks are made up of devices and software made by many different companies. The only way to ensure compatibility among everything is to have common documents, i.e. protocol specifications, that stipulate things such as the format of the data to be sent, and the mechanics of how it is to be sent and received.

Again, don't get standards and protocols confused: standards are for objects (so measurements, usually physical, but sometimes things like speeds), and protocols are ways of doing things, i.e. rules. Think of standards as being nouns and adjectives, and protocols as being verbs and adverbs, if that helps.

Protocols are the basis for solving many networking problems and issues such as those listed in the Teaching Note above.

 

Specific to the Teaching Note:

In fact, a great way to address this assessment statement is to list what is stated in the Teaching Note, and, for a more involved answer, to explain each item:

So your answer to, say, a [2 marks] question would be: "Protocols are necessary to assure data integrity, manage the flow of data, prevent congestion and deadlock, and supply an agreed-upon way of error checking".

But if it were a question worth more than a few marks, or if you were asked for details of any one of those listed things, here is each one expanded a bit:


Flow Control - Network protocols dictate the ways servers are able to control the flow of traffic through a network. One particular aspect of flow control that needs to be regulated is the speed of transmission; this helps prevent a fast sender from overwhelming a slow receiver.

A good analogy for flow control is how traffic is controlled through a city: the same sorts of traffic issues cars and buses have in a city, data also has when moving around a network.
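To make the "fast sender / slow receiver" idea concrete, here is a minimal sketch (in Python; the names and numbers are hypothetical, not taken from any real protocol) of receiver-window style flow control: the receiver advertises how many packets it can buffer, and the sender never has more than that many outstanding at once.

```python
# Toy illustration of flow control: the receiver advertises a "window"
# (how many packets it can buffer), and the sender never has more than
# that many unacknowledged packets in flight at once.

def send_with_flow_control(packets, receiver_window):
    in_flight = []          # packets sent but not yet acknowledged
    for packet in packets:
        # If the receiver's buffer is full, wait for acknowledgements
        # before sending anything more (the "slow down" signal).
        while len(in_flight) >= receiver_window:
            acked = in_flight.pop(0)        # pretend an ACK has arrived
            print(f"ACK received for {acked}")
        in_flight.append(packet)
        print(f"Sent {packet}")

send_with_flow_control([f"packet {n}" for n in range(1, 6)], receiver_window=2)
```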

Congestion - congestion means too much traffic going through particular paths and nodes, resulting in everything in that part of the network slowing down. Typical effects include queuing delay, packet loss, and/or the blocking of new connections.

Different protocols offer different approaches to reducing congestion, including:

- One approach for relieving congestion is to not allow data to be sent at exactly the same time. Rather, the sending of pieces of a big file should be staggered.

- Congestion can also be relieved simply by re-directing network traffic to alternative routes which are not the shortest, but which are relatively uncongested (as sketched in the code after this list).

- Another approach is to flush the network at certain points of congestion, and request re-sending.
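Here is a minimal sketch (Python; the routes, load figures, and threshold are made up purely for illustration) of that re-routing idea: if the shortest route is congested, traffic is sent along a longer but less loaded one.

```python
# Toy illustration of congestion-aware routing: each route has a length
# and a current load; if the shortest route is congested, pick the
# least-loaded alternative instead.

routes = [
    {"name": "shortest path", "hops": 3, "load": 0.95},   # heavily congested
    {"name": "ring road",     "hops": 6, "load": 0.30},
    {"name": "backup link",   "hops": 8, "load": 0.10},
]

CONGESTION_THRESHOLD = 0.8

def choose_route(routes):
    preferred = min(routes, key=lambda r: r["hops"])
    if preferred["load"] < CONGESTION_THRESHOLD:
        return preferred
    # Shortest path is congested: fall back to the least-loaded route.
    return min(routes, key=lambda r: r["load"])

print(choose_route(routes)["name"])   # "backup link" in this example
```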

Analogies, as always, can help here. Congestion of automobile traffic can be relieved by staggering when workers start and finish their day. In Canada, rather than work 9 to 5, government workers work 8:30 a.m. to 4:30 p.m. to relieve traffic congestion. And traffic is also the reason ISB starts school so early and ends so early. Meantime, re-routing of truck/lorry traffic around a city on a "ring road" is often done to relieve inner city congestion. Flushing automobile traffic in the middle of the day, though, is not an option!

You'll note that it is routers (between networks) and switches (within networks) which do the actual directing of traffic, but they do so based on what is possible as dictated by the network protocol.

Deadlock - this is when traffic on the network grinds to a halt, due to levels of traffic that the server cannot handle. One way a protocol can prevent deadlock is by requiring the network server to totally restart when a certain threshold of traffic is crossed, thereby chucking everyone off. This may seem drastic, but by doing so the protocol is in fact preventing deadlock, along with the possible freezing of the server, before it would likely happen.

Note the related cyber attack strategy, the DDoS (Distributed Denial of Service) attack. This is where a huge number of requests are sent to a network server by many computers, often computers that have been infected or are controlled by the attacker. If enough requests are sent, the server can freeze.

Other than extreme levels of traffic, another way deadlock can happen across a network is when both sender and receiver are waiting for the other to reply; if both devices are precisely following the proper protocol this is not supposed to happen. Resetting is the most obvious "solution".
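A minimal sketch (Python; the timeout value and function names are hypothetical) of that reset idea: rather than wait forever for a reply that may never come, a device gives up after a timeout and resets or retransmits.

```python
import time

# Toy illustration of breaking a "both sides waiting" deadlock with a
# timeout: rather than wait forever for a reply, give up and reset.

TIMEOUT_SECONDS = 2.0

def wait_for_reply(reply_arrived, started_at):
    while not reply_arrived():
        if time.monotonic() - started_at > TIMEOUT_SECONDS:
            print("No reply in time: resetting the connection")
            return False          # caller should reset / retransmit
        time.sleep(0.1)
    return True

# Simulate the bad case: the other side never replies.
wait_for_reply(reply_arrived=lambda: False, started_at=time.monotonic())
```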

Data Integrity - Network protocols can help assure the "correctness" (i.e. integrity) of information over its entire lifespan, meaning that what is sent is what is received. Networks, via their protocols, need to have ***some*** way of assuring data integrity. (When playing "telephone" as elementary students, there was not good data integrity in the passing on of the message by whispers from student to student!)

So the protocol will dictate the use of error checking algorithms, which primarily, but not exclusively, are intended to help assure that what was sent is what was received, i.e. that there is "data integrity". Common data integrity error checking algorithms include parity checking, checksums, and weighted checksums.

(See below for more details.)

 

& Other Error Checking - Though data integrity is the main kind of error checking done, there are other aspects of the network session that need to be checked for errors as well, to assure smooth functioning of the network.

 

Specifics of Particular Network Protocols

Do note that each network protocol will specify its own particular ways of handling all of these things: deadlock, error checking, etc. So one protocol, TCP for example, may use weighted checksums to ensure data integrity, while another protocol, such as UDP, may instead use simple unweighted checksums.


 

 

For this assessment statement

--------------- YOU DO NOT HAVE TO KNOW THE FOLLOWING TECHNICAL DETAILS ---------------

of error checking, but a general appreciation of the way they work would be good to have.

 

A Bit More Detail on Error Checking

Error checking in network communication, in general, is the process of checking to see whether there was an error in transmission. Usually an error will be because of some sort of electromagnetic interference, which causes a bit to be "flipped": before it was sent, it was a 1, but upon arrival at its destination it is a 0, or vice versa.

Parity Checking

Parity checking is a system in which the number of binary 0s or the number of binary 1s in a message (a "message", in the case of network activity, being a packet) is calculated before the message is sent and again after it is received. That number should be exactly the same if no errors occurred during the transmission. If it is not the same, it means there was at least one error, and so re-transmission is requested.

In the case where a particular protocol uses Even Number of Zeros Parity Checking, the number of 0s in a packet is counted up, and if that number is odd, another 0 is added as the "parity bit" to make the total number of 0s even; if the number of 0s is already even, then it is kept even by adding a 1 as the parity bit. When the packet arrives at its destination, the number of 0s is counted again: if it is still even, then no error is assumed to have occurred; if it is odd, then an error must have happened during transmission, and re-transmission is demanded by the protocol.
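A minimal sketch (Python) of that even-number-of-zeros scheme; the helper names are made up, but the logic follows the description above: append a parity bit so the count of 0s is even, then re-count on arrival.

```python
# Toy illustration of the "even number of zeros" parity scheme described
# above: a parity bit is appended so the total count of 0s is even, and
# the receiver re-counts to detect a single flipped bit.

def add_parity_bit(bits):
    zeros = bits.count("0")
    # Append a 0 if the count of zeros is odd (making it even),
    # otherwise append a 1 (leaving the even count unchanged).
    return bits + ("0" if zeros % 2 == 1 else "1")

def check_parity(bits_with_parity):
    return bits_with_parity.count("0") % 2 == 0   # True means "no error detected"

sent = add_parity_bit("0100 0011".replace(" ", ""))   # the letter 'C'
print(check_parity(sent))                              # True: looks fine

corrupted = sent[:-2] + ("1" if sent[-2] == "0" else "0") + sent[-1]  # flip one bit
print(check_parity(corrupted))                         # False: error detected
```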

Check Sums & Weighted Check Sums

Parity checking is not so sophisticated; for one thing, what happens if two bits get flipped? An alternative approach is to use checksums. A checksum is some sort of value that is calculated using a specific formula before and after transmission. As with parity checking, that number should be exactly the same if the message was unaltered during transmission. A simple example is as follows: add up the ASCII decimal equivalents of all the characters in the message, and see if the sum is the same after transmission. In order to keep the checksum to one byte, what is actually used is the remainder of that value when divided by 255.

Check Sum Example:

ABCDE: that's 65 + 66 + 67 + 68 + 69 = 335, and 335 % 255 = 80. So what is sent is ABCDE80.

If received without an error, the recalculation is:

ABCDE: 65 + 66 + 67 + 68 + 69 = 335, and 335 % 255 = 80. (Check! It's the same; 80 == 80, so everything is assumed to have been transferred correctly.)


But in the case of it being sent with an error occurring, say the last 1 of the C is changed to a 0:
0100 0001 0100 0010 0100 0011 0100 0100 0100 0101 ----------->
0100 0001 0100 0010 0100 0010 0100 0100 0100 0101

So what is received is ABBDE80, and the check sum calculation goes:

ABBDE: 65 + 66 + 66 + 68 + 69 = 334, and 334 % 255 = 79 - ERROR, ERROR (because 79 != 80, the checksum calculated before sending). RE-TRANSMIT!
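The same calculation as a minimal Python sketch (the function name is just for illustration), reproducing the worked example above:

```python
# Toy illustration of the simple (unweighted) checksum described above:
# sum the ASCII codes of the characters, then take the remainder mod 255
# so the checksum fits in one byte.

def simple_checksum(message):
    return sum(ord(ch) for ch in message) % 255

print(simple_checksum("ABCDE"))   # 80  (335 % 255), as in the worked example
print(simple_checksum("ABBDE"))   # 79  (334 % 255): mismatch, so re-transmit
```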


Weighted Check Sum:

The problem with the check sum is that two letters could be transposed, and the sum would be the same, so for example ABC would yield 198, but so would BAC. Each letter is therefore weighted by multiplying its ASCII value by the number which represents its place in the message.

ABC: that's (1 × 65) + (2 × 66) + (3 × 67) = 398, and 398 % 255 = 143

BAC: that's (1 × 66) + (2 × 65) + (3 × 67) = 397, and 397 % 255 = 142, and 142 != 143, indicating the error in transmission.
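And the weighted version as a minimal Python sketch (again, the function name is hypothetical):

```python
# Toy illustration of the weighted checksum: each character's ASCII code
# is multiplied by its (1-based) position before summing, so transposed
# characters no longer produce the same checksum.

def weighted_checksum(message):
    return sum(i * ord(ch) for i, ch in enumerate(message, start=1)) % 255

print(weighted_checksum("ABC"))   # 143  ((1*65 + 2*66 + 3*67) % 255)
print(weighted_checksum("BAC"))   # 142  differs, so the transposition is caught
```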

 

Jose: What could cause bits to get flipped and other corruption of data in transmission? One example is magnetism or electromagnetism affecting the wire/wireless signal; another, more esoteric, example is what's termed a "single event upset", caused by cosmic particles. Seriously.

 

 

 

 

ADDITIONAL INFORMATION ONLY

JSR Notes - FORMER CURRICULUM - 6.4.4 Outline the need for protocols in packet switching:

With this assessment statement, the emphasis is not so much on the protocols themselves as the need to have them in the first place.  So this is a **protocols** assessment statement more than it is a TCP/IP one.

Look through the text on pages 344 and 345 to remind yourself what TCP and IP are, and on which levels of the OSI (Open System Interconnection) model they are found. And also remember that the OSI model is just that, a model. To fully understand all of what makes TCP/IP work within the full context of networking would take a couple of advanced degrees. And neither TCP nor IP necessarily fits perfectly into one of the seven layers of that model. And that’s not the point. The point is the ‘P’ in each of these, the protocol; the idea that however this networking is going to work, the computers on either side of the connection had better agree on the ways to proceed.

And in terms, then, of what is to be agreed with TCP, think of the postal analogy; the way in which these packets are to be transported, both sent and received, must be agreed on from both ends. To take a simple, extreme case with the analogy, there would be no sense in sending a parcel by air mail to a place with no airport. And oh yeah, don’t forget that it is “Transmission” Control Protocol; in other words, it is the rules and regulations for the actual control of the transmission of the packets through the network.

And with the IP part of it, again think of the postal analogy. This is the “inter-network” set of rules for how the packets will be addressed; the format of the “where they are coming from” and the “where they are going to”. Does the postal code need to be included, and how many numbers and letters should it have? Actually IP is pretty easy compared to all of the non-standard postal address configurations out there; it is simply four 8-bit numbers separated by dots, from 0.0.0.0 all the way up to 255.255.255.255. And remember the interesting fact that this makes for around 4 billion possible IP addresses (in IP version 4).
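If you want to sanity-check that “around 4 billion” figure, a couple of lines of Python will do it (this is just arithmetic, not part of any protocol):

```python
# Four 8-bit numbers give 256 possibilities each, i.e. 2^32 addresses in IPv4.
print(256 ** 4)   # 4294967296
print(2 ** 32)    # the same thing: roughly 4.3 billion
```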

It wouldn’t hurt to also remind you that Domain Name Servers (DNS servers) keep a list of all the IP addresses out there with their corresponding short-cut domain names. johnrayworth.info, for example, is actually the IP address 195.250.148.236.

But anyway, much of this is beyond the point of the assessment statement; in packet switching systems, such as the one used for the Internet, there are certain protocols that, once chosen, must be followed; certain ways to, among other things, transmit packets and address them. Without agreement on and adherence to these rules, these protocols, networking, in this case, would not work.