Why a routing algorithm is required

Whereas some routing protocols might provide only one metric to the routing algorithm, others might provide up to ten. As we cover each routing protocol, we will discuss which metrics it gathers for the routing algorithm to use. Note, too, that even when two protocols each send only one metric to the algorithm, the origin of that metric might differ from protocol to protocol.

One routing protocol might give an algorithm the single metric of cost, but that cost could represent something different from the cost used by another protocol with the same metric name. The algorithm in our example states that the best path is the one with the lowest metric value.

Therefore, by adding the metric numbers associated with each possible link, we see that the route from Router A to Router B to Router C has a metric value of 5, while the direct link to Router C has a value of 6. The algorithm selects the A-B-C path and sends the information along.

A metric is a number used as a standard of measurement for the links of a network. Each link is assigned a metric that can represent anything from the monetary cost of using the line to the amount of available bandwidth. Although simplistic, this example demonstrates how routing algorithms function as the true decision engine within the router.

The specific information stored within the routing table, and how the algorithm uses it, depends on the protocol. Routing algorithms fall into two broad types, distance vector and link state; let's examine the differences between them. Distance vector algorithms are similar to the simple algorithm used in Table 3. A distance vector algorithm uses metrics known as costs to help determine the best path to a destination.

The path with the lowest total cost is chosen as the best path. When a router utilizes a distance vector algorithm, different costs are gathered by each router. These costs can be completely arbitrary, administrator-assigned numbers, such as five. Although the number five might not be of any significance to an outside observer, the administrator might have assigned it to a particular link to represent the reliability of that link. Costs can also be dynamically gathered values, such as the amount of delay experienced by routers when sending packets over one link as opposed to another.

All the costs, assigned and otherwise, are compiled and placed within the router's routing table. The algorithm then uses these costs to calculate a best path for any given network scenario. The formula for this is as follows:

M(i,k) = min over j of [ M(i,j) + M(j,k) ]

This formula states that the best path between two networks, M(i,k), is found by taking the minimum, over every intermediate point j, of the summed costs of the paths between the network points. Let's look again at the routing information in Table 3. Plugging this information into the formula, we see that the route from A to B to C is still the best path.
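The computation can be sketched in a few lines of Python; the individual link costs below are assumptions chosen so that the totals match the example (the A-B-C route sums to 5, the direct A-C link costs 6):

```python
# Illustrative link costs (assumed values consistent with the example:
# A->B->C totals 5, while the direct A->C link costs 6).
costs = {
    ("A", "B"): 2,
    ("B", "C"): 3,
    ("A", "C"): 6,
}

def path_cost(path):
    """Sum the metric of each link along a path."""
    return sum(costs[(a, b)] for a, b in zip(path, path[1:]))

# M(i,k) = min over candidate paths of the summed link costs.
candidates = [("A", "C"), ("A", "B", "C")]
best = min(candidates, key=path_cost)

print(best, path_cost(best))  # ('A', 'B', 'C') 5
```

The algorithm simply compares the summed metrics of the candidate routes and keeps the lowest.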

This example illustrates how distance vector algorithms use the information passed to them to make informed routing decisions.

Do not spend too much time memorizing the algorithm, as you will rarely see it in the real world. The algorithms used by routers and routing protocols are not configurable, nor can they be modified.

Another major difference between distance vector algorithms and the link state protocols covered in the next section is that when distance vector routing protocols update each other, all or part of the routing table (depending on the type of update) is sent from one router to another. Through this process, each router is exposed to the information contained within the other routers' tables, giving each router a more complete view of the networking environment and enabling it to make better routing decisions.

The process of router updates is described in more detail in the next section. Popular protocols such as OSPF are examples of protocols that use the link state routing algorithm. Link-state algorithms work within the same basic framework as distance vector algorithms in that both favor the path with the lowest cost. However, link-state protocols work in a somewhat more localized manner. Whereas a router running a distance vector algorithm computes the end-to-end path for any given packet, a link-state protocol computes that path as it relates to the most immediate link.

That is, where a distance vector algorithm will compute the lowest metric between Network A and Network C, a link-state protocol will compute it as two distinct paths, A to B and B to C. This process is best for larger environments that might change fairly often.

Link-state algorithms enable routers to focus on their own links and interfaces. Any one router on a network has direct knowledge only of the routers and networks that are directly connected to it, that is, the state of its own links. In larger environments, this means that the router uses less processing power to compute complicated paths. The router simply needs to know which of its direct interfaces will get the information where it needs to go the quickest.

The next router in line will repeat the process until the information reaches its destination. Another advantage to such localized routing processes is that protocols can maintain smaller routing tables. Because a link-state protocol only maintains routing information for its direct interfaces, the routing table contains much less information than that of a distance vector protocol that might have information for multiple routers.
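This hop-by-hop behavior can be modeled with a small sketch; the topology and next-hop entries below are hypothetical, chosen to mirror the A-B-C example:

```python
# Simplified hop-by-hop forwarding model (hypothetical topology).
# Each router keeps only a next-hop entry per destination rather than
# a full end-to-end path, so its table stays small; the next router
# in line repeats the lookup until the packet arrives.
next_hop = {
    "A": {"C": "B"},   # A reaches C via its direct link to B
    "B": {"C": "C"},   # B is directly linked to C
}

def forward(source, destination):
    """Follow next-hop entries until the packet reaches its destination."""
    hops, node = [source], source
    while node != destination:
        node = next_hop[node][destination]
        hops.append(node)
    return hops

print(forward("A", "C"))  # ['A', 'B', 'C']
```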

Like distance vector protocols, link-state protocols require updates to share information with each other. When a particular link becomes unavailable (changes state), the router sends an update through the environment, alerting all the routers with which it is directly linked.

Link-state and distance vector protocols handle certain routing situations quite differently. As we discuss each protocol in the remaining lessons of this book, we'll look at how these protocols handle particular routing situations.

The neighbor discovery phase is initiated by the source node when it evaluates its local node density d_l to determine the next best node. If d_l is higher than the threshold density d_th, then less communication overhead will occur during the neighbor discovery phase.

In this way, the neighbor table is updated by every node. Using the information gathered from neighboring nodes, the source node forwards the packet to the node selected as the next best node according to the WDC value computed by Eq (5).

The entire process is repeated until the packet reaches the destination. Conversely, if d_l is lower than d_th, then high communication overheads are encountered during the neighbor discovery phase. In this way, the communication overhead caused by frequent HELLO messages is eliminated in cases of network congestion.
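A minimal sketch of this density check, assuming (consistent with the results discussed later) that a dense region, where d_l meets or exceeds d_th, triggers the next node self-selection method while a sparse one falls back to periodic beacons; the numeric densities are invented for illustration:

```python
def discovery_mode(d_l, d_th):
    """Pick the neighbor discovery mechanism from the node densities.

    Assumption: in a dense region (d_l >= d_th) the receiver-based
    next node self-selection method is used, avoiding periodic HELLO
    beacons; otherwise the node falls back to periodic beaconing.
    """
    return "self-selection" if d_l >= d_th else "periodic-beacons"

# Assumed example densities (nodes per unit area).
print(discovery_mode(d_l=12, d_th=8))  # self-selection
print(discovery_mode(d_l=3, d_th=8))   # periodic-beacons
```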

The proposed method uses the concept of a receiver-based relay selection technique in its design. Existing implementations of this technique use the distance between the next node and the destination node as the measure for calculating waiting time.

The proposed method, however, uses two additional metrics, namely, relative velocity and number of neighbors. Furthermore, the next node self-selection method in the proposed algorithm piggybacks data on the IEEE RTS/CTS frames. The waiting time determines whether a node can serve well as a forwarding candidate node.

Therefore, the smaller the waiting time, the better the node serves as a candidate to forward the packet. The waiting time is calculated by Eq (6), where T is a time parameter set by the vehicular network; this parameter regulates the relation between the waiting time and the WDC value of the receiver. After receiving the CTS, the source forwards the packet to the next best candidate node. The next node then sends an acknowledgment of the packet's receipt.
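The text does not reproduce Eq (6) itself, so the sketch below assumes one plausible form: a waiting time that shrinks linearly as the receiver's WDC value grows, scaled by the network-set parameter T. The WDC values and the value of T are invented for illustration:

```python
# Sketch of receiver-based candidate selection. The linear inverse
# relation below is an ASSUMED stand-in for Eq (6): it matches the
# stated behavior (T scales the relation, and a higher WDC yields a
# shorter wait, so that node answers the RTS first).
T = 0.1  # network-defined time parameter (assumed value, in seconds)

def waiting_time(wdc):
    """Higher WDC -> shorter wait -> better forwarding candidate."""
    return T * (1.0 - wdc)

# Assumed WDC values for three candidate receivers.
candidates = {"N1": 0.40, "N2": 0.90, "N3": 0.65}
best = min(candidates, key=lambda n: waiting_time(candidates[n]))
print(best)  # N2
```

The node whose timer expires first sends the CTS, which is exactly the role N2 plays in the Fig 8 walkthrough.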

Algorithm 2 illustrates the neighbor discovery phase. The next node self-selection method is illustrated in Fig 8, where (A) is the RTS frame and (B) is the CTS frame. Source S searches for the next node to forward the packet toward destination D. The source specifies its own position, the position of the destination node, its own velocity, and the transmission time of the packet, and broadcasts this in an RTS frame. We assume that the shortest waiting time, T_w2, is that of node N2 and that this node is the first to send a CTS frame.

Moreover, the neighbors of N2 will know that a transmission is ongoing and that they must not send any frame to N2 until it completes the transmission.

This transmission is overheard by the neighbors of S, and they learn that they must not send any frame to S until S receives an acknowledgment from N2.

Algorithm 2 (neighbor discovery phase)
Input: the local node density d_l of node i, the threshold node density d_th
Begin Algorithm 2
1. Sn checks its d_l;
2. Run from line 5 to line 18 of Algorithm 1;
3. Each node calculates its T_waiting;
4. Set the timer to T_waiting;
5. If the timer is the minimum, then
6. Cancel the timer on all other nodes;
7. Sn sends the packet to the candidate node;
8. If T_packet is completed, then
9. The candidate node sends an ACK to Sn;
10. Defer transmissions, if any, for T_ACK;
11. Run from line 15 to line 18 of Algorithm 1;
12. Cancel the timer;
13. Go to line 6;
End Algorithm 2

Table 2 shows that the two methods have the same objective, but they differ in certain aspects. We then focus on communication overhead. Table 3 summarizes the key parameters in the simulation.

Delivery ratio measures the ratio of total data packets successfully delivered to the destination node to the data packets generated by the source node. Packets are dropped if the TTL hits zero or if a link fails during delivery. Fig 9A and 9B illustrate the delivery ratio with varying transmission rates and numbers of nodes, respectively. By contrast, ARP-QD performs intersection selection only when the source approaches an intersection, leading to a high number of dropped packets.
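The metric can be computed directly from a packet log; the records below are hypothetical, and the drop test mirrors the two conditions just described (TTL reaching zero or a link failure):

```python
# Delivery ratio: delivered packets / generated packets.
# Hypothetical packet log: each record notes whether the packet's TTL
# expired or a link failed before it reached the destination.
packets = [
    {"ttl": 3, "link_failed": False},   # delivered
    {"ttl": 0, "link_failed": False},   # dropped: TTL hit zero
    {"ttl": 5, "link_failed": True},    # dropped: link failure
    {"ttl": 2, "link_failed": False},   # delivered
]

delivered = sum(1 for p in packets if p["ttl"] > 0 and not p["link_failed"])
delivery_ratio = delivered / len(packets)
print(delivery_ratio)  # 0.5
```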

As a result, a high number of packets is needed to reach the destination. Fig 9: (A) delivery ratio vs. transmission rate; (B) delivery ratio vs. number of nodes. This result is because the local node density in this period is higher than the threshold density.

In other words, the next node self-selection method is used in the neighbor discovery phase. Therefore, the communication overhead is reduced in this period, thereby increasing the delivery ratio. At higher transmission rates, the delivery ratio decreases. This finding is attributed to a decrease in the average throughput of the network, because the local node density in this period is lower than the threshold density.

Such a condition means that periodic beacons are used in the neighbor discovery phase. Therefore, the communication overhead increases in this period, thereby decreasing the delivery ratio. The reason is that, before this point, the threshold density is less than the local node density.

Hence, during the neighbor discovery phase, less communication overhead is encountered because of the next node self-selection method. In turn, the delivery ratio increases.

This finding is explained by the threshold density being higher than the local node density. Hence, the communication overhead increases because periodic beacons are used in the neighbor discovery phase.

Given that the neighbor table needs to be updated for all neighbors, the delivery ratio decreases. Delivery delay is the difference between the time a packet is received at the destination and the time the packet is sent by the source. Fig 10A and 10B illustrate the delivery delay with varying transmission rates and numbers of nodes, respectively. The delay increases because of the increase in the number of hops and because of link failures.

The algorithm also checks for the next intersection at each node before forwarding the packet. Moreover, ESRA-MD always selects the route with a high number of nodes to ensure network connectivity, especially when more than one route has the same path length. The algorithm avoids forwarding a packet away from the destination.

On the contrary, ARP-QD performs intersection selection only when the source approaches an intersection, leading to a high number of dropped packets and delays.

Fig 10: (A) delivery delay vs. transmission rate; (B) delivery delay vs. number of nodes. Fig 10A shows that an increase in the transmission rate increases the delivery delay. The reason is that the source in ESRA-MD cannot find a backup neighbor in case of failure because of the high transmission rate.

Hence, the carry-and-forward procedure is prolonged and the delivery delay increases. Moreover, the delivery delay increases slightly to 0. However, the delivery delay increases significantly to 0. Fig 10B shows that the delivery delay decreases as the number of nodes increases. The delivery delay decreases dramatically to 0. The next node self-selection method is used in this period to select the best candidate node. Therefore, the packet reaches the destination with minimum delay.

Moreover, as the number of nodes grows, the delivery delay continuously decreases to 0. Therefore, the packet reaches the destination with low delay and a small number of hops. In Fig 11B and 11D, the number of nodes is fixed while the transmission rates are denoted by five curves.

A routing protocol uses a routing algorithm to provide the best path from the source to the destination, where the best path is the one with the least cost. Routing is the process of forwarding packets from source to destination, but the best route for those packets is determined by the routing algorithm.

Classification of routing algorithms

Routing algorithms are divided into two categories: adaptive and non-adaptive.

Adaptive routing algorithm: An adaptive routing algorithm is also known as a dynamic routing algorithm.

This algorithm makes routing decisions based on the topology and network traffic. The main parameters related to this algorithm are hop count, distance, and estimated transit time. An adaptive routing algorithm can be classified into three parts:

Centralized algorithm: Also known as a global routing algorithm, it computes the least-cost path between source and destination using complete, global knowledge of the network.

This algorithm takes the connectivity between the nodes and the link costs as input, and this information is obtained before any calculation is actually performed. The link state algorithm is referred to as a centralized algorithm because it is aware of the cost of each link in the network.
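A centralized least-cost computation of this kind can be sketched with Dijkstra's algorithm; the three-node link-cost map below is an assumed example:

```python
import heapq

# Assumed global link-cost map: the centralized algorithm is given
# complete knowledge of every link in the network up front.
links = {
    "A": {"B": 2, "C": 6},
    "B": {"A": 2, "C": 3},
    "C": {"A": 6, "B": 3},
}

def least_cost(source):
    """Dijkstra's algorithm: least-cost distance from source to every node."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry, already found a cheaper path
        for nbr, cost in links[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

print(least_cost("A"))  # {'A': 0, 'B': 2, 'C': 5}
```

Note that the two-hop route to C (cost 5) beats the direct link (cost 6), matching the earlier example.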

Isolation algorithm: An algorithm that obtains routing information using local information rather than gathering information from other nodes.

Distributed algorithm: Also known as a decentralized algorithm, it computes the least-cost path between source and destination in an iterative and distributed manner.

In the decentralized algorithm, no node has knowledge of the costs of all the network links. In the beginning, a node contains information only about its own directly attached links; through an iterative process of calculation, it computes the least-cost path to the destination. A distance vector algorithm is a decentralized algorithm: it never knows the complete path from source to destination; instead, it knows the direction in which to forward the packet along the least-cost path.
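This iterative, decentralized computation can be sketched as synchronous distance vector (Bellman-Ford) rounds, in which every node starts knowing only its directly attached links; the topology and costs are assumed for illustration:

```python
# Assumed topology: each node initially knows only its direct links.
links = {
    "A": {"B": 2, "C": 6},
    "B": {"A": 2, "C": 3},
    "C": {"A": 6, "B": 3},
}
INF = float("inf")
nodes = list(links)

# Each node's distance estimate to every node, seeded with direct links only.
dist = {n: {m: (0 if m == n else links[n].get(m, INF)) for m in nodes}
        for n in nodes}

# Iterate: every node considers reaching each destination via a neighbor.
for _ in range(len(nodes) - 1):
    for n in nodes:
        for nbr, cost in links[n].items():
            for dest in nodes:
                # Distance vector update: go to nbr, then follow nbr's estimate.
                dist[n][dest] = min(dist[n][dest], cost + dist[nbr][dest])

print(dist["A"]["C"])  # 5
```

After the rounds converge, A's estimate to C has dropped from the direct cost of 6 to 5 via B, even though A never learns the full topology.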

Non-adaptive routing algorithm: A non-adaptive routing algorithm is also known as a static routing algorithm; it does not change its routing decisions when the network topology or traffic load changes. Adaptive algorithms, by contrast, change their routing decisions whenever the network topology or traffic load changes. Also known as dynamic routing, they make use of dynamic information such as the current topology, load, and delay, with optimization parameters such as distance, number of hops, and estimated transit time.



