PIM-SM, part 1
I knew the multicast section would be tricky before I even started, but not this hard 🙂
Last time I wrote about PIM-DM, its flooding behavior, and how it makes sure data flows from the source to the receivers. This time I will cover PIM-SM, along with some further information on IGMP.
First off, let's define which protocol handles what:
- IGMP (Internet Group Management Protocol).
- PIM-SM (Protocol Independent Multicast – Sparse Mode).
IGMP is a protocol that allows end hosts to communicate with their directly connected routers. That means that when a receiver wants to “tune” into a multicast stream, it tells the router(s) about this through IGMP.
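As a side note, if you want to lab this up and don't have a real host handy, a Cisco router can act as the receiver itself. A minimal sketch (the interface name is just an example, adjust it to your own lab):

! On R4, make the receiver-facing interface report membership of 233.0.0.1
interface FastEthernet0/0
 ip igmp join-group 233.0.0.1
!
! Then verify which groups have been reported on the LAN
R4# show ip igmp groups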
Querier:
A querier is elected for each LAN. The job of the querier is to keep consistent state about which group members are on that LAN. There are two versions of IGMP currently in use, IGMPv1 and IGMPv2 (IGMPv2 being the dominant one today). The reason I bring up the versions is that the querier is elected differently depending on which version is in use. IGMPv1 relies on the multicast routing protocol (such as PIM) to elect a DR (more on this later) and uses that elected router as the IGMP querier. IGMPv2 has its own election process, where the router with the lowest IP address on the LAN becomes the querier.
So how does the querier perform its job? Every 60 seconds it sends out an IGMP general query, which basically asks the hosts: “Which groups (if any) do you need data from?”. The hosts then reply with IGMP reports. Each report is sent with the destination address set to the multicast group the host wishes to join/receive data from.
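If you want to see for yourself who the querier is on a LAN and how often it queries, the following should do it (a sketch; the interface name is just an example):

! Shows the IGMP version, the elected querier and the query interval for the LAN
R4# show ip igmp interface FastEthernet0/0
!
! The defaults can be changed per interface if you need to
interface FastEthernet0/0
 ip igmp version 2
 ip igmp query-interval 60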
PIM-SM:
So now that we know how hosts inform routers about what they want, let's take a look at how a multicast routing protocol works, specifically PIM-SM. PIM-SM is a lot more complex than PIM-DM. The reason is the difference in the basic assumption about who needs to receive data. If you remember from the previous post, PIM-DM assumes there are receivers on every subnet in the internetwork. PIM-SM starts out with the opposite assumption: there are only a few receivers scattered around the internetwork. The effect of this is that we can't just flood the entire network based on a push model; we more or less need a pull model, where only the parts of the network that need the multicast traffic will receive it. So how do you accomplish this?
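Before answering that, here is roughly what the basic PIM-SM configuration looks like on a Cisco router, just so you can picture where all of this lives (a sketch; the interface names are examples):

! Enable multicast routing globally
ip multicast-routing
!
! Run PIM sparse mode on every interface that should take part in the pull model
interface Serial1/0
 ip pim sparse-mode
interface FastEthernet0/0
 ip pim sparse-mode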
As it turns out, we need to select a central point in our internetwork, a meeting point where sources and receivers get connected to each other. This central point is called the Rendezvous Point. (How this Rendezvous Point is selected is a topic of its own which I will cover in later posts.) Remember that we are still using the concept of trees, but where PIM-DM treats the source as the “root” of the tree, PIM-SM uses the Rendezvous Point (hereafter RP) as the root.
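How the RP is selected is, as I said, a later topic, but the simplest option (and probably what I would use in a small lab like this) is to point every router at the RP statically. A sketch, using 192.168.0.1, which is the RP address you will see in the outputs further down:

! Configured on every router in the domain, including the RP itself (R3)
ip pim rp-address 192.168.0.1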
Once all routers know where this RP is located, they go to the RP instead of to the source when they want to receive data from a multicast group. The “stream” of data then flows from the RP down to the receiving routers and, in the end, to the receiving hosts.
Before I go into further detail, it is important to understand the mechanisms that are in place to perform these operations, hence some definitions below.
Let's define a few things regarding the usage of (*,G) and (S,G) entries:
- (*,G) entries in PIM-SM always point towards the Rendezvous Point.
- (S,G) entries will point in the direction of the source (not the RP).
- RPF interface, Reverse Path Forwarding interface. The interface that points either toward the RP (in the case of (*,G) entries) or toward the source (in the case of (S,G) entries). There is a quick way to check this, shown right after this list.
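If you ever wonder which interface a router considers its RPF interface toward a given address (the RP or a source), there is a handy command for that. A quick sketch, using the RP address from this example:

! Shows the RPF interface and RPF neighbor toward 192.168.0.1
R4# show ip rpf 192.168.0.1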
Let's use this topology:
Assume that the source and the R1 router are connected on one LAN, and the receiver and R4 are on a separate LAN. Now the goal is for the receiver to receive the data the source is sending. Also notice that R3 has been selected as the RP for this network.
Step 1 is the receiver sending out an IGMP report stating that it wishes to receive data from group 233.0.0.1. R4, being the only router on the LAN, picks this up and figures out: “Hey, I'm the DR, I must send this join up towards the RP.” Step 2: it does this by sending a (*,G) join, specifically a (*,233.0.0.1) join, to the RP. This entry has an outgoing interface list containing the LAN on which the receiver is located and an incoming interface toward the RP (the RPF interface for the (*,G) entry). When this join is received by the RP (R3) in step 3, PIM-SM rules say it too must create a (*, 233.0.0.1) entry. The entry will not have an incoming interface, because R3 IS the RP. It will however have an outgoing interface list, pointing directly toward R4.
This can be visualized as such:
At this point we have created a shared tree from the RP to R4, which in turn will deliver the data to the receiver. (Note that right now we don't have any data flowing from the source at all.)
Let's look at the states in the multicast routing tables on R4 and R3:
R4:
(*, 233.0.0.1), 01:29:45/00:02:06, RP 192.168.0.1, flags: SJC
  Incoming interface: Serial1/0, RPF nbr 192.168.0.1
  Outgoing interface list:
    FastEthernet0/0, Forward/Sparse, 01:29:45/00:02:06
R3:
(*, 233.0.0.1), 01:31:19/00:03:10, RP 192.168.0.1, flags: S
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Serial0/0, Forward/Sparse, 01:30:51/00:03:10
Notice that on R4, we have an incoming interface pointing towards the RP, with the next-hop neighbor being 192.168.0.1, which in this case is the RP itself but could just as well be the next-hop router toward the RP. On the RP itself, however, the incoming interface is Null and the RPF neighbor is 0.0.0.0 (i.e. there is none), which indicates that this router is the RP. Also note the flags: S means sparse mode, J means the router will join the shortest-path tree, and C means there is a directly connected member (our receiver) behind R4.
So far so good: we have created a shared tree and our routers are behaving, but what about the actual data? Well, PIM-SM trees are unidirectional, which means traffic cannot go “up and down” a tree; everything must flow downwards from the root. In PIM-DM this was not an issue, since flooding downwards from the source was all it did. So how do we get the data from the source to the RP itself? The answer is the Register message. When the source begins sending data, the DR on its LAN notices that a source has data to send. It then looks up which RP to use, and in our case finds that it must use R3. It then sends a unicast Register message to R3, with the multicast data encapsulated inside this packet. When R3 receives this, it knows that a certain source has data to send for a certain group for which it has a shared tree. The RP then does something pretty slick: it sends (S,G) joins toward the source, in effect building a source tree (SPT) from the source (or rather, the router closest to the source) down toward itself.
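If you want to watch this registration dance happen in a lab, these commands should be enough (a sketch; I am leaving the actual output out of this post):

! On R1 (the DR on the source's LAN): confirm which RP it will register to
R1# show ip pim rp mapping
!
! On R3 (the RP): watch the Register / Register-Stop exchange and the (S,G) state being built
R3# debug ip pim
R3# show ip mroute 233.0.0.1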
When this SPT has been created, the RP sends a Register-Stop message to tell the router closest to the source (R1) to stop sending multicast traffic encapsulated in unicast packets (encapsulating everything in unicast would defeat the whole point of multicast), because the RP now receives the flow of data natively through the SPT.
Specifically, the RP creates an (S,G) entry and sends an (S,G) join toward R2, which also creates an (S,G) entry (along with the mandatory (*,G) entry) and sends a join toward R1. Now, when R1 started to receive traffic from the source, it automatically created an (S,G) entry with an incoming interface toward the source, but initially with no outgoing interfaces (the traffic was only being sent to the RP inside Register messages). Upon receipt of the (S,G) join from R2, R1 populates the outgoing interface list with the interface toward R2.
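To make this a bit more concrete, R1's (S,G) entry would end up looking roughly like the sketch below. This is illustrative only: the source address 10.0.0.10 and the interface names are made up, since I haven't listed them in the topology, but the shape of the entry is what matters:

(10.0.0.10, 233.0.0.1), 00:01:12/00:03:28, flags: FT
  Incoming interface: FastEthernet0/0, RPF nbr 0.0.0.0
  Outgoing interface list:
    Serial0/0, Forward/Sparse, 00:01:12/00:03:28

The F flag means R1 is registering this source to the RP, and T means traffic is flowing down the source tree.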
Our status right now is that we have successfully had a receiver join the multicast group 233.0.0.1. We have also managed to get our source registered with the RP through the Register message. On top of that, the RP has set up an SPT (source tree) toward the source, creating a way for data to flow from the source to the RP. So we end up with a scenario where data flows from the source down to the RP through an SPT, and then further down to R4 via the shared tree, and our receiver gets its multicast stream! Great, no?
That's the closing statement of part 1 of this PIM-SM post. Next time I will write a bit about the optimization of PIM-SM. Stay tuned!