## What is the MANRS Measurement Framework?

To measure MANRS readiness for a particular network, a set of metrics has been proposed, one for each Action. For example, to measure the degree to which Filtering (Action 1) is implemented, we measure the number and duration of routing incidents in which the network was implicated, either as a culprit or as an accomplice. That produces a number, an indication of the degree of compliance, or a MANRS readiness index (MR-index) for Action 1 over a specified period of time.

The measurements are passive, which means that they do not require cooperation from the measured network. That allows us to calculate MR-indices not only for the members of the MANRS initiative, but for all networks on the Internet (at the moment, more than 60,000).

### Calculation of Metrics and Data sources

#### Consolidation of Multiple Events

In the current model, only routing incidents related to the network in question and adjacent networks are taken into account.

Non-action is penalized: the longer an incident lasts, the more heavily it is weighted. The following coefficients are used:

• < 30 minutes: 0.5
• < 24 hours: 1
• > 24 hours: +1 for each subsequent 24-hour period

Also, multiple routing changes may be part of the same configuration mistake. For this reason, events related to the same metric with overlapping time spans are merged into a single incident. This is shown in Figure 1.

Figure 1. Routing changes, or events (in pink), may be part of the same incident (violet). In this case an operator experienced three incidents with a duration of 29 minutes, 13 hours, and 25 hours respectively. The resulting metric will be M=0.5 + 1 + 2 = 3.5
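As an illustrative sketch (the function name is ours, not from the MANRS tooling), the duration coefficients and the worked example from Figure 1 can be computed as follows, assuming a partial subsequent 24-hour period also counts as a full one (consistent with the 25-hour incident scoring 2):

```python
import math

def incident_weight(duration_hours):
    """Weight of a single incident based on its duration, using the
    coefficients from the text: < 30 min -> 0.5, < 24 h -> 1, and
    +1 for each subsequent (possibly partial) 24-hour period."""
    if duration_hours < 0.5:
        return 0.5
    if duration_hours < 24:
        return 1
    return 1 + math.ceil((duration_hours - 24) / 24)

# Worked example from Figure 1: incidents of 29 minutes, 13 hours, 25 hours
durations = [29 / 60, 13, 25]
M = sum(incident_weight(d) for d in durations)  # 0.5 + 1 + 2 = 3.5
```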

Based on this approach, for each of the MANRS Actions we can devise a composite MR-index and define thresholds for acceptable, tolerable, and unacceptable values, informing the members of their security posture related to MANRS.

A summary table of the metrics is provided below. A lower value indicates a higher grade of MANRS readiness.

### Metric Normalization and MANRS Readiness Scores (MRS)

Metrics M1, M1C, M2, M2C, M3, M3C, M4, M4C and M5 do not have an upper limit (e.g. there may be arbitrarily many incidents), and it is therefore necessary to normalize these values. We use the following function to normalize these metrics and calculate the MANRS readiness score (MRS) of a metric M: M_SCORE = MRS(M) = e^(−αM^n).

The function depends on two parameters, α and n, both set by default to 0.5. We offer a predefined function, which can be called with zero to two interpolation points. This function calculates the parameters α and n according to the following logic:

• If no interpolation points are given, the default values are used.
• If one interpolation point (x1, y1) is given, α is calculated such that f(x1) = y1. Restrictions: x1 > 0, 0 < y1 < 1.
• If two interpolation points (x1, y1), (x2, y2) are given, α and n are calculated, if possible, such that f(x1) = y1 and f(x2) = y2. Same restrictions as above; additionally x1 ≠ x2 and y1 ≠ y2.
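A minimal sketch of this interpolation logic (the function name `make_mrs` is ours; the document does not name the predefined function). With two points, taking logarithms twice linearizes the fit: ln(−ln y) = ln α + n · ln x.

```python
import math

def make_mrs(points=()):
    """Return MRS(M) = exp(-alpha * M**n), fitting alpha and n to zero,
    one, or two interpolation points (x, y) with x > 0 and 0 < y < 1."""
    alpha, n = 0.5, 0.5  # defaults
    if len(points) == 1:
        (x1, y1), = points
        alpha = -math.log(y1) / x1 ** n  # solve f(x1) = y1 for alpha
    elif len(points) == 2:
        (x1, y1), (x2, y2) = points
        # ln(-ln y) = ln(alpha) + n * ln(x) is linear in ln x
        n = (math.log(-math.log(y1)) - math.log(-math.log(y2))) / (
            math.log(x1) - math.log(x2))
        alpha = -math.log(y1) / x1 ** n
    return lambda m: math.exp(-alpha * m ** n)

# Example with the two interpolation points used for filtering below
mrs_filtering_norm = make_mrs([(1.5, 0.8), (5, 0.6)])
```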

MRS(M) = e^(−αM^n)

Figure 2. Normalizing an arbitrary value of a metric into the 0–1 range. Blue, amber, and red bars depict the level of MANRS readiness (Ready, Aspiring, and Lagging).

For metrics M7IRR, M7RPKI, M7RPKIN and M8, the score is calculated as 1 − M. For example, for M7IRR = 0.9 (90% of the prefixes are not registered), M7IRR_SCORE = 1 − 0.9 = 0.1 (10% of all prefixes are registered).
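Sketched in code (the function name is ours, for illustration):

```python
def complement_score(m):
    """Score for the percentage-style metrics (M7IRR, M7RPKI, M7RPKIN,
    M8): the complement 1 - M of the metric value."""
    return 1 - m

# Example from the text: M7IRR = 0.9 -> score 0.1
m7irr_score = complement_score(0.9)
```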

### Current configuration

The current configuration uses a function that calculates the complement of a given percentage value, together with the proposed function with interpolation. The interpolation points were chosen as described in the following paragraphs. For normalization with the proposed function, the boundary for “normalized ready” was set to 80% (0.8) and for “normalized aspiring” to 60% (0.6).

#### Filtering

For Filtering, the MANRS readiness score is defined as the average of the corresponding scores for metrics M1, M1C, M2, M2C, M3, M3C, M4 and M4C.

MRS_Filtering = (M1_SCORE + M1C_SCORE + M2_SCORE + M2C_SCORE + M3_SCORE + M3C_SCORE + M4_SCORE + M4C_SCORE) / 8

The absolute values define the readiness as follows:

• < 1.5: Ready
• 1.5–5: Aspiring
• ≥ 5: Lagging

The interpolation points are chosen in the way described above; that is, the two interpolation points were chosen to be [1.5, 0.8] and [5, 0.6].
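Putting the filtering pieces together, a minimal sketch (the function names are ours; the thresholds are the normalized boundaries 0.8 and 0.6 from the current configuration):

```python
def mrs_filtering(scores):
    """Average of the eight filtering metric scores
    (M1, M1C, M2, M2C, M3, M3C, M4, M4C), each already in [0, 1]."""
    assert len(scores) == 8
    return sum(scores) / 8

def readiness(normalized_score):
    """Map a normalized score (higher is better) to a readiness level
    using the boundaries 0.8 (Ready) and 0.6 (Aspiring)."""
    if normalized_score >= 0.8:
        return "Ready"
    if normalized_score >= 0.6:
        return "Aspiring"
    return "Lagging"
```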

#### Anti-spoofing

MRS_Anti-Spoofing=M5_SCORE

The idea is the same as for Filtering; only the boundaries are different:

• < 0.5: Ready
• 0.5–1: Aspiring
• ≥ 1: Lagging

As the proposed function already runs through [0, 1] by construction, only one interpolation point needs to be defined; we chose [0.5, 0.6].

#### Coordination

MRS_Coordination=M8_SCORE

Since Coordination is delivered as a 0/1 value, it is reasonable to treat it as a percentage. Here, 0 represents the fact that contact information is present, and 1 that no contact information is present. For Coordination, the absolute values define the readiness as follows:

• 0: Ready
• 1: Lagging

We mapped the boundaries for the normalized values accordingly:

• 1: Ready
• 0: Lagging

#### Routing Information (IRR, RPKI)

The same mapping and concept as for Coordination apply, as the values delivered are already percentages.

MRS_Global_Validation_IRR=M7IRR_SCORE

Since for RPKI we need to take into account not only properly registered prefixes, but also those that are invalidated by a ROA (suggesting that the ROA is incorrect), the calculation is slightly different:

MRS_Global_Validation_RPKI = max(0, M7RPKI_SCORE − 10 × M7RPKIN_SCORE)
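As a sketch (the function name is ours), the RPKI score with its tenfold penalty for ROA-invalidated prefixes:

```python
def mrs_global_validation_rpki(m7rpki_score, m7rpkin_score):
    """RPKI readiness score: the registered-prefix score, penalized
    tenfold for prefixes invalidated by a ROA, floored at zero."""
    return max(0, m7rpki_score - 10 * m7rpkin_score)
```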

For routing information, the absolute values define the readiness as follows: