1 Independent Submission L. Song, Ed.
2 Request for Comments: 8483 D. Liu
3 Category: Informational Beijing Internet Institute
4 ISSN: 2070-1721 P. Vixie
5 TISF
6 A. Kato
7 Keio/WIDE
8 S. Kerr
9 October 2018
10
11
12 Yeti DNS Testbed
13
14 Abstract
15
16 Yeti DNS is an experimental, non-production root server testbed that
17 provides an environment where technical and operational experiments
18 can safely be performed without risk to production root server
19 infrastructure. This document aims solely to document the technical
20 and operational experience of deploying a system that is similar to
21 but different from the Root Server system (on which the Internet's
22 Domain Name System is designed and built).
23
24 Status of This Memo
25
26 This document is not an Internet Standards Track specification; it is
27 published for informational purposes.
28
29 This is a contribution to the RFC Series, independently of any other
30 RFC stream. The RFC Editor has chosen to publish this document at
31 its discretion and makes no statement about its value for
32 implementation or deployment. Documents approved for publication by
33 the RFC Editor are not candidates for any level of Internet Standard;
34 see Section 2 of RFC 7841.
35
36 Information about the current status of this document, any errata,
37 and how to provide feedback on it may be obtained at
38 https://www.rfc-editor.org/info/rfc8483.
39
40
41
42
43
44
45
46
47
48
49
50
51
52 Song, et al. Informational [Page 1]
53 RFC 8483 Yeti DNS Testbed October 2018
54
55
56 Copyright Notice
57
58 Copyright (c) 2018 IETF Trust and the persons identified as the
59 document authors. All rights reserved.
60
61 This document is subject to BCP 78 and the IETF Trust's Legal
62 Provisions Relating to IETF Documents
63 (https://trustee.ietf.org/license-info) in effect on the date of
64 publication of this document. Please review these documents
65 carefully, as they describe your rights and restrictions with respect
66 to this document.
67
68 Table of Contents
69
70 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3
71 2. Requirements Notation and Conventions . . . . . . . . . . . . 5
72 3. Areas of Study . . . . . . . . . . . . . . . . . . . . . . . 5
73 3.1. Implementation of a Testbed like the Root Server System . 5
74 3.2. Yeti-Root Zone Distribution . . . . . . . . . . . . . . . 5
75 3.3. Yeti-Root Server Names and Addressing . . . . . . . . . . 5
76 3.4. IPv6-Only Yeti-Root Servers . . . . . . . . . . . . . . . 6
77 3.5. DNSSEC in the Yeti-Root Zone . . . . . . . . . . . . . . 6
78 4. Yeti DNS Testbed Infrastructure . . . . . . . . . . . . . . . 7
79 4.1. Root Zone Retrieval . . . . . . . . . . . . . . . . . . . 8
80 4.2. Transformation of Root Zone to Yeti-Root Zone . . . . . . 9
81 4.2.1. ZSK and KSK Key Sets Shared between DMs . . . . . . . 10
82 4.2.2. Unique ZSK per DM; No Shared KSK . . . . . . . . . . 10
83 4.2.3. Preserving Root Zone NSEC Chain and ZSK RRSIGs . . . 11
84 4.3. Yeti-Root Zone Distribution . . . . . . . . . . . . . . . 12
85 4.4. Synchronization of Service Metadata . . . . . . . . . . . 12
86 4.5. Yeti-Root Server Naming Scheme . . . . . . . . . . . . . 13
87 4.6. Yeti-Root Servers . . . . . . . . . . . . . . . . . . . . 14
88 4.7. Experimental Traffic . . . . . . . . . . . . . . . . . . 16
89 4.8. Traffic Capture and Analysis . . . . . . . . . . . . . . 16
90 5. Operational Experience with the Yeti DNS Testbed . . . . . . 17
91 5.1. Viability of IPv6-Only Operation . . . . . . . . . . . . 17
92 5.1.1. IPv6 Fragmentation . . . . . . . . . . . . . . . . . 18
93 5.1.2. Serving IPv4-Only End-Users . . . . . . . . . . . . . 19
94 5.2. Zone Distribution . . . . . . . . . . . . . . . . . . . . 19
95 5.2.1. Zone Transfers . . . . . . . . . . . . . . . . . . . 19
96 5.2.2. Delays in Yeti-Root Zone Distribution . . . . . . . . 20
97 5.2.3. Mixed RRSIGs from Different DM ZSKs . . . . . . . . . 21
98 5.3. DNSSEC KSK Rollover . . . . . . . . . . . . . . . . . . . 22
99 5.3.1. Failure-Case KSK Rollover . . . . . . . . . . . . . . 22
100 5.3.2. KSK Rollover vs. BIND9 Views . . . . . . . . . . . . 22
101 5.3.3. Large Responses during KSK Rollover . . . . . . . . . 23
102 5.4. Capture of Large DNS Response . . . . . . . . . . . . . . 24
103 5.5. Automated Maintenance of the Hints File . . . . . . . . . 24
104
105
106
107 Song, et al. Informational [Page 2]
108 RFC 8483 Yeti DNS Testbed October 2018
109
110
111 5.6. Root Label Compression in Knot DNS Server . . . . . . . . 25
112 6. Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . 26
113 7. Security Considerations . . . . . . . . . . . . . . . . . . . 28
114 8. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 28
115 9. References . . . . . . . . . . . . . . . . . . . . . . . . . 29
116 9.1. Normative References . . . . . . . . . . . . . . . . . . 29
117 9.2. Informative References . . . . . . . . . . . . . . . . . 29
118 Appendix A. Yeti-Root Hints File . . . . . . . . . . . . . . . . 33
119 Appendix B. Yeti-Root Server Priming Response . . . . . . . . . 34
120 Appendix C. Active IPv6 Prefixes in Yeti DNS Testbed . . . . . . 36
121 Appendix D. Tools Developed for Yeti DNS Testbed . . . . . . . . 36
122 Appendix E. Controversy . . . . . . . . . . . . . . . . . . . . 37
123 Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . 38
124 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 39
125
126 1. Introduction
127
128 The Domain Name System (DNS), as originally specified in [RFC1034]
129 and [RFC1035], has proved to be an enduring and important platform
130 upon which almost every end-user of the Internet relies. Despite its
131 longevity, extensions to the protocol, new implementations, and
132 refinements to DNS operations continue to emerge both inside and
133 outside the IETF.
134
135 The Root Server system in particular has seen technical innovation
136 and development, for example, in the form of wide-scale anycast
137 deployment, the mitigation of unwanted traffic on a global scale, the
138 widespread deployment of Response Rate Limiting [RRL], the
139 introduction of IPv6 transport, the deployment of DNSSEC, changes in
140 DNSSEC key sizes, and preparations to roll the root zone's Key
141 Signing Key (KSK) and corresponding trust anchor. These projects
142 created tremendous qualitative operational change and required
143 impressive caution and study prior to implementation. They took
144 place in parallel with the quantitative expansion or delegations for
145 new TLDs (see <https://newgtlds.icann.org/>).
146
147 Aspects of the operational structure of the Root Server system have
148 been described in such documents as [TNO2009], [ISC-TN-2003-1],
149 [RSSAC001], and [RFC7720]. Such references, considered together,
150 provide sufficient insight into the operations of the system as a
151 whole that it is straightforward to imagine structural changes to the
152 Root Server system's infrastructure and to wonder what the
153 operational implications of such changes might be.
154
155 The Yeti DNS Project was conceived in May 2015 with the aim of
156 providing a non-production testbed that would be open for use by
157 anyone from the technical community to propose or run experiments
158 designed to answer these kinds of questions. Coordination for the
159
160
161
162 Song, et al. Informational [Page 3]
163 RFC 8483 Yeti DNS Testbed October 2018
164
165
166 project was provided by BII, TISF, and the WIDE Project. Thus, Yeti
167 DNS is an independently coordinated project and is not affiliated
168 with the IETF, ICANN, IANA, or any Root Server Operator. The
169 objectives of the Yeti Project were set by the participants in the
170 project based on experiments that they considered would provide
171 valuable information.
172
173 Many volunteers collaborated to build a distributed testbed that at
174 the time of writing includes 25 Yeti root servers with 16 operators
175 and handles experimental traffic from individual volunteers,
176 universities, DNS vendors, and distributed measurement networks.
177
178 By design, the Yeti testbed system serves the root zone published by
179 the IANA with only those structural modifications necessary to ensure
180 that it is able to function usefully in the Yeti testbed system
181 instead of the production Root Server system. In particular, no
182 delegation for any top-level zone is changed, added, or removed from
183 the IANA-published root zone to construct the root zone served by the
184 Yeti testbed system, and changes in the root zone are reflected in
185 the testbed in near real-time. In this document, for clarity, we
186 refer to the zone derived from the IANA-published root zone as the
187 Yeti-Root zone.
188
189 The Yeti DNS testbed serves a similar function to the Root Server
190 system in the sense that they both serve similar zones: the Yeti-Root
191 zone and the IANA-published root zone. However, the Yeti DNS testbed
192 only serves clients that are explicitly configured to participate in
193 the experiment, whereas the Root Server system serves the whole
194 Internet. Since the dependent end-users and systems of the Yeti DNS
195 testbed are known and their operations well-coordinated with those of
196 the Yeti project, it has been possible to deploy structural changes
197 in the Yeti DNS testbed with effective measurement and analysis,
198 something that is difficult or simply impractical in the production
199 Root Server system.
200
201 This document describes the motivation for the Yeti project,
202 describes the Yeti testbed infrastructure, and provides the technical
203 and operational experiences of some users of the Yeti testbed. This
204 document neither addresses the relevant policies under which the Root
205 Server system is operated nor makes any proposal for changing any
206 aspect of its implementation or operation.
207
208
209
210
211
212
213
214
215
216
217 Song, et al. Informational [Page 4]
218 RFC 8483 Yeti DNS Testbed October 2018
219
220
221 2. Requirements Notation and Conventions
222
223 Through the document, any mention of "Root" with an uppercase "R" and
224 without other prefix, refers to the "IANA Root" systems used in the
225 production Internet. Proper mentions of the Yeti infrastructure will
226 be prefixed with "Yeti", like "Yeti-Root zone", "Yeti DNS", and so
227 on.
228
229 3. Areas of Study
230
231 This section provides some examples of the topics that the developers
232 of the Yeti DNS testbed considered important to address. As noted in
233 Section 1, the Yeti DNS is an independently coordinated project and
234 is not affiliated with the IETF, ICANN, IANA, or any Root Server
235 Operator. Thus, the topics and areas for study were selected by (and
236 for) the proponents of the Yeti project to address their own concerns
237 and in the hope that the information and tools provided would be of
238 wider interest.
239
240 Each example included below is illustrated with indicative questions.
241
242 3.1. Implementation of a Testbed like the Root Server System
243
244 o How can a testbed be constructed and deployed on the Internet,
245 allowing useful public participation without any risk of
246 disruption of the Root Server system?
247
248 o How can representative traffic be introduced into such a testbed
249 such that insights into the impact of specific differences between
250 the testbed and the Root Server system can be observed?
251
252 3.2. Yeti-Root Zone Distribution
253
254 o What are the scaling properties of Yeti-Root zone distribution as
255 the number of Yeti-Root servers, Yeti-Root server instances, or
256 intermediate distribution points increases?
257
258 3.3. Yeti-Root Server Names and Addressing
259
260 o What naming schemes other than those closely analogous to the use
261 of ROOT-SERVERS.NET in the production root zone are practical, and
262 what are their respective advantages and disadvantages?
263
264 o What are the risks and benefits of signing the zone that contains
265 the names of the Yeti-Root servers?
266
267
268
269
270
271
272 Song, et al. Informational [Page 5]
273 RFC 8483 Yeti DNS Testbed October 2018
274
275
276 o What automatic mechanisms might be useful to improve the rate at
277 which clients of Yeti-Root servers are able to react to a Yeti-
278 Root server renumbering event?
279
280 3.4. IPv6-Only Yeti-Root Servers
281
282 o Are there negative operational effects in the use of IPv6-only
283 Yeti-Root servers, compared to the use of servers that are dual-
284 stack?
285
286 o What effect does the IPv6 fragmentation model have on the
287 operation of Yeti-Root servers, compared with that of IPv4?
288
289 3.5. DNSSEC in the Yeti-Root Zone
290
291 o Is it practical to sign the Yeti-Root zone using multiple,
292 independently operated DNSSEC signers and multiple corresponding
293 Zone Signing Keys (ZSKs)?
294
295 o To what extent is [RFC5011] ("Automated Updates of DNS Security
296 (DNSSEC) Trust Anchors") supported by resolvers?
297
298 o Does the KSK Rollover plan designed and in the process of being
299 implemented by ICANN work as expected on the Yeti testbed?
300
301 o What is the operational impact of using much larger RSA key sizes
302 in the ZSKs used in a root?
303
304 o What are the operational consequences of choosing DNSSEC
305 algorithms other than RSA to sign a root?
306
307
308
309
310
311
312
313
314
315
316
317
318
319
320
321
322
323
324
325
326
327 Song, et al. Informational [Page 6]
328 RFC 8483 Yeti DNS Testbed October 2018
329
330
331 4. Yeti DNS Testbed Infrastructure
332
333 The purpose of the testbed is to allow DNS queries from stub
334 resolvers, mediated by recursive resolvers, to be delivered to Yeti-
335 Root servers, and for corresponding responses generated on the Yeti-
336 Root servers to be returned, as illustrated in Figure 1.
337
338 ,----------. ,-----------. ,------------.
339 | stub +------> | recursive +------> | Yeti-Root |
340 | resolver | <------+ resolver | <------+ nameserver |
341 `----------' `-----------' `------------'
342 ^ ^ ^
343 | appropriate | Yeti-Root hints; | Yeti-Root zone
344 `- resolver `- Yeti-Root trust `- with DNSKEY RRset
345 configured anchor signed by
346 Yeti-Root KSK
347
348 Figure 1: High-Level Testbed Components
349
350 To use the Yeti DNS testbed, a recursive resolver must be configured
351 to use the Yeti-Root servers. That configuration consists of a list
352 of names and addresses for the Yeti-Root servers (often referred to
353 as a "hints file") that replaces the corresponding hints used for the
354 production Root Server system (Appendix A). If resolvers are
355 configured to validate DNSSEC, then they also need to be configured
356 with a DNSSEC trust anchor that corresponds to a KSK used in the Yeti
357 DNS Project, in place of the normal trust anchor set used for the
358 Root Zone.
359
360 Since the Yeti root(s) are signed with Yeti keys, rather than those
361 used by the IANA Root, corresponding changes are needed in the
362 resolver trust anchors. Corresponding changes are required in the
363 Yeti-Root hints file Appendix A. Those changes would be properly
364 rejected as bogus by any validator using the production Root Server
365 system's root zone trust anchor set.
366
367 Stub resolvers become part of the Yeti DNS testbed by their use of
368 recursive resolvers that are configured as described above.
369
370 The data flow from IANA to stub resolvers through the Yeti testbed is
371 illustrated in Figure 2 and is described in more detail in the
372 sections that follow.
373
374
375
376
377
378
379
380
381
382 Song, et al. Informational [Page 7]
383 RFC 8483 Yeti DNS Testbed October 2018
384
385
386 ,----------------.
387 ,-- / IANA Root Zone / ---.
388 | `----------------' |
389 | | |
390 | | | Root Zone
391 ,--------------. ,---V---. ,---V---. ,---V---.
392 | Yeti Traffic | | BII | | WIDE | | TISF |
393 | Collection | | DM | | DM | | DM |
394 `----+----+----' `---+---' `---+---' `---+---'
395 | | ,-----' ,-------' `----.
396 | | | | | Yeti-Root
397 ^ ^ | | | Zone
398 | | ,---V---. ,---V---. ,---V---.
399 | `---+ Yeti | | Yeti | . . . . . . . | Yeti |
400 | | Root | | Root | | Root |
401 | `---+---' `---+---' `---+---'
402 | | | | DNS
403 | | | | Response
404 | ,--V----------V-------------------------V--.
405 `---------+ Yeti Resolvers |
406 `--------------------+---------------------'
407 | DNS
408 | Response
409 ,--------------------V---------------------.
410 | Yeti Stub Resolvers |
411 `------------------------------------------'
412
413 The three coordinators of the Yeti DNS testbed:
414 BII : Beijing Internet Institute
415 WIDE: Widely Integrated Distributed Environment Project
416 TISF: A collaborative engineering and security project by Paul Vixie
417
418 Figure 2: Testbed Data Flow
419
420 Note that the roots are not bound to Distribution Masters (DMs). DMs
421 update their zone on a schedule described in Section 4.1. Each DM
422 that updates the latest zone can notify all roots, so the zone
423 transfer can happen between any DM and any root.
424
425 4.1. Root Zone Retrieval
426
427 The Yeti-Root zone is distributed within the Yeti DNS testbed through
428 a set of internal master servers that are referred to as Distribution
429 Masters (DMs). These server elements distribute the Yeti-Root zone
430 to all Yeti-Root servers. The means by which the Yeti DMs construct
431 the Yeti-Root zone for distribution is described below.
432
433
434
435
436
437 Song, et al. Informational [Page 8]
438 RFC 8483 Yeti DNS Testbed October 2018
439
440
441 Since Yeti DNS DMs do not receive DNS NOTIFY [RFC1996] messages from
442 the Root Server system, a polling approach is used to determine when
443 new revisions of the root zone are available from the production Root
444 Server system. Each Yeti DM requests the Root Zone SOA record from a
445 Root server that permits unauthenticated zone transfers of the root
446 zone, and performs a zone transfer from that server if the retrieved
447 value of SOA.SERIAL is greater than that of the last retrieved zone.
448
449 At the time of writing, unauthenticated zone transfers of the Root
450 Zone are available directly from B-Root, C-Root, F-Root, G-Root,
451 K-Root, and L-Root; two servers XFR.CJR.DNS.ICANN.ORG and
452 XFR.LAX.DNS.ICANN.ORG; and via FTP from sites maintained by the Root
453 Zone Maintainer and the IANA Functions Operator. The Yeti DNS
454 testbed retrieves the Root Zone using zone transfers from F-Root.
455 The schedule on which F-Root is polled by each Yeti DM is as follows:
456
457 +-------------+-----------------------+
458 | DM Operator | Time |
459 +-------------+-----------------------+
460 | BII | UTC hour + 00 minutes |
461 | WIDE | UTC hour + 20 minutes |
462 | TISF | UTC hour + 40 minutes |
463 +-------------+-----------------------+
464
465 The Yeti DNS testbed uses multiple DMs, each of which acts
466 autonomously and equivalently to its siblings. Any single DM can act
467 to distribute new revisions of the Yeti-Root zone and is also
468 responsible for signing the RRsets that are changed as part of the
469 transformation of the Root Zone into the Yeti-Root zone described in
470 Section 4.2. This multiple DM model intends to provide a basic
471 structure to implement the idea of shared zone control as proposed in
472 [ITI2014].
473
474 4.2. Transformation of Root Zone to Yeti-Root Zone
475
476 Two distinct approaches have been deployed in the Yeti DNS testbed,
477 separately, to transform the Root Zone into the Yeti-Root zone. At a
478 high level, the approaches are equivalent in the sense that they
479 replace a minimal set of information in the root zone with
480 corresponding data for the Yeti DNS testbed; the mechanisms by which
481 the transforms are executed are different, however. The approaches
482 are discussed in Sections 4.2.1 and 4.2.2.
483
484 A third approach has also been proposed, but not yet implemented.
485 The motivations and changes implied by that approach are described in
486 Section 4.2.3.
487
488
489
490
491
492 Song, et al. Informational [Page 9]
493 RFC 8483 Yeti DNS Testbed October 2018
494
495
496 4.2.1. ZSK and KSK Key Sets Shared between DMs
497
498 The approach described here was the first to be implemented. It
499 features entirely autonomous operation of each DM, but also requires
500 secret key material (the private key in each of the Yeti-Root KSK and
501 ZSK key pairs) to be distributed and maintained on each DM in a
502 coordinated way.
503
504 The Root Zone is transformed as follows to produce the Yeti-Root
505 zone. This transformation is carried out autonomously on each Yeti
506 DNS Project DM. Each DM carries an authentic copy of the current set
507 of Yeti KSK and ZSK key pairs, synchronized between all DMs (see
508 Section 4.4).
509
510 1. SOA.MNAME is set to www.yeti-dns.org.
511
512 2. SOA.RNAME is set to <dm-operator>.yeti-dns.org, where
513 <dm-operator> is currently one of "wide", "bii", or "tisf".
514
515 3. All DNSKEY, RRSIG, and NSEC records are removed.
516
517 4. The apex Name Server (NS) RRset is removed, with the
518 corresponding root server glue (A and AAAA) RRsets.
519
520 5. A Yeti DNSKEY RRset is added to the apex, comprising the public
521 parts of all Yeti KSK and ZSKs.
522
523 6. A Yeti NS RRset is added to the apex that includes all Yeti-Root
524 servers.
525
526 7. Glue records (AAAA only, since Yeti-Root servers are v6-only) for
527 all Yeti-Root servers are added.
528
529 8. The Yeti-Root zone is signed: the NSEC chain is regenerated; the
530 Yeti KSK is used to sign the DNSKEY RRset; and the shared ZSK is
531 used to sign every other RRset.
532
533 Note that the SOA.SERIAL value published in the Yeti-Root zone is
534 identical to that found in the root zone.
535
536 4.2.2. Unique ZSK per DM; No Shared KSK
537
538 The approach described here was the second to be implemented and
539 maintained as stable state. Each DM is provisioned with its own,
540 dedicated ZSK key pairs that are not shared with other DMs. A Yeti-
541 Root DNSKEY RRset is constructed and signed upstream of all DMs as
542 the union of the set of active Yeti-Root KSKs and the set of active
543 ZSKs for every individual DM. Each DM now only requires the secret
544
545
546
547 Song, et al. Informational [Page 10]
548 RFC 8483 Yeti DNS Testbed October 2018
549
550
551 part of its own dedicated ZSK key pairs to be available locally, and
552 no other secret key material is shared. The high-level approach is
553 illustrated in Figure 3.
554
555 ,----------. ,-----------.
556 .--------> BII ZSK +---------> Yeti-Root |
557 | signs `----------' signs `-----------'
558 |
559 ,-----------. | ,----------. ,-----------.
560 | Yeti KSK +-+--------> TISF ZSK +---------> Yeti-Root |
561 `-----------' | signs `----------' signs `-----------'
562 |
563 | ,----------. ,-----------.
564 `--------> WIDE ZSK +---------> Yeti-Root |
565 signs `----------' signs `-----------'
566
567 Figure 3: Unique ZSK per DM
568
569 The process of retrieving the Root Zone from the Root Server system
570 and replacing and signing the apex DNSKEY RRset no longer takes place
571 on the DMs; instead, it takes place on a central Hidden Master. The
572 production of signed DNSKEY RRsets is analogous to the use of Signed
573 Key Responses (SKRs) produced during ICANN KSK key ceremonies
574 [ICANN2010].
575
576 Each DM now retrieves source data (with a premodified and Yeti-signed
577 DNSKEY RRset, but otherwise unchanged) from the Yeti DNS Hidden
578 Master instead of from the Root Server system.
579
580 Each DM carries out a similar transformation to that described in
581 Section 4.2.1, except that DMs no longer need to modify or sign the
582 DNSKEY RRset, and the DM's unique local ZSK is used to sign every
583 other RRset.
584
585 4.2.3. Preserving Root Zone NSEC Chain and ZSK RRSIGs
586
587 A change to the transformation described in Section 4.2.2 has been
588 proposed as a Yeti experiment called PINZ [PINZ], which would
589 preserve the NSEC chain from the Root Zone and all RRSIG RRs
590 generated using the Root Zone's ZSKs. The DNSKEY RRset would
591 continue to be modified to replace the Root Zone KSKs, but Root Zone
592 ZSKs would be kept intact, and the Yeti KSK would be used to generate
593 replacement signatures over the apex DNSKEY and NS RRsets. Source
594 data would continue to flow from the Root Server system through the
595 Hidden Master to the set of DMs, but no DNSSEC operations would be
596 required on the DMs, and the source NSEC and most RRSIGs would remain
597 intact.
598
599
600
601
602 Song, et al. Informational [Page 11]
603 RFC 8483 Yeti DNS Testbed October 2018
604
605
606 This approach has been suggested in order to keep minimal changes
607 from the IANA Root zone and provide cryptographically verifiable
608 confidence that no owner name in the root zone had been changed in
609 the process of producing the Yeti-Root zone from the Root Zone,
610 thereby addressing one of the concerns described in Appendix E in a
611 way that can be verified automatically.
612
613 4.3. Yeti-Root Zone Distribution
614
615 Each Yeti DM is configured with a full list of Yeti-Root server
616 addresses to send NOTIFY [RFC1996] messages to. This also forms the
617 basis for an address-based access-control list for zone transfers.
618 Authentication by address could be replaced with more rigorous
619 mechanisms (e.g., using Transaction Signatures (TSIGs) [RFC2845]).
620 This has not been done at the time of writing since the use of
621 address-based controls avoids the need for the distribution of shared
622 secrets amongst the Yeti-Root server operators.
623
624 Individual Yeti-Root servers are configured with a full set of Yeti
625 DM addresses to which SOA and AXFR queries may be sent in the
626 conventional manner.
627
628 4.4. Synchronization of Service Metadata
629
630 Changes in the Yeti DNS testbed infrastructure such as the addition
631 or removal of Yeti-Root servers, renumbering Yeti-Root servers, or
632 DNSSEC key rollovers require coordinated changes to take place on all
633 DMs. The Yeti DNS testbed is subject to more frequent changes than
634 are observed in the Root Server system and includes substantially
635 more Yeti-Root servers than there are IANA Root Servers, and hence a
636 manual change process in the Yeti testbed would be more likely to
637 suffer from human error. An automated and cooperative process was
638 consequently implemented.
639
640 The theory of this operation is that each DM operator runs a Git
641 repository locally, containing all service metadata involved in the
642 operation of each DM. When a change is desired and approved among
643 all Yeti coordinators, one DM operator (usually BII) updates the
644 local Git repository. A serial number in the future (in two days) is
645 chosen for when the changes become active. The DM operator then
646 pushes the changes to the Git repositories of the other two DM
647 operators who have a chance to check and edit the changes. When the
648 serial number of the root zone passes the number chosen, the changes
649 are pulled automatically to individual DMs and promoted to
650 production.
651
652
653
654
655
656
657 Song, et al. Informational [Page 12]
658 RFC 8483 Yeti DNS Testbed October 2018
659
660
661 The three Git repositories are synchronized by configuring them as
662 remote servers. For example, at BII we push to all three DMs'
663 repositories as follows:
664
665 $ git remote -v
666 origin yeticonf@yeti-conf.dns-lab.net:dm (fetch)
667 origin yeticonf@yeti-conf.dns-lab.net:dm (push)
668 origin yeticonf@yeti-dns.tisf.net:dm (push)
669 origin yeticonf@yeti-repository.wide.ad.jp:dm (push)
670
671 For more detailed information on DM synchronization, please see this
672 document in Yeti's GitHub repository: <https://github.com/BII-Lab/
673 Yeti-Project/blob/master/doc/Yeti-DM-Sync.md>.
674
675 4.5. Yeti-Root Server Naming Scheme
676
677 The current naming scheme for Root Servers was normalized to use
678 single-character host names ("A" through "M") under the domain ROOT-
679 SERVERS.NET, as described in [RSSAC023]. The principal benefit of
680 this naming scheme was that DNS label compression could be used to
681 produce a priming response that would fit within 512 bytes at the
682 time it was introduced, where 512 bytes is the maximum DNS message
683 size using UDP transport without EDNS(0) [RFC6891].
684
685 Yeti-Root servers do not use this optimization, but rather use free-
686 form nameserver names chosen by their respective operators -- in
687 other words, no attempt is made to minimize the size of the priming
688 response through the use of label compression. This approach aims to
689 challenge the need to minimize the priming response in a modern DNS
690 ecosystem where EDNS(0) is prevalent.
691
692 Priming responses from Yeti-Root servers (unlike those from Root
693 Servers) do not always include server addresses in the additional
694 section. In particular, Yeti-Root servers running BIND9 return an
695 empty additional section if the configuration parameter "minimum-
696 responses" is set, forcing resolvers to complete the priming process
697 with a set of conventional recursive lookups in order to resolve
698 addresses for each Yeti-Root server. The Yeti-Root servers running
699 NSD were observed to return a fully populated additional section
700 (depending, of course, on the EDNS buffer size in use).
701
702 Various approaches to normalize the composition of the priming
703 response were considered, including:
704
705 o Require use of DNS implementations that exhibit a desired behavior
706 in the priming response.
707
708
709
710
711
712 Song, et al. Informational [Page 13]
713 RFC 8483 Yeti DNS Testbed October 2018
714
715
716 o Modify nameserver software or configuration as used by Yeti-Root
717 servers.
718
719 o Isolate the names of Yeti-Root servers in one or more zones that
720 could be slaved on each Yeti-Root server, renaming servers as
721 necessary, giving each a source of authoritative data with which
722 the authority section of a priming response could be fully
723 populated. This is the approach used in the Root Server system
724 with the ROOT-SERVERS.NET zone.
725
726 The potential mitigation of renaming all Yeti-Root servers using a
727 scheme that would allow their names to exist directly in the root
728 zone was not considered because that approach implies the invention
729 of new top-level labels not present in the Root Zone.
730
731 Given the relative infrequency of priming queries by individual
732 resolvers and the additional complexity or other compromises implied
733 by each of those mitigations, the decision was made to make no effort
734 to ensure that the composition of priming responses was identical
735 across servers. Even the empty additional sections generated by
736 Yeti-Root servers running BIND9 seem to be sufficient for all
737 resolver software tested; resolvers simply perform a new recursive
738 lookup for each authoritative server name they need to resolve.
739
740 4.6. Yeti-Root Servers
741
742 Various volunteers have donated authoritative servers to act as Yeti-
743 Root servers. At the time of writing, there are 25 Yeti-Root servers
744 distributed globally, one of which is named using a label as
745 specified in IDNA2008 [RFC5890] (it is shown in the following list in
746 punycode).
747
748
749
750
751
752
753
754
755
756
757
758
759
760
761
762
763
764
765
766
767 Song, et al. Informational [Page 14]
768 RFC 8483 Yeti DNS Testbed October 2018
769
770
771 +-------------------------------------+---------------+-------------+
772 | Name | Operator | Location |
773 +-------------------------------------+---------------+-------------+
774 | bii.dns-lab.net | BII | CHINA |
775 | yeti-ns.tsif.net | TSIF | USA |
776 | yeti-ns.wide.ad.jp | WIDE Project | Japan |
777 | yeti-ns.as59715.net | as59715 | Italy |
778 | dahu1.yeti.eu.org | Dahu Group | France |
779 | ns-yeti.bondis.org | Bond Internet | Spain |
780 | | Systems | |
781 | yeti-ns.ix.ru | Russia | MSK-IX |
782 | yeti.bofh.priv.at | CERT Austria | Austria |
783 | yeti.ipv6.ernet.in | ERNET India | India |
784 | yeti-dns01.dnsworkshop.org | dnsworkshop | Germany |
785 | | /informnis | |
786 | dahu2.yeti.eu.org | Dahu Group | France |
787 | yeti.aquaray.com | Aqua Ray SAS | France |
788 | yeti-ns.switch.ch | SWITCH | Switzerland |
789 | yeti-ns.lab.nic.cl | NIC Chile | Chile |
790 | yeti-ns1.dns-lab.net | BII | China |
791 | yeti-ns2.dns-lab.net | BII | China |
792 | yeti-ns3.dns-lab.net | BII | China |
793 | ca...a23dc.yeti-dns.net | Yeti-ZA | South |
794 | | | Africa |
795 | 3f...374cd.yeti-dns.net | Yeti-AU | Australia |
796 | yeti1.ipv6.ernet.in | ERNET India | India |
797 | xn--r2bi1c.xn--h2bv6c0a.xn--h2brj9c | ERNET India | India |
798 | yeti-dns02.dnsworkshop.org | dnsworkshop | USA |
799 | | /informnis | |
800 | yeti.mind-dns.nl | Monshouwer | Netherlands |
801 | | Internet | |
802 | | Diensten | |
803 | yeti-ns.datev.net | DATEV | Germany |
804 | yeti.jhcloos.net. | jhcloos | USA |
805 +-------------------------------------+---------------+-------------+
806
807 The current list of Yeti-Root servers is made available to a
808 participating resolver first using a substitute hints file Appendix A
809 and subsequently by the usual resolver priming process [RFC8109].
810 All Yeti-Root servers are IPv6-only, because of the IPv6-only
811 Internet of the foreseeable future, and hence the Yeti-Root hints
812 file contains no IPv4 addresses and the Yeti-Root zone contains no
813 IPv4 glue records. Note that the rationale of an IPv6-only testbed
814 is to test whether an IPv6-only root can survive any problem or
815 impact when IPv4 is turned off, much like the context of the IETF
816 SUNSET4 WG [SUNSET4].
817
818
819
820
821
822 Song, et al. Informational [Page 15]
823 RFC 8483 Yeti DNS Testbed October 2018
824
825
826 At the time of writing, all root servers within the Root Server
827 system serve the ROOT-SERVERS.NET zone in addition to the root zone,
828 and all but one also serve the ARPA zone. Yeti-Root servers serve
829 the Yeti-Root zone only.
830
831 Significant software diversity exists across the set of Yeti-Root
832 servers, as reported by their volunteer operators at the time of
833 writing:
834
835 o Platform: 18 of 25 Yeti-Root servers are implemented on a Virtual
836 Private Server (VPS) rather than bare metal.
837
838 o Operating System: 15 Yeti-Root servers run on Linux (Ubuntu,
839 Debian, CentOS, Red Hat, and ArchLinux); 4 run on FreeBSD; 1 on
840 NetBSD; and 1 on Windows Server 2016.
841
842 o DNS software: 16 of 25 Yeti-Root servers use BIND9 (versions
843 varying between 9.9.7 and 9.10.3); 4 use NSD (4.10 and 4.15); 2
844 use Knot (2.0.1 and 2.1.0); 1 uses Bundy (1.2.0); 1 uses PowerDNS
845 (4.1.3); and 1 uses MS DNS (10.0.14300.1000).
846
847 4.7. Experimental Traffic
848
849 For the Yeti DNS testbed to be useful as a platform for
850 experimentation, it needs to carry statistically representative
851 traffic. Several approaches have been taken to load the system with
852 traffic, including both real-world traffic triggered by end-users and
853 synthetic traffic.
854
855 Resolvers that have been explicitly configured to participate in the
856 testbed, as described in Section 4, are a source of real-world, end-
857 user traffic. Due to an efficient cache mechanism, the mean query
858 rate is less than 100 qps in the Yeti testbed, but a variety of
859 sources were observed as active during 2017, as summarized in
860 Appendix C.
861
862 Synthetic traffic has been introduced to the system from time to time
863 in order to increase traffic loads. Approaches include the use of
864 distributed measurement platforms such as RIPE ATLAS to send DNS
865 queries to Yeti-Root servers and the capture of traffic (sent from
866 non-Yeti resolvers to the Root Server system) that was subsequently
867 modified and replayed towards Yeti-Root servers.
868
869 4.8. Traffic Capture and Analysis
870
871 Traffic capture of queries and responses is available in the testbed
872 in both Yeti resolvers and Yeti-Root servers in anticipation of
873 experiments that require packet-level visibility into DNS traffic.
874
875
876
877 Song, et al. Informational [Page 16]
878 RFC 8483 Yeti DNS Testbed October 2018
879
880
881 Traffic capture is performed on Yeti-Root servers using either
882
883 o dnscap <https://www.dns-oarc.net/tools/dnscap> or
884
885 o pcapdump, part of the pcaputils Debian package
886 <https://packages.debian.org/sid/pcaputils>, with a patch to
887 facilitate triggered file upload (see <https://bugs.debian.org/
888 cgi-bin/bugreport.cgi?bug=545985>).
889
890 PCAP-format files containing packet captures are uploaded using rsync
891 to central storage.
892
893 5. Operational Experience with the Yeti DNS Testbed
894
895 The following sections provide commentary on the operation and impact
896 analyses of the Yeti DNS testbed described in Section 4. More
897 detailed descriptions of observed phenomena are available in the Yeti
898 DNS mailing list archives <http://lists.yeti-dns.org/pipermail/
899 discuss/> and on the Yeti DNS blog <https://yeti-dns.org/blog.html>.
900
901 5.1. Viability of IPv6-Only Operation
902
903 All Yeti-Root servers were deployed with IPv6 connectivity, and no
904 IPv4 addresses for any Yeti-Root server were made available (e.g., in
905 the Yeti hints file or in the DNS itself). This implementation
906 decision constrained the Yeti-Root system to be v6 only.
907
908 DNS implementations are generally adept at using both IPv4 and IPv6
909 when both are available. Servers that cannot be reliably reached
910 over one protocol might be better queried over the other, to the
911 benefit of end-users in the common case where DNS resolution is on
912 the critical path for end-users' perception of performance. However,
913 this optimization also means that systemic problems with one protocol
914 can be masked by the other. By forcing all traffic to be carried
915 over IPv6, the Yeti DNS testbed aimed to expose any such problems and
916 make them easier to identify and understand. Several examples of
917 IPv6-specific phenomena observed during the operation of the testbed
918 are described in the sections that follow.
919
920 Although the Yeti-Root servers themselves were only reachable using
921 IPv6, real-world end-users often have no IPv6 connectivity. The
922 testbed was also able to explore the degree to which IPv6-only Yeti-
923 Root servers were able to serve single-stack, IPv4-only end-user
924 populations through the use of dual-stack Yeti resolvers.
925
926
927
928
929
930
931
932 Song, et al. Informational [Page 17]
933 RFC 8483 Yeti DNS Testbed October 2018
934
935
936 5.1.1. IPv6 Fragmentation
937
938 In the Root Server system, structural changes with the potential to
939 increase response sizes (and hence fragmentation, fallback to TCP
940 transport, or both) have been exercised with great care, since the
941 impact on clients has been difficult to predict or measure. The Yeti
942 DNS testbed is experimental and has the luxury of a known client
943 base, making it far easier to make such changes and measure their
944 impact.
945
946 Many of the experimental design choices described in this document
947 were expected to trigger larger responses. For example, the choice
948 of naming scheme for Yeti-Root servers described in Section 4.5
949 defeats label compression. It makes a large priming response (up to
950 1754 octets with 25 NS records and their corresponding glue records);
951 the Yeti-Root zone transformation approach described in Section 4.2.2
952 greatly enlarges the apex DNSKEY RRset especially during the KSK
953 rollover (up to 1975 octets with 3 ZSKs and 2 KSKs). Therefore, an
954 increased incidence of fragmentation was expected.
955
956 The Yeti DNS testbed provides service on IPv6 only. However,
957 middleboxes (such as firewalls and some routers) are not friendly on
958 IPv6 fragments. There are reports of a notable packet drop rate due
959 to the mistreatment of middleboxes on IPv6 fragments [FRAGDROP]
960 [RFC7872]. One APNIC study [IPv6-frag-DNS] reported that 37% of
961 endpoints using IPv6-capable DNS resolvers cannot receive a
962 fragmented IPv6 response over UDP.
963
964 To study the impact, RIPE Atlas probes were used. For each Yeti-Root
965 server, an Atlas measurement was set up using 100 IPv6-enabled probes
966 from five regions, sending a DNS query for "./IN/DNSKEY" using UDP
967 transport with DO=1. This measurement, when carried out concurrently
968 with a Yeti KSK rollover, further exacerbating the potential for
969 fragmentation, identified a 7% failure rate compared with a non-
970 fragmented control. A failure rate of 2% was observed with response
971 sizes of 1414 octets, which was surprising given the expected
972 prevalence of 1500-octet (Ethernet-framed) MTUs.
973
974 The consequences of fragmentation were not limited to failures in
975 delivering DNS responses over UDP transport. There were two cases
976 where a Yeti-Root server failed when using TCP to transfer the Yeti-
977 Root zone from a DM. DM log files revealed "socket is not connected"
978 errors corresponding to zone transfer requests. Further
979 experimentation revealed that combinations of NetBSD 6.1, NetBSD
980 7.0RC1, FreeBSD 10.0, Debian 3.2, and VMWare ESXI 5.5 resulted in a
981 high TCP Maximum Segment Size (MSS) value of 1440 octets being
982 negotiated between client and server despite the presence of the
983 IPV6_USE_MIN_MTU socket option, as described in [USE_MIN_MTU]. The
984
985
986
987 Song, et al. Informational [Page 18]
988 RFC 8483 Yeti DNS Testbed October 2018
989
990
991 mismatch appears to cause outbound segments of a size greater than
992 1280 octets to be dropped before sending. Setting the local TCP MSS
993 to 1220 octets (chosen as 1280 - 60, the size of the IPv6 TCP header
994 with no other extension headers) was observed to be a pragmatic
995 mitigation.
996
997 5.1.2. Serving IPv4-Only End-Users
998
999 Yeti resolvers have been successfully used by real-world end-users
1000 for general name resolution within a number of participant
1001 organizations, including resolution of names to IPv4 addresses and
1002 resolution by IPv4-only end-user devices.
1003
1004 Some participants, recognizing the operational importance of
1005 reliability in resolver infrastructure and concerned about the
1006 stability of their IPv6 connectivity, chose to deploy Yeti resolvers
1007 in parallel to conventional resolvers, making both available to end-
1008 users. While the viability of this approach provides a useful data
1009 point, end-users using Yeti resolvers exclusively provided a better
1010 opportunity to identify and understand any failures in the Yeti DNS
1011 testbed infrastructure.
1012
1013 Resolvers deployed in IPv4-only environments were able to join the
1014 Yeti DNS testbed by way of upstream, dual-stack Yeti resolvers. In
1015 one case (CERNET2), this was done by assigning IPv4 addresses to
1016 Yeti-Root servers and mapping them in dual-stack IVI translation
1017 devices [RFC6219].
1018
1019 5.2. Zone Distribution
1020
1021 The Yeti DNS testbed makes use of multiple DMs to distribute the
1022 Yeti-Root zone, an approach that would allow the number of Yeti-Root
1023 servers to scale to a higher number than could be supported by a
1024 single distribution source and that provided redundancy. The use of
1025 multiple DMs introduced some operational challenges, however, which
1026 are described in the following sections.
1027
1028 5.2.1. Zone Transfers
1029
1030 Yeti-Root servers were configured to serve the Yeti-Root zone as
1031 slaves. Each slave had all DMs configured as masters, providing
1032 redundancy in zone synchronization.
1033
1034 Each DM in the Yeti testbed served a Yeti-Root zone that was
1035 functionally equivalent but not congruent to that served by every
1036 other DM (see Section 4.3). The differences included variations in
1037 the SOA.MNAME field and, more critically, in the RRSIGs for
1038 everything other than the apex DNSKEY RRset, since signatures for all
1039
1040
1041
1042 Song, et al. Informational [Page 19]
1043 RFC 8483 Yeti DNS Testbed October 2018
1044
1045
1046 other RRsets are generated using a private key that is only available
1047 to the DM serving its particular variant of the zone (see Sections
1048 4.2.1 and 4.2.2).
1049
1050 Incremental Zone Transfer (IXFR), as described in [RFC1995], is a
1051 viable mechanism to use for zone synchronization between any Yeti-
1052 Root server and a consistent, single DM. However, if that Yeti-Root
1053 server ever selected a different DM, IXFR would no longer be a safe
1054 mechanism; structural changes between the incongruent zones on
1055 different DMs would not be included in any transferred delta, and the
1056 result would be a zone that was not internally self-consistent. For
1057 this reason, the first transfer after a change of DM would require
1058 AXFR not IXFR.
1059
1060 None of the DNS software in use on Yeti-Root servers supports this
1061 mixture of IXFR/AXFR according to the master server in use. This is
1062 unsurprising, given that the environment described above in the Yeti-
1063 Root system is idiosyncratic; conventional zone transfer graphs
1064 involve zones that are congruent between all nodes. For this reason,
1065 all Yeti-Root servers are configured to use AXFR at all times, and
1066 never IXFR, to ensure that zones being served are internally self-
1067 consistent.
1068
1069 5.2.2. Delays in Yeti-Root Zone Distribution
1070
1071 Each Yeti DM polled the Root Server system for a new revision of the
1072 root zone on an interleaved schedule, as described in Section 4.1.
1073 Consequently, different DMs were expected to retrieve each revision
1074 of the root zone, and make a corresponding revision of the Yeti-Root
1075 zone available, at different times. The availability of a new
1076 revision of the Yeti-Root zone on the first DM would typically
1077 precede that of the last by 40 minutes.
1078
1079 Given this distribution mechanism, it might be expected that the
1080 maximum latency between the publication of a new revision of the root
1081 zone and the availability of the corresponding Yeti-Root zone on any
1082 Yeti-Root server would be 20 minutes, since in normal operation at
1083 least one DM should serve that Yeti-Zone within 20 minutes of root
1084 zone publication. In practice, this was not observed.
1085
1086 In one case, a Yeti-Root server running Bundy 1.2.0 on FreeBSD
1087 10.2-RELEASE was found to lag root zone publication by as much as ten
1088 hours. Upon investigation, this was found to be due to software
1089 defects that were subsequently corrected.
1090
1091 More generally, Yeti-Root servers were observed routinely to lag root
1092 zone publication by more than 20 minutes, and relatively often by
1093 more than 40 minutes. Whilst in some cases this might be assumed to
1094
1095
1096
1097 Song, et al. Informational [Page 20]
1098 RFC 8483 Yeti DNS Testbed October 2018
1099
1100
1101 be a result of connectivity problems, perhaps suppressing the
1102 delivery of NOTIFY messages, it was also observed that Yeti-Root
1103 servers receiving a NOTIFY from one DM would often send SOA queries
1104 and AXFR requests to a different DM. If that DM were not yet serving
1105 the new revision of the Yeti-Root zone, a delay in updating the Yeti-
1106 Root server would naturally result.
1107
1108 5.2.3. Mixed RRSIGs from Different DM ZSKs
1109
1110 The second approach for doing the transformation of Root Zone to
1111 Yeti-Root zone (Section 4.2.2) introduces a situation where mixed
1112 RRSIGs from different DM ZSKs are cached in one resolver.
1113
1114 It is observed that the Yeti-Root zone served by any particular Yeti-
1115 Root server will include signatures generated using the ZSK from the
1116 DM that served the Yeti-Root zone to that Yeti-Root server.
1117 Signatures cached at resolvers might be retrieved from any Yeti-Root
1118 server, and hence are expected to be a mixture of signatures
1119 generated by different ZSKs. Since all ZSKs can be trusted through
1120 the signature by the Yeti KSK over the DNSKEY RRset, which includes
1121 all ZSKs, the mixture of signatures was predicted not to be a threat
1122 to reliable validation.
1123
1124 It was first tested in BII's lab environment as a proof of concept.
1125 It was observed in the resolver's DNSSEC log that the process of
1126 verifying an RDATA set shows "success" with a key (keyid) in the
1127 DNSKEY RRset. It was implemented later in three DMs that were
1128 carefully coordinated and made public to all Yeti resolver operators
1129 and participants in Yeti's mailing list. At least 45 Yeti resolvers
1130 (deployed by Yeti operators) were being monitored and had set a
1131 reporting trigger if anything was wrong. In addition, the Yeti
1132 mailing list is open for error reports from other participants. So
1133 far, the Yeti testbed has been operated in this configuration (with
1134 multiple ZSKs) for 2 years. This configuration has proven workable
1135 and reliable, even when rollovers of individual ZSKs are on different
1136 schedules.
1137
1138 Another consequence of this approach is that the apex DNSKEY RRset in
1139 the Yeti-Root zone is much larger than the corresponding DNSKEY RRset
1140 in the Root Zone. This requires more space and produces a larger
1141 response to the query for the DNSKEY RRset especially during the KSK
1142 rollover.
1143
1144
1145
1146
1147
1148
1149
1150
1151
1152 Song, et al. Informational [Page 21]
1153 RFC 8483 Yeti DNS Testbed October 2018
1154
1155
1156 5.3. DNSSEC KSK Rollover
1157
1158 At the time of writing, the Root Zone KSK is expected to undergo a
1159 carefully orchestrated rollover as described in [ICANN2016]. ICANN
1160 has commissioned various tests and has published an external test
1161 plan [ICANN2017].
1162
1163 Three related DNSSEC KSK rollover exercises were carried out on the
1164 Yeti DNS testbed, somewhat concurrent with the planning and execution
1165 of the rollover in the root zone. Brief descriptions of these
1166 exercises are included below.
1167
1168 5.3.1. Failure-Case KSK Rollover
1169
1170 The first KSK rollover that was executed on the Yeti DNS testbed
1171 deliberately ignored the 30-day hold-down timer specified in
1172 [RFC5011] before retiring the outgoing KSK.
1173
1174 It was confirmed that clients of some (but not all) validating Yeti
1175 resolvers experienced resolution failures (received SERVFAIL
1176 responses) following this change. Those resolvers required
1177 administrator intervention to install a functional trust anchor
1178 before resolution was restored.
1179
1180 5.3.2. KSK Rollover vs. BIND9 Views
1181
1182 The second Yeti KSK rollover was designed with similar phases to the
1183 ICANN's KSK rollover, although with modified timings to reduce the
1184 time required to complete the process. The "slot" used in this
1185 rollover was ten days long, as follows:
1186
1187 +-----------------+----------------+----------+
1188 | | Old Key: 19444 | New Key |
1189 +-----------------+----------------+----------+
1190 | slot 1 | pub+sign | |
1191 | slot 2, 3, 4, 5 | pub+sign | pub |
1192 | slot 6, 7 | pub | pub+sign |
1193 | slot 8 | revoke | pub+sign |
1194 | slot 9 | | pub+sign |
1195 +-----------------+----------------+----------+
1196
1197 During this rollover exercise, a problem was observed on one Yeti
1198 resolver that was running BIND 9.10.4-p2 [KROLL-ISSUE]. That
1199 resolver was configured with multiple views serving clients in
1200 different subnets at the time that the KSK rollover began. DNSSEC
1201 validation failures were observed following the completion of the KSK
1202 rollover, triggered by the addition of a new view that was intended
1203 to serve clients from a new subnet.
1204
1205
1206
1207 Song, et al. Informational [Page 22]
1208 RFC 8483 Yeti DNS Testbed October 2018
1209
1210
1211 BIND 9.10 requires "managed-keys" configuration to be specified in
1212 every view, a detail that was apparently not obvious to the operator
1213 in this case and that was subsequently highlighted by the Internet
1214 Systems Consortium (ISC) in their general advice relating to KSK
1215 rollover in the root zone to users of BIND 9 [ISC-BIND]. When the
1216 "managed-keys" configuration is present in every view that is
1217 configured to perform validation, trust anchors for all views are
1218 updated during a KSK rollover.
1219
1220 5.3.3. Large Responses during KSK Rollover
1221
1222 Since a KSK rollover necessarily involves the publication of outgoing
1223 and incoming public keys simultaneously, an increase in the size of
1224 DNSKEY responses is expected. The third KSK rollover carried out on
1225 the Yeti DNS testbed was accompanied by a concerted effort to observe
1226 response sizes and their impact on end-users.
1227
1228 As described in Section 4.2.2, in the Yeti DNS testbed each DM can
1229 maintain control of its own set of ZSKs, which can undergo rollover
1230 independently. During a KSK rollover where concurrent ZSK rollovers
1231 are executed by each of three DMs, the maximum number of apex DNSKEY
1232 RRs present is eight (incoming and outgoing KSK, plus incoming and
1233 outgoing of each of three ZSKs). In practice, however, such
1234 concurrency did not occur; only the BII ZSK was rolled during the KSK
1235 rollover, and hence only three DNSKEY RRset configurations were
1236 observed:
1237
1238 o 3 ZSKs and 2 KSKs, DNSKEY response of 1975 octets;
1239
1240 o 3 ZSKs and 1 KSK, DNSKEY response of 1414 octets; and
1241
1242 o 2 ZSKs and 1 KSK, DNSKEY response of 1139 octets.
1243
1244 RIPE Atlas probes were used as described in Section 5.1.1 to send
1245 DNSKEY queries directly to Yeti-Root servers. The numbers of queries
1246 and failures were recorded and categorized according to the response
1247 sizes at the time the queries were sent. A summary of the results
1248 ([YetiLR]) is as follows:
1249
1250 +---------------+----------+---------------+--------------+
1251 | Response Size | Failures | Total Queries | Failure Rate |
1252 +---------------+----------+---------------+--------------+
1253 | 1139 | 274 | 64252 | 0.0042 |
1254 | 1414 | 3141 | 126951 | 0.0247 |
1255 | 1975 | 2920 | 42529 | 0.0687 |
1256 +---------------+----------+---------------+--------------+
1257
1258
1259
1260
1261
1262 Song, et al. Informational [Page 23]
1263 RFC 8483 Yeti DNS Testbed October 2018
1264
1265
1266 The general approach illustrated briefly here provides a useful
1267 example of how the design of the Yeti DNS testbed, separate from the
1268 Root Server system but constructed as a live testbed on the Internet,
1269 facilitates the use of general-purpose active measurement facilities
1270 (such as RIPE Atlas probes) as well as internal passive measurement
1271 (such as packet capture).
1272
1273 5.4. Capture of Large DNS Response
1274
1275 Packet capture is a common approach in production DNS systems where
1276 operators require fine-grained insight into traffic in order to
1277 understand production traffic. For authoritative servers, capture of
1278 inbound query traffic is often sufficient, since responses can be
1279 synthesized with knowledge of the zones being served at the time the
1280 query was received. Queries are generally small enough not to be
1281 fragmented, and even with TCP transport are generally packed within a
1282 single segment.
1283
1284 The Yeti DNS testbed has different requirements; in particular, there
1285 is a desire to compare responses obtained from the Yeti
1286 infrastructure with those received from the Root Server system in
1287 response to a single query stream (e.g., using the "Yeti Many Mirror
1288 Verifier" (YmmV) as described in Appendix D). Some Yeti-Root servers
1289 were capable of recovering complete DNS messages from within
1290 nameservers, e.g., using dnstap; however, not all servers provided
1291 that functionality, and a consistent approach was desirable.
1292
1293 The requirement to perform passive capture of responses from the wire
1294 together with experiments that were expected (and in some cases
1295 designed) to trigger fragmentation and use of TCP transport led to
1296 the development of a new tool, PcapParser, to perform fragment and
1297 TCP stream reassembly from raw packet capture data. A brief
1298 description of PcapParser is included in Appendix D.
1299
1300 5.5. Automated Maintenance of the Hints File
1301
1302 Renumbering events in the Root Server system are relatively rare.
1303 Although each such event is accompanied by the publication of an
1304 updated hints file in standard locations, the task of updating local
1305 copies of that file used by DNS resolvers is manual, and the process
1306 has an observably long tail. For example, in 2015 J-Root was still
1307 receiving traffic at its old address some thirteen years after
1308 renumbering [Wessels2015].
1309
1310 The observed impact of these old, deployed hints files is minimal,
1311 likely due to the very low frequency of such renumbering events.
1312 Even the oldest of hints files would still contain some accurate root
1313 server addresses from which priming responses could be obtained.
1314
1315
1316
1317 Song, et al. Informational [Page 24]
1318 RFC 8483 Yeti DNS Testbed October 2018
1319
1320
1321 By contrast, due to the experimental nature of the system and the
1322 fact that it is operated mainly by volunteers, Yeti-Root servers are
1323 added, removed, and renumbered with much greater frequency. A tool
1324 to facilitate automatic maintenance of hints files was therefore
1325 created: [hintUpdate].
1326
The automated procedure followed by the hintUpdate tool is as
follows; a minimal illustrative sketch appears after the list.
1329
1330 1. Use the local resolver to obtain a response to the query
1331 "./IN/NS".
1332
1333 2. Use the local resolver to obtain a set of IPv4 and IPv6 addresses
1334 for each name server.
1335
1336 3. Validate all signatures obtained from the local resolvers and
1337 confirm that all data is signed.
1338
1339 4. Compare the data obtained to that contained within the currently
1340 active hints file; if there are differences, rotate the old one
1341 away and replace it with a new one.
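
A minimal sketch of this procedure (not the hintUpdate tool itself)
is shown below.  It assumes the Python "dnspython" library and a
local validating resolver; the signature validation of step 3 is
delegated to that resolver, and the hints-file path is a
hypothetical example.

   import dns.resolver

   HINTS_PATH = "/var/named/yeti-hints"    # hypothetical location

   def build_hints():
       lines = []
       for ns in dns.resolver.resolve(".", "NS"):           # step 1
           name = ns.target.to_text()
           lines.append(". 3600000 IN NS %s" % name)
           for rdtype in ("AAAA", "A"):                      # step 2
               try:
                   for addr in dns.resolver.resolve(name, rdtype):
                       lines.append("%s 3600000 IN %s %s"
                                    % (name, rdtype, addr))
               except dns.resolver.NoAnswer:
                   pass
       return "\n".join(lines) + "\n"

   def update_hints_if_changed():
       new_hints = build_hints()
       try:
           old_hints = open(HINTS_PATH).read()
       except FileNotFoundError:
           old_hints = ""
       if new_hints != old_hints:                            # step 4
           open(HINTS_PATH, "w").write(new_hints)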
1342
1343 This tool would not function unmodified when used in the Root Server
1344 system, since the names of individual Root Servers (e.g., A.ROOT-
1345 SERVERS.NET) are not DNSSEC signed. All Yeti-Root server names are
1346 DNSSEC signed, however, and hence this tool functions as expected in
1347 that environment.
1348
1349 5.6. Root Label Compression in Knot DNS Server
1350
1351 [RFC1035] specifies that domain names can be compressed when encoded
1352 in DNS messages, and can be represented as one of
1353
1354 1. a sequence of labels ending in a zero octet;
1355
1356 2. a pointer; or
1357
1358 3. a sequence of labels ending with a pointer.
1359
1360 The purpose of this flexibility is to reduce the size of domain names
1361 encoded in DNS messages.
1362
1363 It was observed that Yeti-Root servers running Knot 2.0 would
compress the zero-length label (the root domain, often represented
as ".") using a pointer to an earlier occurrence.  Although legal,
this
1366 encoding increases the encoded size of the root label from one octet
1367 to two; it was also found to break some client software -- in
1368
1369
1370
1371
1372 Song, et al. Informational [Page 25]
1373 RFC 8483 Yeti DNS Testbed October 2018
1374
1375
1376 particular, the Go DNS library. Bug reports were filed against both
1377 Knot and the Go DNS library, and both were resolved in subsequent
1378 releases.
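
For illustration, the two legal wire encodings of the root name
discussed above are shown below as Python byte strings; the pointer
target of offset 12 (the start of the first name in a typical DNS
message) is only an example.

   root_uncompressed = b"\x00"      # one octet: empty-label terminator
   root_as_pointer   = b"\xc0\x0c"  # two octets: compression pointer
                                    # to offset 12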
1379
1380 6. Conclusions
1381
1382 Yeti DNS was designed and implemented as a live DNS root system
1383 testbed. It serves a root zone ("Yeti-Root" in this document)
1384 derived from the root zone published by the IANA with only those
1385 structural modifications necessary to ensure its function in the
1386 testbed system. The Yeti DNS testbed has proven to be a useful
1387 platform to address many questions that would be challenging to
1388 answer using the production Root Server system, such as those
1389 included in Section 3.
1390
1391 Indicative findings following from the construction and operation of
1392 the Yeti DNS testbed include:
1393
1394 o Operation in a pure IPv6-only environment; confirmation of a
1395 significant failure rate in the transmission of large responses
1396 (~7%), but no other persistent failures observed. Two cases in
1397 which Yeti-Root servers failed to retrieve the Yeti-Root zone due
1398 to fragmentation of TCP segments; mitigated by setting a TCP MSS
1399 of 1220 octets (see Section 5.1.1).
1400
1401 o Successful operation with three autonomous Yeti-Root zone signers
1402 and 25 Yeti-Root servers, and confirmation that IXFR is not an
appropriate transfer mechanism for zones that are structurally
1404 incongruent across different transfer paths (see Section 5.2).
1405
1406 o ZSK size increased to 2048 bits and multiple KSK rollovers
1407 executed to exercise support of RFC 5011 in validating resolvers;
1408 identification of pitfalls relating to views in BIND9 when
1409 configured with "managed-keys" (see Section 5.3).
1410
1411 o Use of natural (non-normalized) names for Yeti-Root servers
1412 exposed some differences between implementations in the inclusion
1413 of additional-section glue in responses to priming queries;
1414 however, despite this inefficiency, Yeti resolvers were observed
1415 to function adequately (see Section 4.5).
1416
1417 o It was observed that Knot 2.0 performed label compression on the
1418 root (empty) label. This resulted in an increased encoding size
1419 for references to the root label, since a pointer is encoded as
1420 two octets whilst the root label itself only requires one (see
1421 Section 5.6).
1422
1423
1424
1425
1426
1427 Song, et al. Informational [Page 26]
1428 RFC 8483 Yeti DNS Testbed October 2018
1429
1430
1431 o Some tools were developed in response to the operational
1432 experience of running and using the Yeti DNS testbed: DNS fragment
1433 and DNS Additional Truncated Response (ATR) for large DNS
1434 responses, a BIND9 patch for additional-section glue, YmmV, and
1435 IPv6 defrag for capturing and mirroring traffic. In addition, a
1436 tool to facilitate automatic maintenance of hints files was
1437 created (see Appendix D).
1438
The Yeti DNS testbed was used only by end-users whose local
infrastructure providers had made the conscious decision to do so, as
is appropriate for an experimental, non-production system.  To date,
no serious user complaints have reached Yeti's mailing list during
normal operation.  Adding more server instances to the Yeti root
system might further improve the quality of service, but the observed
performance is generally accepted to be adequate for the purposes of
a DNS root testbed.
1447
1448 The experience gained during the operation of the Yeti DNS testbed
1449 suggested several topics worthy of further study:
1450
1451 o Priming truncation and TCP-only Yeti-Root servers: observe and
measure the worst possible case for priming truncation by
1453 responding with TC=1 to all priming queries received over UDP
1454 transport, forcing clients to retry using TCP. This should also
1455 give some insight into the usefulness of TCP-only DNS in general.
1456
1457 o KSK ECDSA Rollover: one possible way to reduce DNSKEY response
1458 sizes is to change to an elliptic curve signing algorithm. While
1459 in principle this can be done separately for the KSK and the ZSK,
the RIPE NCC has recently done research and discovered that some
1461 resolvers require that both KSK and ZSK use the same algorithm.
1462 This means that an algorithm roll also involves a KSK roll.
1463 Performing an algorithm roll at the root would be an interesting
1464 challenge.
1465
o  Sticky NOTIFY for zone transfer: the non-applicability of IXFR as
   a zone transfer mechanism in the Yeti DNS testbed could be
   mitigated by having each slave maintain a sticky preference for a
   single master server, so that an initial AXFR response could be
   followed up with IXFR requests without compromising zone integrity
   in the case (as with Yeti) where equivalent but incongruent
   versions of a zone are served by different masters.
1474
1475
1476
1477
1478
1479
1480
1481
1482 Song, et al. Informational [Page 27]
1483 RFC 8483 Yeti DNS Testbed October 2018
1484
1485
1486 o Key distribution for zone transfer credentials: the use of a
1487 shared secret between slave and master requires key distribution
1488 and management whose scaling properties are not ideally suited to
1489 systems with large numbers of transfer clients. Other approaches
1490 for key distribution and authentication could be considered.
1491
o  DNS is a tree-based hierarchical database with dependencies
   between parent and child nodes; a failure or instability of a
   parent node (the root, in Yeti's case), whether caused by human
   error, a malicious attack, or even a natural disaster such as an
   earthquake, may affect its child nodes.  It is proposed to define
   technology and practices that allow any organization, from the
   smallest company to a nation, to be self-sufficient in its DNS.
1499
o  Section 3.12 of [RFC8324] identifies a "Centrally Controlled Root"
   as an issue with the DNS.  In future work, it would be interesting
   to test technical tools such as blockchain [BC], either to remove
   the technical requirement for a central authority over the root or
   to enhance the security and stability of the existing Root.
1506
1507 7. Security Considerations
1508
As introduced in Section 4.4, service metadata is synchronized among
the three DMs using Git.  Any security issue affecting Git may
therefore affect Yeti DM operation; for example, an attacker who
compromised one DM's Git repository could push unwanted changes to
the Yeti DM system, introducing a bad root server or a bad key for a
period of time.

A Yeti resolver needs bootstrapping files, such as the Yeti hints
file and trust anchor, in order to join the testbed.  All required
information is published on <yeti-dns.org> and <github.com>.  If an
attacker tampered with those websites or substituted a fake page, a
new resolver could be misdirected and configured with a bad root.
1520
DNSSEC is an important research topic in the Yeti DNS testbed.  To
reduce the centralization of the DNSSEC function for the root zone,
we sign the Yeti-Root zone using multiple, independently operated
DNSSEC signers and multiple corresponding ZSKs (see Section 4.2).  To
help verify ICANN's KSK rollover, we rolled the Yeti KSK three times
following RFC 5011 and recorded a number of observations (see
Section 5.3).  In addition, larger RSA key sizes were used in the
testbed before 2048-bit keys were adopted for ZSK signing of the IANA
Root zone.
1530
1531 8. IANA Considerations
1532
1533 This document has no IANA actions.
1534
1535
1536
1537 Song, et al. Informational [Page 28]
1538 RFC 8483 Yeti DNS Testbed October 2018
1539
1540
1541 9. References
1542
1543 9.1. Normative References
1544
1545 [RFC1034] Mockapetris, P., "Domain names - concepts and facilities",
1546 STD 13, RFC 1034, DOI 10.17487/RFC1034, November 1987,
1547 <https://www.rfc-editor.org/info/rfc1034>.
1548
1549 [RFC1035] Mockapetris, P., "Domain names - implementation and
1550 specification", STD 13, RFC 1035, DOI 10.17487/RFC1035,
1551 November 1987, <https://www.rfc-editor.org/info/rfc1035>.
1552
1553 [RFC1995] Ohta, M., "Incremental Zone Transfer in DNS", RFC 1995,
1554 DOI 10.17487/RFC1995, August 1996,
1555 <https://www.rfc-editor.org/info/rfc1995>.
1556
1557 [RFC1996] Vixie, P., "A Mechanism for Prompt Notification of Zone
1558 Changes (DNS NOTIFY)", RFC 1996, DOI 10.17487/RFC1996,
1559 August 1996, <https://www.rfc-editor.org/info/rfc1996>.
1560
1561 [RFC5011] StJohns, M., "Automated Updates of DNS Security (DNSSEC)
1562 Trust Anchors", STD 74, RFC 5011, DOI 10.17487/RFC5011,
1563 September 2007, <https://www.rfc-editor.org/info/rfc5011>.
1564
1565 [RFC5890] Klensin, J., "Internationalized Domain Names for
1566 Applications (IDNA): Definitions and Document Framework",
1567 RFC 5890, DOI 10.17487/RFC5890, August 2010,
1568 <https://www.rfc-editor.org/info/rfc5890>.
1569
1570 9.2. Informative References
1571
1572 [ATR] Song, L., "ATR: Additional Truncation Response for Large
1573 DNS Response", Work in Progress, draft-song-atr-large-
1574 resp-02, August 2018.
1575
1576 [BC] Wikipedia, "Blockchain", September 2018,
1577 <https://en.wikipedia.org/w/
1578 index.php?title=Blockchain&oldid=861681529>.
1579
1580 [FRAGDROP] Jaeggli, J., Colitti, L., Kumari, W., Vyncke, E., Kaeo,
1581 M., and T. Taylor, "Why Operators Filter Fragments and
1582 What It Implies", Work in Progress, draft-taylor-v6ops-
1583 fragdrop-02, December 2013.
1584
1585 [FRAGMENTS]
1586 Sivaraman, M., Kerr, S., and D. Song, "DNS message
1587 fragments", Work in Progress, draft-muks-dns-message-
1588 fragments-00, July 2015.
1589
1590
1591
1592 Song, et al. Informational [Page 29]
1593 RFC 8483 Yeti DNS Testbed October 2018
1594
1595
1596 [hintUpdate]
1597 "Hintfile Auto Update", commit de428c0, October 2015,
1598 <https://github.com/BII-Lab/Hintfile-Auto-Update>.
1599
1600 [HOW_ATR_WORKS]
1601 Huston, G., "How well does ATR actually work?",
1602 APNIC blog, April 2018,
1603 <https://blog.apnic.net/2018/04/16/
1604 how-well-does-atr-actually-work/>.
1605
1606 [ICANN2010]
1607 Schlyter, J., Lamb, R., and R. Balasubramanian, "DNSSEC
1608 Key Management Implementation for the Root Zone (DRAFT)",
1609 May 2010, <http://www.root-dnssec.org/wp-content/
1610 uploads/2010/05/draft-icann-dnssec-keymgmt-01.txt>.
1611
1612 [ICANN2016]
1613 Design Team, "Root Zone KSK Rollover Plan", March 2016,
1614 <https://www.iana.org/reports/2016/
1615 root-ksk-rollover-design-20160307.pdf>.
1616
1617 [ICANN2017]
1618 ICANN, "2017 KSK Rollover External Test Plan", July 2016,
1619 <https://www.icann.org/en/system/files/files/
1620 ksk-rollover-external-test-plan-22jul16-en.pdf>.
1621
1622 [IPv6-frag-DNS]
1623 Huston, G., "Dealing with IPv6 fragmentation in the DNS",
1624 APNIC blog, August 2017,
1625 <https://blog.apnic.net/2017/08/22/
1626 dealing-ipv6-fragmentation-dns>.
1627
1628 [ISC-BIND] Risk, V., "2017 Root Key Rollover - What Does it Mean for
1629 BIND Users?", Internet Systems Consortium, December 2016,
1630 <https://www.isc.org/blogs/2017-root-key-rollover-what-
1631 does-it-mean-for-bind-users/>.
1632
1633 [ISC-TN-2003-1]
1634 Abley, J., "Hierarchical Anycast for Global Service
1635 Distribution", March 2003,
1636 <http://ftp.isc.org/isc/pubs/tn/isc-tn-2003-1.txt>.
1637
1638 [ITI2014] ICANN, "Identifier Technology Innovation Report", May
1639 2014, <https://www.icann.org/en/system/files/files/
1640 iti-report-15may14-en.pdf>.
1641
1642
1643
1644
1645
1646
1647 Song, et al. Informational [Page 30]
1648 RFC 8483 Yeti DNS Testbed October 2018
1649
1650
1651 [KROLL-ISSUE]
1652 Song, D., "A DNSSEC issue during Yeti KSK rollover", Yeti
1653 DNS blog, October 2016, <http://yeti-dns.org/yeti/blog/
1654 2016/10/26/A-DNSSEC-issue-during-Yeti-KSK-rollover.html>.
1655
1656 [PINZ] Song, D., "Yeti experiment plan for PINZ", Yeti DNS blog,
1657 May 2018, <http://yeti-dns.org/yeti/blog/2018/05/01/
1658 Experiment-plan-for-PINZ.html>.
1659
1660 [RFC2826] Internet Architecture Board, "IAB Technical Comment on the
1661 Unique DNS Root", RFC 2826, DOI 10.17487/RFC2826, May
1662 2000, <https://www.rfc-editor.org/info/rfc2826>.
1663
1664 [RFC2845] Vixie, P., Gudmundsson, O., Eastlake 3rd, D., and B.
1665 Wellington, "Secret Key Transaction Authentication for DNS
1666 (TSIG)", RFC 2845, DOI 10.17487/RFC2845, May 2000,
1667 <https://www.rfc-editor.org/info/rfc2845>.
1668
1669 [RFC6219] Li, X., Bao, C., Chen, M., Zhang, H., and J. Wu, "The
1670 China Education and Research Network (CERNET) IVI
1671 Translation Design and Deployment for the IPv4/IPv6
1672 Coexistence and Transition", RFC 6219,
1673 DOI 10.17487/RFC6219, May 2011,
1674 <https://www.rfc-editor.org/info/rfc6219>.
1675
1676 [RFC6891] Damas, J., Graff, M., and P. Vixie, "Extension Mechanisms
1677 for DNS (EDNS(0))", STD 75, RFC 6891,
1678 DOI 10.17487/RFC6891, April 2013,
1679 <https://www.rfc-editor.org/info/rfc6891>.
1680
1681 [RFC7720] Blanchet, M. and L-J. Liman, "DNS Root Name Service
1682 Protocol and Deployment Requirements", BCP 40, RFC 7720,
1683 DOI 10.17487/RFC7720, December 2015,
1684 <https://www.rfc-editor.org/info/rfc7720>.
1685
1686 [RFC7872] Gont, F., Linkova, J., Chown, T., and W. Liu,
1687 "Observations on the Dropping of Packets with IPv6
1688 Extension Headers in the Real World", RFC 7872,
1689 DOI 10.17487/RFC7872, June 2016,
1690 <https://www.rfc-editor.org/info/rfc7872>.
1691
1692 [RFC8109] Koch, P., Larson, M., and P. Hoffman, "Initializing a DNS
1693 Resolver with Priming Queries", BCP 209, RFC 8109,
1694 DOI 10.17487/RFC8109, March 2017,
1695 <https://www.rfc-editor.org/info/rfc8109>.
1696
1697
1698
1699
1700
1701
1702 Song, et al. Informational [Page 31]
1703 RFC 8483 Yeti DNS Testbed October 2018
1704
1705
1706 [RFC8324] Klensin, J., "DNS Privacy, Authorization, Special Uses,
1707 Encoding, Characters, Matching, and Root Structure: Time
1708 for Another Look?", RFC 8324, DOI 10.17487/RFC8324,
1709 February 2018, <https://www.rfc-editor.org/info/rfc8324>.
1710
1711 [RRL] Vixie, P. and V. Schryver, "Response Rate Limiting in the
1712 Domain Name System (DNS RRL)", June 2012,
1713 <http://www.redbarn.org/dns/ratelimits>.
1714
1715 [RSSAC001] Root Server System Advisory Committee (RSSAC), "Service
1716 Expectations of Root Servers", RSSAC001 Version 1,
1717 December 2015,
1718 <https://www.icann.org/en/system/files/files/
1719 rssac-001-root-service-expectations-04dec15-en.pdf>.
1720
1721 [RSSAC023] Root Server System Advisory Committee (RSSAC), "History of
1722 the Root Server System", November 2016,
1723 <https://www.icann.org/en/system/files/files/
1724 rssac-023-04nov16-en.pdf>.
1725
1726 [SUNSET4] IETF, "Sunsetting IPv4 (sunset4) Concluded WG",
1727 <https://datatracker.ietf.org/wg/sunset4/about/>.
1728
1729 [TNO2009] Gijsen, B., Jamakovic, A., and F. Roijers, "Root Scaling
1730 Study: Description of the DNS Root Scaling Model",
1731 TNO report, September 2009,
1732 <https://www.icann.org/en/system/files/files/
1733 root-scaling-model-description-29sep09-en.pdf>.
1734
1735 [USE_MIN_MTU]
1736 Andrews, M., "TCP Fails To Respect IPV6_USE_MIN_MTU", Work
1737 in Progress, draft-andrews-tcp-and-ipv6-use-minmtu-04,
1738 October 2015.
1739
1740 [Wessels2015]
1741 Wessels, D., Castonguay, J., and P. Barber, "Thirteen
1742 Years of 'Old J-Root'", DNS-OARC Fall 2015 Workshop,
1743 October 2015, <https://indico.dns-oarc.net/event/24/
1744 session/10/contribution/10/material/slides/0.pdf>.
1745
1746 [YetiLR] "Observation on Large response issue during Yeti KSK
1747 rollover", Yeti DNS blog, August 2017,
1748 <https://yeti-dns.org/yeti/blog/2017/08/02/
1749 large-packet-impact-during-yeti-ksk-rollover.html>.
1750
1751
1752
1753
1754
1755
1756
1757 Song, et al. Informational [Page 32]
1758 RFC 8483 Yeti DNS Testbed October 2018
1759
1760
1761 Appendix A. Yeti-Root Hints File
1762
1763 The following hints file (complete and accurate at the time of
1764 writing) causes a DNS resolver to use the Yeti DNS testbed in place
1765 of the production Root Server system and hence participate in
1766 experiments running on the testbed.
1767
1768 Note that some lines have been wrapped in the text that follows in
1769 order to fit within the production constraints of this document.
Wrapped lines are indicated with a backslash character ("\"),
1771 following common convention.
1772
1773 . 3600000 IN NS bii.dns-lab.net
1774 bii.dns-lab.net 3600000 IN AAAA 240c:f:1:22::6
1775 . 3600000 IN NS yeti-ns.tisf.net
1776 yeti-ns.tisf.net 3600000 IN AAAA 2001:559:8000::6
1777 . 3600000 IN NS yeti-ns.wide.ad.jp
1778 yeti-ns.wide.ad.jp 3600000 IN AAAA 2001:200:1d9::35
1779 . 3600000 IN NS yeti-ns.as59715.net
1780 yeti-ns.as59715.net 3600000 IN AAAA \
1781 2a02:cdc5:9715:0:185:5:203:53
1782 . 3600000 IN NS dahu1.yeti.eu.org
1783 dahu1.yeti.eu.org 3600000 IN AAAA \
1784 2001:4b98:dc2:45:216:3eff:fe4b:8c5b
1785 . 3600000 IN NS ns-yeti.bondis.org
1786 ns-yeti.bondis.org 3600000 IN AAAA 2a02:2810:0:405::250
1787 . 3600000 IN NS yeti-ns.ix.ru
1788 yeti-ns.ix.ru 3600000 IN AAAA 2001:6d0:6d06::53
1789 . 3600000 IN NS yeti.bofh.priv.at
1790 yeti.bofh.priv.at 3600000 IN AAAA 2a01:4f8:161:6106:1::10
1791 . 3600000 IN NS yeti.ipv6.ernet.in
1792 yeti.ipv6.ernet.in 3600000 IN AAAA 2001:e30:1c1e:1::333
1793 . 3600000 IN NS yeti-dns01.dnsworkshop.org
1794 yeti-dns01.dnsworkshop.org \
1795 3600000 IN AAAA 2001:1608:10:167:32e::53
1796 . 3600000 IN NS yeti-ns.conit.co
1797 yeti-ns.conit.co 3600000 IN AAAA \
1798 2604:6600:2000:11::4854:a010
1799 . 3600000 IN NS dahu2.yeti.eu.org
1800 dahu2.yeti.eu.org 3600000 IN AAAA 2001:67c:217c:6::2
1801 . 3600000 IN NS yeti.aquaray.com
1802 yeti.aquaray.com 3600000 IN AAAA 2a02:ec0:200::1
1803 . 3600000 IN NS yeti-ns.switch.ch
1804 yeti-ns.switch.ch 3600000 IN AAAA 2001:620:0:ff::29
1805 . 3600000 IN NS yeti-ns.lab.nic.cl
1806 yeti-ns.lab.nic.cl 3600000 IN AAAA 2001:1398:1:21::8001
1807 . 3600000 IN NS yeti-ns1.dns-lab.net
1808
1809
1810
1811
1812 Song, et al. Informational [Page 33]
1813 RFC 8483 Yeti DNS Testbed October 2018
1814
1815
1816 yeti-ns1.dns-lab.net 3600000 IN AAAA 2001:da8:a3:a027::6
1817 . 3600000 IN NS yeti-ns2.dns-lab.net
1818 yeti-ns2.dns-lab.net 3600000 IN AAAA 2001:da8:268:4200::6
1819 . 3600000 IN NS yeti-ns3.dns-lab.net
1820 yeti-ns3.dns-lab.net 3600000 IN AAAA 2400:a980:30ff::6
1821 . 3600000 IN NS \
1822 ca978112ca1bbdcafac231b39a23dc.yeti-dns.net
1823 ca978112ca1bbdcafac231b39a23dc.yeti-dns.net \
1824 3600000 IN AAAA 2c0f:f530::6
1825 . 3600000 IN NS \
1826 3e23e8160039594a33894f6564e1b1.yeti-dns.net
1827 3e23e8160039594a33894f6564e1b1.yeti-dns.net \
1828 3600000 IN AAAA 2803:80:1004:63::1
1829 . 3600000 IN NS \
1830 3f79bb7b435b05321651daefd374cd.yeti-dns.net
1831 3f79bb7b435b05321651daefd374cd.yeti-dns.net \
1832 3600000 IN AAAA 2401:c900:1401:3b:c::6
1833 . 3600000 IN NS \
1834 xn--r2bi1c.xn--h2bv6c0a.xn--h2brj9c
1835 xn--r2bi1c.xn--h2bv6c0a.xn--h2brj9c \
1836 3600000 IN AAAA 2001:e30:1c1e:10::333
1837 . 3600000 IN NS yeti1.ipv6.ernet.in
1838 yeti1.ipv6.ernet.in 3600000 IN AAAA 2001:e30:187d::333
1839 . 3600000 IN NS yeti-dns02.dnsworkshop.org
1840 yeti-dns02.dnsworkshop.org \
1841 3600000 IN AAAA 2001:19f0:0:1133::53
1842 . 3600000 IN NS yeti.mind-dns.nl
1843 yeti.mind-dns.nl 3600000 IN AAAA 2a02:990:100:b01::53:0
1844
1845 Appendix B. Yeti-Root Server Priming Response
1846
The following is a reply from a Yeti-Root name server to a priming
query.  The authoritative server runs NSD.
1849
1850 ...
1851 ;; Got answer:
1852 ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 62391
1853 ;; flags: qr aa rd; QUERY: 1, ANSWER: 26, AUTHORITY: 0, ADDITIONAL: 7
1854 ;; WARNING: recursion requested but not available
1855
1856 ;; OPT PSEUDOSECTION:
1857 ; EDNS: version: 0, flags: do; udp: 1460
1858 ;; QUESTION SECTION:
1859 ;. IN NS
1860
1861 ;; ANSWER SECTION:
1862 . 86400 IN NS bii.dns-lab.net.
1863 . 86400 IN NS yeti.bofh.priv.at.
1864
1865
1866
1867 Song, et al. Informational [Page 34]
1868 RFC 8483 Yeti DNS Testbed October 2018
1869
1870
1871 . 86400 IN NS yeti.ipv6.ernet.in.
1872 . 86400 IN NS yeti.aquaray.com.
1873 . 86400 IN NS yeti.jhcloos.net.
1874 . 86400 IN NS yeti.mind-dns.nl.
1875 . 86400 IN NS dahu1.yeti.eu.org.
1876 . 86400 IN NS dahu2.yeti.eu.org.
1877 . 86400 IN NS yeti1.ipv6.ernet.in.
1878 . 86400 IN NS ns-yeti.bondis.org.
1879 . 86400 IN NS yeti-ns.ix.ru.
1880 . 86400 IN NS yeti-ns.lab.nic.cl.
1881 . 86400 IN NS yeti-ns.tisf.net.
1882 . 86400 IN NS yeti-ns.wide.ad.jp.
1883 . 86400 IN NS yeti-ns.datev.net.
1884 . 86400 IN NS yeti-ns.switch.ch.
1885 . 86400 IN NS yeti-ns.as59715.net.
1886 . 86400 IN NS yeti-ns1.dns-lab.net.
1887 . 86400 IN NS yeti-ns2.dns-lab.net.
1888 . 86400 IN NS yeti-ns3.dns-lab.net.
1889 . 86400 IN NS xn--r2bi1c.xn--h2bv6c0a.xn--h2brj9c.
1890 . 86400 IN NS yeti-dns01.dnsworkshop.org.
1891 . 86400 IN NS yeti-dns02.dnsworkshop.org.
1892 . 86400 IN NS 3f79bb7b435b05321651daefd374cd.yeti-dns.net.
1893 . 86400 IN NS ca978112ca1bbdcafac231b39a23dc.yeti-dns.net.
1894 . 86400 IN RRSIG NS 8 0 86400 (
1895 20171121050105 20171114050105 26253 .
1896 FUvezvZgKtlLzQx2WKyg+D6dw/pITcbuZhzStZfg+LNa
1897 DjLJ9oGIBTU1BuqTujKHdxQn0DcdFh9QE68EPs+93bZr
1898 VlplkmObj8f0B7zTQgGWBkI/K4Tn6bZ1I7QJ0Zwnk1mS
1899 BmEPkWmvo0kkaTQbcID+tMTodL6wPAgW1AdwQUInfy21
1900 p+31GGm3+SU6SJsgeHOzPUQW+dUVWmdj6uvWCnUkzW9p
1901 +5en4+85jBfEOf+qiyvaQwUUe98xZ1TOiSwYvk5s/qiv
1902 AMjG6nY+xndwJUwhcJAXBVmGgrtbiR8GiGZfGqt748VX
1903 4esLNtD8vdypucffem6n0T0eV1c+7j/eIA== )
1904
1905 ;; ADDITIONAL SECTION:
1906 bii.dns-lab.net. 86400 IN AAAA 240c:f:1:22::6
1907 yeti.bofh.priv.at. 86400 IN AAAA 2a01:4f8:161:6106:1::10
1908 yeti.ipv6.ernet.in. 86400 IN AAAA 2001:e30:1c1e:1::333
1909 yeti.aquaray.com. 86400 IN AAAA 2a02:ec0:200::1
1910 yeti.jhcloos.net. 86400 IN AAAA 2001:19f0:5401:1c3::53
1911 yeti.mind-dns.nl. 86400 IN AAAA 2a02:990:100:b01::53:0
1912
1913 ;; Query time: 163 msec
1914 ;; SERVER: 2001:4b98:dc2:45:216:3eff:fe4b:8c5b#53
1915 ;; WHEN: Tue Nov 14 16:45:37 +08 2017
1916 ;; MSG SIZE rcvd: 1222
1917
1918
1919
1920
1921
1922 Song, et al. Informational [Page 35]
1923 RFC 8483 Yeti DNS Testbed October 2018
1924
1925
1926 Appendix C. Active IPv6 Prefixes in Yeti DNS Testbed
1927
1928 The following table shows the prefixes that were active during 2017.
1929
1930 +----------------------+---------------------------------+----------+
1931 | Prefix | Originator | Location |
1932 +----------------------+---------------------------------+----------+
1933 | 240c::/28 | BII | CN |
1934 | 2001:6d0:6d06::/48 | MSK-IX | RU |
1935 | 2001:1488::/32 | CZ.NIC | CZ |
1936 | 2001:620::/32 | SWITCH | CH |
1937 | 2001:470::/32 | Hurricane Electric, Inc. | US |
1938 | 2001:0DA8:0202::/48 | BUPT6-CERNET2 | CN |
1939 | 2001:19f0:6c00::/38 | Choopa, LLC | US |
1940 | 2001:da8:205::/48 | BJTU6-CERNET2 | CN |
1941 | 2001:62a::/31 | Vienna University Computer | AT |
1942 | | Center | |
1943 | 2001:67c:217c::/48 | AFNIC | FR |
1944 | 2a02:2478::/32 | Profitbricks GmbH | DE |
1945 | 2001:1398:1::/48 | NIC Chile | CL |
1946 | 2001:4490:dc4c::/46 | NIB (National Internet | IN |
1947 | | Backbone) | |
1948 | 2001:4b98::/32 | Gandi | FR |
1949 | 2a02:aa8:0:2000::/52 | T-Systems-Eltec | ES |
1950 | 2a03:b240::/32 | Netskin GmbH | CH |
1951 | 2801:1a0::/42 | Universidad de Ibague | CO |
1952 | 2a00:1cc8::/40 | ICT Valle Umbra s.r.l. | IT |
1953 | 2a02:cdc0::/29 | ORG-CdSB1-RIPE | IT |
1954 +----------------------+---------------------------------+----------+
1955
1956 Appendix D. Tools Developed for Yeti DNS Testbed
1957
1958 Various tools were developed to support the Yeti DNS testbed, a
1959 selection of which are described briefly below.
1960
1961 YmmV ("Yeti Many Mirror Verifier") is designed to make it easy and
1962 safe for a DNS administrator to capture traffic sent from a resolver
1963 to the Root Server system and to replay it towards Yeti-Root servers.
1964 Responses from both systems are recorded and compared, and
1965 differences are logged. See <https://github.com/BII-Lab/ymmv>.
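
A much-simplified sketch of the comparison step (not YmmV itself,
which replays previously captured queries) follows, assuming the
Python "dnspython" library; the two server addresses are examples
only.

   import dns.message
   import dns.query

   IANA_ROOT = "2001:500:2f::f"     # f.root-servers.net (example)
   YETI_ROOT = "240c:f:1:22::6"     # bii.dns-lab.net (example)

   def compare(qname, qtype):
       """Log a difference if the two systems answer differently."""
       query = dns.message.make_query(qname, qtype)
       a = dns.query.udp(query, IANA_ROOT, timeout=5)
       b = dns.query.udp(query, YETI_ROOT, timeout=5)
       if (sorted(rrset.to_text() for rrset in a.answer) !=
               sorted(rrset.to_text() for rrset in b.answer)):
           print("difference observed for %s/%s" % (qname, qtype))

   compare("example.com.", "NS")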
1966
1967 PcapParser is a module used by YmmV which reassembles fragmented IPv6
1968 datagrams and TCP segments from a PCAP archive and extracts DNS
1969 messages contained within them. See <https://github.com/RunxiaWan/
1970 PcapParser>.
1971
1972
1973
1974
1975
1976
1977 Song, et al. Informational [Page 36]
1978 RFC 8483 Yeti DNS Testbed October 2018
1979
1980
1981 DNS-layer-fragmentation implements DNS proxies that perform
1982 application-level fragmentation of DNS messages, based on
1983 [FRAGMENTS]. The idea with these proxies is to explore splitting DNS
messages in the protocol itself, so that they will not be fragmented by
1985 the IP layer. See <https://github.com/BII-Lab/DNS-layer-
1986 Fragmentation>.
1987
1988 DNS_ATR is an implementation of DNS Additional Truncated Response
1989 (ATR), as described in [ATR] and [HOW_ATR_WORKS]. DNS_ATR acts as a
1990 proxy between resolver and authoritative servers, forwarding queries
1991 and responses as a silent and transparent listener. Responses that
1992 are larger than a nominated threshold (1280 octets by default)
1993 trigger additional truncated responses to be sent immediately
1994 following the large response. See <https://github.com/songlinjian/
1995 DNS_ATR>.
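
A minimal sketch of the ATR decision described above (not the
DNS_ATR implementation itself, and assuming the Python "dnspython"
library):

   import dns.flags
   import dns.message

   ATR_THRESHOLD = 1280    # octets, the default mentioned above

   def maybe_make_atr(query_wire, response_wire):
       """Return the wire form of an ATR follow-up message, or None."""
       if len(response_wire) <= ATR_THRESHOLD:
           return None
       query = dns.message.from_wire(query_wire)
       atr = dns.message.make_response(query)
       atr.flags |= dns.flags.TC     # empty, truncated response
       return atr.to_wire()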
1996
1997 Appendix E. Controversy
1998
The Yeti DNS Project, its infrastructure, and the various experiments
that have been carried out using that infrastructure have been
2001 described by people involved in the project in many public meetings
at technical venues since its inception.  The mailing lists through
which the operation of the infrastructure has been coordinated are
2004 open to join, and their archives are public. The project as a whole
2005 has been the subject of robust public discussion.
2006
2007 Some commentators have expressed concern that the Yeti DNS Project
2008 is, in effect, operating an alternate root, challenging the IAB's
2009 comments published in [RFC2826]. Other such alternate roots are
2010 considered to have caused end-user confusion and instability in the
2011 namespace of the DNS by the introduction of new top-level labels or
2012 the different use of top-level labels present in the Root Server
2013 system. The coordinators of the Yeti DNS Project do not consider the
2014 Yeti DNS Project to be an alternate root in this sense, since by
2015 design the namespace enabled by the Yeti-Root zone is identical to
2016 that of the Root Zone.
2017
2018 Some commentators have expressed concern that the Yeti DNS Project
2019 seeks to influence or subvert administrative policy relating to the
2020 Root Server system, in particular in the use of DNSSEC trust anchors
2021 not published by the IANA and the use of Yeti-Root servers in regions
2022 where governments or other organizations have expressed interest in
operating a Root Server.  The coordinators of the Yeti DNS Project
observe that their mandate is entirely technical and that they have
no ambition to influence policy directly; they do hope, however,
that technical findings from the Yeti DNS Project might act as a
useful resource for the wider technical community.
2028
2029
2030
2031
2032 Song, et al. Informational [Page 37]
2033 RFC 8483 Yeti DNS Testbed October 2018
2034
2035
2036 Acknowledgments
2037
2038 Firstly, the authors would like to acknowledge the contributions from
2039 the people who were involved in the implementation and operation of
2040 the Yeti DNS by donating their time and resources. They are:
2041
2042 Tomohiro Ishihara, Antonio Prado, Stephane Bortzmeyer, Mickael
2043 Jouanne, Pierre Beyssac, Joao Damas, Pavel Khramtsov, Dmitry
2044 Burkov, Dima Burkov, Kovalenko Dmitry, Otmar Lendl, Praveen Misra,
2045 Carsten Strotmann, Edwin Gomez, Daniel Stirnimann, Andreas
2046 Schulze, Remi Gacogne, Guillaume de Lafond, Yves Bovard, Hugo
2047 Salgado, Kees Monshouwer, Li Zhen, Daobiao Gong, Andreas Schulze,
2048 James Cloos, and Runxia Wan.
2049
2050 Thanks to all people who gave important advice and comments to Yeti,
2051 either in face-to-face meetings or virtually via phone or mailing
2052 list. Some of the individuals are as follows:
2053
2054 Wu Hequan, Zhou Hongren, Cheng Yunqing, Xia Chongfeng, Tang
2055 Xiongyan, Li Yuxiao, Feng Ming, Zhang Tongxu, Duan Xiaodong, Wang
2056 Yang, Wang JiYe, Wang Lei, Zhao Zhifeng, Chen Wei, Wang Wei, Wang
2057 Jilong, Du Yuejing, Tan XiaoSheng, Chen Shangyi, Huang Chenqing,
2058 Ma Yan, Li Xing, Cui Yong, Bi Jun, Duan Haixing, Marc Blanchet,
2059 Andrew Sullivan, Suzanne Wolf, Terry Manderson, Geoff Huston, Jaap
2060 Akkerhuis, Kaveh Ranjbar, Jun Murai, Paul Wilson, and Kilnam
2061 Chonm.
2062
2063 The authors also acknowledge the assistance of the Independent
2064 Submissions Editorial Board, and of the following reviewers whose
2065 opinions helped improve the clarity of this document:
2066
2067 Joe Abley, Paul Mockapetris, and Subramanian Moonesamy.
2068
2069
2070
2071
2072
2073
2074
2075
2076
2077
2078
2079
2080
2081
2082
2083
2084
2085
2086
2087 Song, et al. Informational [Page 38]
2088 RFC 8483 Yeti DNS Testbed October 2018
2089
2090
2091 Authors' Addresses
2092
2093 Linjian Song (editor)
2094 Beijing Internet Institute
2095 2nd Floor, Building 5, No.58 Jing Hai Wu Lu, BDA
2096 Beijing 100176
2097 China
2098 Email: songlinjian@gmail.com
2099 URI: http://www.biigroup.com/
2100
2101
2102 Dong Liu
2103 Beijing Internet Institute
2104 2nd Floor, Building 5, No.58 Jing Hai Wu Lu, BDA
2105 Beijing 100176
2106 China
2107 Email: dliu@biigroup.com
2108 URI: http://www.biigroup.com/
2109
2110
2111 Paul Vixie
2112 TISF
2113 11400 La Honda Road
2114 Woodside, California 94062
2115 United States of America
2116 Email: vixie@tisf.net
2117 URI: http://www.redbarn.org/
2118
2119
2120 Akira Kato
2121 Keio University/WIDE Project
2122 Graduate School of Media Design, 4-1-1 Hiyoshi, Kohoku
2123 Yokohama 223-8526
2124 Japan
2125 Email: kato@wide.ad.jp
2126 URI: http://www.kmd.keio.ac.jp/
2127
2128
2129 Shane Kerr
2130 Antoon Coolenlaan 41
2131 Uithoorn 1422 GN
2132 The Netherlands
2133 Email: shane@time-travellers.org
2134
2135
2136
2137
2138
2139
2140
2141
2142 Song, et al. Informational [Page 39]
2143