[NCAP-Discuss] Current Status of the NCAP Project

James Galvin galvin at elistx.com
Sun Nov 13 01:27:16 UTC 2022


A few comments inline.

On 8 Nov 2022, at 15:13, Jeff Schmidt via NCAP-Discuss wrote:

> Hi, Anne:
>
> 2012 procedures provably accomplished every objective you listed and with less cost and less risk than the proposed procedures.

You can’t evaluate cost until you make implementation choices.  Part of an implementation choice is apportioning the responsibility, and thus the cost, among an appropriate set of parties, which raises the question: are you talking about overall cost or the cost to one particular party?  In the 2012 Round the cost of Controlled Interruption was borne entirely by the Applicant.  Speaking personally, as a registry service provider I have some idea of the actual cost of controlled interruption.  However, I’m not aware of any survey of registry operators that reviews this cost and provides empirical data for comparison.  Is the cost you’re referring to “real” or “theoretical”?

As far as risk is concerned, it would help if you could be specific as to how the proposal being finalized increases risk.  In my opinion it is expressly designed to reduce risk so I really do not understand your assessment.

>
>> Do you have a better system for accomplishing the
>> goals in 1, 2, and 3 above?    In other words, in your
>> dissenting opinion that you will write, what is your
>> solution?
>
> (1) Identify “Black Swans” – we did this in 2012 via the “TRT” (which in 2012 was Interisle and JAS). They were hired as technical experts and reviewed all the technical metrics discussed in the NCAP (*and more*), and based on the data and their expert opinions identified corp, home, and mail as “Black Swan” strings. The remaining strings were deemed safe to delegate, following a CI period. The ensuing decade has proven this to be correct. No change necessary or justified.

I’ll be the first to say that you and others did an excellent job in 2012.  The analysis done by the DG confirms this and, as has been said before, did not identify any reason to “conjure up something different”.

Fundamentally, there are technical gaps in the controlled interruption of the 2012 Round that make it substantially less effective in today’s Internet. Thus it’s important to evolve it in a couple of specific ways so that it works better in the Internet that has evolved over the past decade.

Speaking personally, in principle there has been no change, nor is any justified.  In practice, however, changes are required to meet the needs of today’s Internet and to allow for further change as the Internet evolves, for example:

1. Adding support for IPv6 (the sketch after this list illustrates the gap).

2. Adding PCA as a minimally disruptive and less risky method to identify high risk strings.

3. Improving the notification mechanism because the previous mechanism failed.
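
To make the IPv6 gap in item 1 concrete, here is a minimal sketch of my own (not text from the proposal): the 2012 controlled interruption signal was the IPv4 address 127.0.53.53 returned in A records, so a client that only queries AAAA records never sees the signal.  The test name is a placeholder and the script assumes the dnspython library.

    # Minimal sketch: report, per record type, whether a client querying that
    # type would see the 2012-style controlled interruption signal.
    import dns.resolver

    CI_V4_SIGNAL = "127.0.53.53"   # the 2012 controlled interruption address

    def ci_visibility(name: str) -> dict:
        """Report whether the 2012-style signal is visible per record type."""
        report = {}
        for rdtype in ("A", "AAAA"):
            try:
                answers = [rr.to_text() for rr in dns.resolver.resolve(name, rdtype)]
            except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
                answers = []
            report[rdtype] = {
                "answers": answers,
                "ci_signal_seen": CI_V4_SIGNAL in answers,  # only ever true for A in 2012
            }
        return report

    # Example (placeholder name): an IPv4 client sees the signal in the A answer,
    # while an IPv6-only client querying AAAA sees nothing, which is the gap that
    # "adding support for IPv6" is meant to close.
    # print(ci_visibility("wpad.some-applied-for-string."))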


>
>> PCA is a less disruptive method
>
> There is nothing about PCA that is “less disruptive” or in any way “safer.” That is false marketing. It is an unknown, untested, and non-standards-compliant procedure and we have no idea what will happen should we actually do it. Except we know one thing will happen: unknown data (potentially sensitive)  *will* be sent over the Internet *because of ACA*. Yes, 2012 Controlled Interruption *was* at the time also unknown and untested – but it’s not anymore. We have 1000+ strings and a decade of experience. No one has established a justification to make a risky change.

I believe we agree that ACA and controlled interruption are equivalently risky and equivalently disruptive.

1. They are risky because we know (in absolute terms we *KNOW*) that there is no way to predict or identify collisions in advance of causing them, i.e., you have to delegate into the root zone to see “harm” or “impact”.

2. They are disruptive because delegating into the root zone and returning a DNS response other than NXDOMAIN is a fundamental protocol change, i.e., it manifests an actual collision that will be experienced in some unknown way by a user or client (see the sketch after this list).
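
To illustrate what item 2 means for a client, here is a small sketch of my own, assuming a stub client doing an A lookup with dnspython and a placeholder name: before delegation the query fails with NXDOMAIN, exactly as a private installation expects; once the string is delegated and a non-NXDOMAIN response comes back, the collision is manifested in whatever way the client software happens to react.

    # Minimal sketch: classify what a stub client observes for a privately
    # used name, before versus after delegation of the applied-for string.
    import dns.resolver

    def client_view(name: str) -> str:
        try:
            answers = dns.resolver.resolve(name, "A")
        except dns.resolver.NXDOMAIN:
            return "NXDOMAIN: the same behavior the client saw before delegation"
        except dns.resolver.NoAnswer:
            return "delegated, but no A record returned for this name"
        first = next(iter(answers)).to_text()
        return f"non-NXDOMAIN answer ({first}): the collision is manifested at the client"

    # print(client_view("mail.corp."))   # placeholder name for illustration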

With that agreement in mind, it would help if you could explain how the following fails to be both “less disruptive” and “safer” than controlled interruption and ACA.

1. Although PCA does delegate the TLD into the root zone, in technical terms we know it is less disruptive to a user or client because the user/client will still receive an NXDOMAIN, at the cost of one extra DNS query-response round trip (see the sketch after this list).

2. DNS technical experts have confirmed, based on how the DNS is expected to work and on their combined decades of experience, that the risk of causing harm or impact is substantially reduced to an exceptional situation, i.e., it should never happen, but of course this is the Internet and you never know what people might do, especially within a private enterprise whose usage might some day leak.
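
For concreteness, here is a toy sketch of the behavior item 1 describes as I understand it; it is not the PCA specification.  It stands up an authoritative responder that answers NXDOMAIN to every query, which is what the PCA name servers would do for names under the delegated string, so a resolver following the root referral still delivers NXDOMAIN to the client, just one referral later.  It assumes dnspython, listens on port 5353 so it can run unprivileged, and omits things a real deployment would need (an SOA in the authority section for negative caching, TCP, EDNS, and so on).

    # Toy NXDOMAIN-only responder: every query received is answered NXDOMAIN.
    import socket
    import dns.exception
    import dns.message
    import dns.rcode

    def serve(host: str = "127.0.0.1", port: int = 5353) -> None:
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind((host, port))
        while True:
            wire, client = sock.recvfrom(4096)
            try:
                query = dns.message.from_wire(wire)
            except dns.exception.DNSException:
                continue                      # ignore malformed packets
            response = dns.message.make_response(query)
            response.set_rcode(dns.rcode.NXDOMAIN)
            sock.sendto(response.to_wire(), client)

    # Try it with: dig @127.0.0.1 -p 5353 anything.example
    # if __name__ == "__main__":
    #     serve()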

Any insight you could provide would be most helpful.

>
> Everything about PCA and ACA violates the basic principles of conservatism.

Please explain how PCA and ACA violate conservatism.  Since PCA and ACA are simply evolutions of controlled interruption, they are as conservative as controlled interruption.  The evolutionary steps being proposed are required by today’s environment, i.e., controlled interruption alone is insufficient for the needs of today’s Internet and for the risk management needed tomorrow.


> The importance of conservatism when adding to the root zone is reflected in Recommendations 26.2 and 26.3, which request ICANN honor the principle of conservatism specifically wrt limiting the rate of change of the root zone. There is simply no justification to risk *doubling* root zone change events and monkeying with bulk TLD honeypots. We very specifically considered and decided against this in 2012 for that reason. Juice not worth the squeeze. No change necessary or justified.

Your criticism of “doubling” root zone changes is accurate for a particular implementation choice, which is why, if I were doing it, I certainly wouldn’t do it that way.  It is important to separate the requirements of the principles being proposed from the actual implementation choice.  Would you be willing to provide some implementation notes for the document to ensure that an efficient implementation choice is made?

I don’t understand the phrase “monkeying with bulk TLD honeypots”.  What is the technical issue that concerns you?



>
>> it lets applicants "off the hook"
>
> This is a procedural issue not a technical one. Applicants may be let “off the hook” at any time ICANN permits. Clarity here over 2012 procedures would indeed be an improvement, but again that is procedural not technical.
>
>> We need a system in place that identifies the High Risk strings
>
> See #1. We certainly had this in 2012. The 2012 “TRT” reviewed the data and made expert determinations based on all available data. Time has proven them correct. No change necessary or justified.

And the analysis done by the DG confirms everything done in 2012 while observing that it will not work unchanged today or in the future.  Hence we need some adjustments: not a replacement, nor an attempt to “conjure up something different”, just a couple of minor evolutionary changes.


>
>> Applicants need to be able to propose mitigation
>
> Agree. And there is no way to handle this other than as a case-by-case. The right way to do this is similar to ICANN’s existing RSTEP program – where a Registry proposes a Registry Service change and technical experts review it and make an expert determination on a case-by-case basis. In the collisions case, the TRT would review the data and the proposed mitigation and make a determination. There is no magic. Limited TLD honeypots may come into play here – which is appropriate. A few TLD honeypots for specific strings are justifiable. Thousands of bulk TLD honeypots for every string are not.

Agreed.  Mitigation is intended to be covered by Study 3.



>
>> All of the above is actually different from the 2012 round
>
> No. It’s all the same as the 2012 round, except for the addition of the unnecessarily complex, expensive, risky, and reckless root delegations and TLD honeypots.

This is unnecessarily pejorative and not the least bit constructive.  Please be specific in your technical concerns so we can work together to resolve them.


>
> This group tends to underappreciate what was done in 2012.

In what way?  Please be specific.  Speaking personally, we have completely adopted what was done in 2012 and made some minor adjustments as required by the Internet of today.


> I was in the proverbial room where it happened. While the issue of collisions surfaced late in the overall process, once the issue was recognized it was dealt with very deliberately. The resulting 2012 procedures were well-reasoned, extensively vetted, discussed publicly in many fora, subject to multiple ICANN public comment rounds, discussed at multiple conferences/events, DNS-OARC, and even a Verisign special-purpose event in London. 2012 procedures were lab tested in advance to the greatest degree possible, previewed and discussed with vendors, and in general extremely, extremely, extremely conservative. And 2012 procedures now have the benefit of a decade of operational experience and vigorous peer review.

And this is why we have adopted the work and seek only to make the minimal evolutionary changes necessary to meet the requirements of today’s Internet.


> ACA/PCA is nothing more than old ideas previously rejected now being rekindled by a very few folks hoping real hard that magic will happen.

This is unnecessarily pejorative and not the least bit constructive.  Please be specific in your technical concerns so we can work together to resolve them.



>
>> It's designed to give the Board the required info before a contract is awarded.
>
> The Board wants experts to make a recommendation. That’s how Boards work. That’s exactly what happened in 2012 and what must happen moving forward. Everything else is just unnecessary complexity, expense, and risk. No change necessary or justified.

We agree that, in the large, no change is necessary or justified.  The Board will continue to depend on experts to make a recommendation.  All we are doing is evolving the way in which the data is collected to better manage the risk and meet the needs of today’s Internet.


>
>> dissenting opinion that you will write, what is your solution?
>
> See above. SubPro – in no uncertain terms – cleared us to do what we did in 2012 again. We should take them up on it.

And that would technically be the wrong thing today.  We know empirically that the Internet is different today than it was in 2012, and we know empirically that controlled interruption as prescribed in 2012 is insufficient to meet the needs of today’s Internet.  Controlled interruption must evolve, just a bit, in order to remain the effective tool it was in the 2012 Round.


>
> NCAP has become a self-justifying research project that never ends. The DoD calls dynamics like NCAP a “self-licking ice cream cone” – a system that has no purpose other than to sustain itself.

This is unnecessarily pejorative and not the least bit constructive.  Please be specific about your technical concerns so we can work together to resolve them.


>
> When do we start Study 3?

As soon as we finish Study 2.


Jim


>
> Jeff


