[NCAP-Discuss] [Ext] Re: An Approach to Measuring Name Collisions Using Online Advertisement

Jeff Schmidt jschmidt at jasadvisors.com
Sat Jun 11 17:45:27 UTC 2022


> My research thus far has generated as many questions as answers,
> and there is much more to uncover.

Right, and so we come full circle.

“More Research”

“More Data”

When will we be done? How will we know when we’ve gotten there?

The greatest failing of this group is that we refuse to define success or acceptable risk thresholds. The best way to ensure that we never reach the finish line is to leave it undefined.

Risk will never be zero. The metrics Matt T presented this week will never be zero and have never been zero for any TLD (generic or cc, ASCII or IDN) delegated in modern times. Hiring people specifically to go find problems following an Internet-scale change will always result in something.

I look back on 2012 and find it difficult to identify material improvements. Most of the world outside of a handful of NCAP participants agrees. Others may come to different conclusions – and that’s fine – but we *must set success parameters.*

[ This is more directed at the Chairs – who should set direction for the group – than Casey as a contracted researcher ]

Jeff



From: Casey Deccio <casey at deccio.net>
Date: Friday, June 10, 2022 at 6:20 PM
To: Jeff Schmidt <jschmidt at jasadvisors.com>
Cc: Aikman-Scalese, Anne <AAikman at lewisroca.com>, Steve Sheng <steve.sheng at icann.org>, NCAP Discussion Group <ncap-discuss at icann.org>
Subject: Re: [NCAP-Discuss] [Ext] Re: An Approach to Measuring Name Collisions Using Online Advertisement
Just jumping in to add a few comments to the discussion:


On Jun 9, 2022, at 7:39 AM, Jeff Schmidt via NCAP-Discuss <ncap-discuss at icann.org> wrote:

Another problem we are trying to solve is the bad experience of some organizations as documented in Casey’s research.

If you are suggesting that the handful of alleged (*) minor technical hiccups mentioned in Casey’s report are showstoppers, then by the same logic ICANN should never delegate another TLD (cc or generic), should certainly never roll a KSK, should never allow a root server operator to renumber, and needs to take a hard look at those pesky IDNs. Perhaps most importantly, it should kill DNSSEC, which has shown an amazing propensity to break things (including large TLDs) and cause widespread “bad experiences.” DNSSEC has broken more things in the past 2 years than TLD collisions have in the past 20.

My point is this: change is never zero risk. Particularly on the Internet, the ability to tolerate change is one of the things that makes it wonderful and allows innovation. It’s a balance. My position – and I believe the position of most in the broader audience – is that the 2012 procedures struck a sound balance: significant change with extremely limited operational issues. Zero risk is an untenable position.

(*) “alleged” because these are self-reported based on the memories of a handful of people. We don’t know what really happened. It should be instructive to us that literally no one else is talking about collisions; if it were a real problem, IETF, NANOG, et al., would be discussing it. IETF – the organization with the most ability to directly help by clearly creating 1918-like DNS namespaces – refuses to even take it up. No one cares. It’s not an issue.

This is certainly one manner of thinking about the problem, but I do not agree with the idea that "we didn't hear about it in the way that we might have expected, so it must not have been a problem."  That dismissal is convenient but not data-based.  It is true that there are limits on what can be definitively learned, including the memories of involved individuals, willing participants, available data, etc.  However, there are clear indicators--both in the ICANN reports and the survey data--that there were problems.  And the historical mapping and root server query data showed that potential name collision issues are widespread.  My research thus far has generated as many questions as answers, and there is much more to uncover.


If you want the Board to lock up and defer going forward on the next round, your approach makes sense.

Quite the opposite. Some of the worst ideas being advocated within NCAP will lead to quagmire and controversy by punting string-by-string decisions to the ICANN Board for consideration with little to no evaluation criteria. This must not be done. Instead, we should re-affirm the methodologies successfully applied in the previous round and make surgical improvements to those processes.

I worry about calling the previous round "successful."  How is success defined?  Is it related to the number of name collision reports?  Or is it related to the presence or absence of uproar in IETF or *NOG?  Or is it that the Internet didn't stop?  Some of these are very subjective, and some are fraught with bias, for various reasons (e.g., the text on the name collisions submission site itself).  There are certainly advantages to the approach used in the previous round, but suggesting that it was successful--at least without any definitive metrics on which to base that subjective description--is counter-productive.

Finally, with regard to the use of "science experiment" as a description, I agree. However, that's exactly what controlled interruption has been!  I don't disagree with improvements over time, but I believe that the data needs to be analyzed.  That means that we can't simply try something and then call it successful--or alternatively call it unsuccessful and replace it--unless we have meaningful data to analyze.  We have *some* of that now, but I believe that there is more to be done to assess previous-round delegations, including controlled interruption.

Casey

