“While SOPA may be dead (for now) in the U.S., lobby groups are likely to intensify their efforts to export SOPA-like rules to other countries,” says Michael Geist in a blog post today.
Geist writes: “With Bill C-11 back on the legislative agenda at the end of the month, Canada will be a prime target for SOPA style rules. In fact, a close review of the unpublished submissions to the Bill C-32 legislative committee reveals that several groups have laid the groundwork to add SOPA-like rules into Bill C-11, including blocking websites and expanding the ‘enabler provision’ to target a wider range of websites. Given the reaction to SOPA in the U.S., where millions contacted their elected representatives to object to rules that threatened their Internet and digital rights, the political risks inherent in embracing SOPA-like rules are significant.”
This will be an interesting one to watch. You would think they would have learned their lessons from what happened in the USA last week.
The issue is “simple”.
1) The Internet legacy use is under an internationally coordinated attack by powerful interests (so powerful that they most probably erode our respective civilizations: the financially dominant (not to be confused with the rich) vs. the rest of the world). This could be the premise of a WWIII. Our job, since we are the ones under attack, is to cooperate in a counter-war (to prevent that WWIII) and to provide people not only with the fences but also with the fortifications they need (this is what the world unanimously implied in calling for a people-centered and, therefore, people-protected, information society in the Geneva WSIS declaration).
2) Fortifications mean architecture. The Internet legacy architecture is incomplete. We all know that, and we talk about its lack of built-in security. This results, in particular, from the lack of a presentation layer. That lack produces the negative consequences we are living with, and an even greater lack of positive consequences we are not living with: the incremental, disruptive, and even fundamental innovation, and the resulting capacities and services, that we miss.
However, we were able to publish RFC 5890 to 5895 and to address the IDNA2008 issue, at least on the network side. This is astonishing, because the support of linguistic diversity is the presentation layer’s job. It means that the thinking carried out over IDNA2008 has actually, de facto, introduced a “virtual” presentation layer (virtual because it is not documented and coded as such), and that we have used at least one of its occurrences.
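To make the network-side half of this concrete, here is a minimal sketch, assuming Python and the third-party “idna” package (neither is mentioned above), of the step IDNA2008 does standardize: turning a Unicode U-label into the ASCII A-label that the legacy DNS actually carries. Whatever happens to the string before that call is exactly what is left to the “virtual” presentation layer.

```python
# A minimal sketch, assuming the third-party "idna" package
# (pip install idna), which implements IDNA2008 (RFC 5890-5893).
import idna

# Network side: a Unicode U-label is encoded into the ASCII A-label
# ("xn--...") that the legacy, ASCII-only DNS transports.
u_label = "bücher"                      # hypothetical example label
a_label = idna.encode(u_label)          # b'xn--bcher-kva'
print(a_label)

# And back: the A-label decodes to the original U-label.
print(idna.decode(a_label))             # bücher

# idna.encode() deliberately rejects forms such as "BÜCHER"; deciding
# how to map them (lowercasing, width folding, etc.) is left outside
# the protocol, i.e. to the user side discussed below.
```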
I observed the IDNA2008 consensus surprise first-hand, because I (not alone, but in a minority) deeply opposed the “industry” people (in this area, gathered in the Unicode Consortium) who favored IETF control of people’s behavior in order to best protect Internet stability. All of a sudden I was able to join the consensus because of the text now published as RFC 5895, which had in fact obtained only minority support. IDNA2008 does not address everything we/ICANN need in order to use IDNs efficiently and surely (as shown by the questions raised by the AD and now being considered by the IAB). However, it does address the Internet-side issues, and it introduced the principle of subsidiarity into the Internet architecture in order to address the rest. This is why I became a strong supporter of IDNA2008, explaining to the IESG that they had to publish the IDNA2008 RFCs ASAP (which they did), but that as the Chair of Projet.FRA (using the “.fra” namespace as a taxonomy for an open networked francophone ontology) I needed, as every other language does, orthotypographic (script syntax) support [there is a need for majuscule metadata, because majuscules impact the semantic meaning of terms], and I explained how we would obtain it.
The way to obtain it is to plainly deploy the presentation layer by subsidiarity, i.e. instead of considering presentation as layer 6, to consider it as the undocumented virtual layer it now is in the Internet architecture, and to support it by additional network layers on top of layer 7. This is a major change and improvement over OSI. This is where subsidiarity applies: in placing (as per RFC 1958) these additional layers at the fringe, i.e. as an OPES between the ISP and the user, or as an Intelligent Use Interface on the user’s machine (as Plugged Layers on the User Side, PLUS). In both cases, this results in a fringe-to-fringe Internet+ (a minimal sketch follows the two points below):
a) including the Internet legacy layers as its stable, time-proven, universally deployed core. In the case of the DNS, this is the regular ASCII DNS.
b) adding a smart intelligence atop the user end. “Intelligence” means the capacity to add extended network services, including security, encryption, authentication, etc. “Smart” means that the center of each person’s individual global network (IGNet) is no longer the Internet but the person themselves. The Internet legacy layers are their common (relational-space based) subsidiary. The Internet legacy is the commodity with which everyone can design, use, and manage his/her own IGNet.
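As an illustration of what such a fringe layer might look like, here is a minimal Python sketch; the function names, the local alias table, and the use of socket.getaddrinfo as the entry point into the legacy core are all hypothetical choices of mine, not anything specified above.

```python
# A hypothetical "Plugged Layer on the User Side" (PLUS): a local
# presentation step applied at the fringe, before the name is handed
# to the untouched Internet legacy core.
import socket
import unicodedata

# Hypothetical per-user / per-community presentation rules.
LOCAL_ALIASES = {"exemple.fra": "example.com"}   # illustrative only

def plus_map(name: str) -> str:
    """User-side presentation: normalize, lowercase, and apply local
    conventions. Nothing here touches the legacy protocols."""
    name = unicodedata.normalize("NFC", name).lower()
    return LOCAL_ALIASES.get(name, name)

def plus_resolve(name: str):
    """Fringe-to-fringe use: local mapping first, then the stable,
    universally deployed legacy resolver as the subsidiary core."""
    return socket.getaddrinfo(plus_map(name), 80)

if __name__ == "__main__":
    print(plus_resolve("Exemple.FRA")[0][4])   # resolved via the legacy core
```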
The problem is that the IETF mission is to make the “end-to-end Internet” work better. RFC 1958 says that everything else is to be handled at the fringe, and the fringe does not belong to the IETF scope. This is clearly emphasized in RFC 5895, which documents the over-the-end issues (how a user can proceed to interface with the Internet side of IDNA2008) and qualifies such a documentation attempt as “unusual”. This is why the IESG refused to publish it as a WG/IDNAbis RFC, but accepted it as an individual (P. Hoffman and P. Resnick) submission and as an Informational RFC. This is a clear signal, since the IDNA2008 consensus, which was published as standards-track RFCs, only resulted from it.
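For readers who have not opened RFC 5895, the user-side mapping it describes is roughly of the following shape. The sketch below is a simplified Python approximation using only the standard library; the actual RFC also covers fullwidth/halfwidth forms and joiner characters.

```python
# A simplified approximation of the RFC 5895 user-side ("local")
# mapping: what happens to a typed name BEFORE the IDNA2008 protocol
# side ever sees it. Standard library only; not the complete procedure.
import unicodedata

def local_map(user_input: str) -> str:
    # RFC 5895, step 1: map uppercase characters to lowercase.
    mapped = user_input.lower()
    # RFC 5895, final step: normalize the result to Unicode NFC.
    return unicodedata.normalize("NFC", mapped)

# The user types a form that IDNA2008 itself would reject outright;
# the local mapping turns it into a registrable U-label.
print(local_map("BÜCHER"))   # -> bücher
```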
I therefore informed the IESG that I would appeal the IDNA2008 RFCs, which I did, in order to clarify who was to document the Internet+ scaling Intelligent Use extension. There were four possibilities (outside a political impeachment through the GAC):
a) the IETF extending its charter: I facilitate the IUCG (Internet/IETF users contributing group) mailing list, which could have been used in the interim,
b) ICANN, as Vint Cerf, Chair of the WG/IDNAbis, suggested for user-side issues,
c) the industry: this probably meant Google, because others seemed less directly committed,
d) an IUTF to emerge, oriented towards an Intelligent Use of the whole digital ecosystem (and therefore not towards the digital convergence itself, i.e. the Internet and the other converging technologies).
The responses to the appeals showed that the IESG, and then the IAB, felt concerned but not responsible. Then Google launched Google+, showing that the concept was industrially workable. ICANN argued with the GAC over its New gTLD Program without considering the pending evolution, and eventually published its report on “Variants”, showing its interest in documenting users’ needs, but not in working out Internet-Use-oriented specifications and documentation.
This means that the counter-war today, to protect us against an anti-innovation war imposed on us by “status quo” forces (which the IAB documented in RFC 3869 as the commercial “bias”), consists in exploring the Internet+:
- its nature, abilities, and possible architectural framework,
- its capacity of resilience to risks, such as those posed by SOPA/PIPA-style pressure.
Spam, viruses, DoT, and SOPA/PIPA are “high-Internet” dangers that the presentation and companion additional layers are supposed to protect our cybships from.
This is why I am working on an Internet+ architectural framework IETF Draft (http://www.ietf.org/id/draft-iucg-internet-plus-05.txt) that can be discussed on the IUCG (http://iucg.org) and IUTF (http://iutf.org) mailing lists. Once a fringe-to-fringe Internet+ architectural framework documentation stabilizes, experimentation should urgently begin (most of the necessary components already exist for the “InterPlus” prototyping we can work out) and the “adminance” (network administration stewardship) support structures should be established.
The real issue is the transition to an Internet+ world, because the unleashed power of the Internet legacy, if correctly used in an open subsidiarity context, is probably tremendous. This is exciting but, in refusing to see and accept it, ICANN (which nevertheless documented parts of this framework, and the need to experiment with it, in its 2001 ICP-3 document) has dramatically confused the issues (the Fast Track and the New gTLD Program) and delayed it. The pressure is now such that there may be some crashes in Internet Governance, one of which may concern the ICANN version of the “DNS” (as defined in the Affirmation of Commitments). A way to reduce the risk would be for ICANN to enter into such Affirmations of Commitments with every Member of the GAC and to restrain their ambitions to the limits imposed on them by the technology. ICANN operates in the ICANN/NTIA CLASS (“IN”). The Internet legacy provides 65,535 other CLASSes, including 255 reserved for private projects.
This means that an Internet legacy with up to 256 legitimate “.com” registries (one in the “IN” CLASS plus one in each of the 255 private CLASSes) is the norm today. ICANN did not emphasize this in its New gTLD Program. I documented it as one of the last entries in their public comment area, so that their lawyers can claim that ICANN never attempted to hide that fact.
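To make the CLASS arithmetic above easy to check, here is a small Python sketch (standard library only; the numeric ranges come from the DNS specifications, RFC 1035 and RFC 6895, not from anything stated above) that counts the 16-bit CLASS space and encodes a DNS question whose QCLASS is a private-use value.

```python
# The DNS CLASS field is 16 bits wide, so the legacy protocol allows
# 2**16 = 65,536 CLASS values. "IN" (1) is the CLASS in which
# ICANN/NTIA operate; 0xFF00-0xFFFE is reserved for private use.
import struct

total_classes   = 2 ** 16                  # 65,536
other_than_in   = total_classes - 1        # 65,535 CLASSes besides IN
private_classes = 0xFFFE - 0xFF00 + 1      # 255 private-use CLASSes
print(other_than_in, private_classes)

def dns_question(name: str, qtype: int, qclass: int) -> bytes:
    """Encode one DNS question entry (RFC 1035, section 4.1.2)."""
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"
    return qname + struct.pack("!HH", qtype, qclass)

# A hypothetical query for "com" in a private-use CLASS: same wire
# format, same name, simply a QCLASS other than IN.
print(dns_question("com", 1, 0xFF00).hex())
```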