An article written by Paul Wilson, Director General of Asia Pacific Network Information Centre (APNIC), and Geoff Huston, Senior Internet Research Scientist at APNIC.
In recent months proposals have been made for the introduction of competition into the system of allocation of IP addresses. In particular, calls have been made for new IP address registries to be established which would compete with the existing Regional Internet Registries (RIRs). Specific proposals have been made by Houlin Zhao of the ITU-T (see ITU and Internet Governance [DOC]) and by Milton Mueller of the Internet Governance Project (see What to Do About ICANN: A Proposal for Structural Reform [PDF]), both of which propose that the ITU itself could establish such a registry group, operating as a collection of national registries.
At the time of writing this article both these documents represent current proposals that have been published as part of the broader program of work associated with Phase II of the World Summit on the Information Society.
It would appear that part of the rationale for these proposals lies in the expectation that the introduction of competition would naturally lead to “better” or “more efficient” services in the address distribution function. This article is a commentary on this expectation, looking at the relationship between a competitive supply framework and the role of address distribution, and offering some perspective on the potential outcomes that may be associated with such a scenario for IP addresses, or indeed for network addresses in general.
The Invisible Hand
“...every individual necessarily labors to render the annual revenue of the society as great as he can. He generally, indeed, neither intends to promote the public interest, nor knows how much he is promoting it. By preferring the support of domestic to that of foreign industry, he intends only his own security; and by directing that industry in such a manner as its produce may be of the greatest value, he intends only his own gain, and he is in this, as in many other cases, led by an invisible hand to promote an end which was no part of his intention. Nor is it always the worse for the society that it was no part of it. By pursuing his own interest he frequently promotes that of the society more effectually than when he really intends to promote it.”
[“An Inquiry into the Nature and Causes of the Wealth of Nations”, Adam Smith, 1776]
These days the expression of the “invisible hand” is often associated with open markets, where an outcome of the matching of the demands of consumers and the capability of producers is achieved through the operation of market forces as expressed through pricing signals. An excess of demand creates competitive interest between consumers, who are willing to pay a scarcity premium in order to obtain the commodity, which in turn lifts production revenues well above production costs and incents other entrepreneurs to enter the market to satisfy this unmet demand. An excess of supply forces providers into competition for consumers, and prices are reduced. This price drop, in turn, draws more consumers into the market, and demand levels lift. The equilibrium point of this market dynamic is where production volume equals demand levels and the market price for the commodity settles at the marginal cost of production as set by the most efficient provider. There are no explicit agreements between the various actors in the market, nor any form of orchestration or deliberate coordination of activity. Each actor, whether a consumer or a producer, does not intentionally strive to achieve this state of equilibration at a point of maximal efficiency, nor are they even aware of any such common intention. For this reason the market forces at work were termed “invisible”.
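This price-adjustment dynamic can be illustrated with a small simulation sketch. The linear demand and supply curves and the adjustment rate below are purely illustrative assumptions, chosen only to show the price settling at the point where supply meets demand:

```python
# Tatonnement sketch: the price moves in proportion to excess demand
# until supply meets demand. Curves and adjustment rate are
# illustrative assumptions only.

def equilibrium_price(demand, supply, price=1.0, rate=0.01, tol=1e-6):
    """Nudge the price toward the market-clearing point."""
    for _ in range(1_000_000):
        excess = demand(price) - supply(price)
        if abs(excess) < tol:
            break
        # Excess demand bids the price up; excess supply pushes it down.
        price += rate * excess
    return price

demand = lambda p: 100 - 2 * p   # consumers buy less as the price rises
supply = lambda p: 10 + 4 * p    # producers supply more as the price rises

p_star = equilibrium_price(demand, supply)
# The clearing price solves 100 - 2p = 10 + 4p, i.e. p = 15.
```

No actor in the loop "knows" the equilibrium; it emerges from each side reacting only to the current price, which is the point of Smith's metaphor.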
Adam Smith assumed that actors strive to maximize self-interest, so that consumers choose the lowest price that can meet their demands, and entrepreneurs choose the highest rate of profit. He asserted that by making their excess or insufficient demand known through market prices, consumers effectively directed entrepreneurs’ investment attention to the most profitable industry. This was the industry producing the goods most highly valued by consumers, so that economic well-being was increased thereby. He asserted that a compelling attribute of a market-based economy was that it forced each player to think about what other players want and value, and strive for the most efficient means of meeting those desires.
The implicit goal with these proposals for competition in the address distribution function appears to be that such measures are intended to provide the Internet Service Provider with more efficient or easier access to larger quantities of address space, and the consumer with a more efficient, capable and presumably cheaper Internet service. There are two assumptions being made in these proposals that need to be examined: firstly, that “better service” as a result of such measures is objectively defined and desired by all stakeholders; and secondly, that apparent barriers to access to the efficient operation of address distribution in the existing RIR system (i.e. “bad service” according to the first assumption) are a result of structural inefficiencies that only the discipline of competitive supply channels will rectify.
Under the current RIR system, access to address space is governed by policies which must necessarily pose a barrier to unfettered or unconstrained resource distribution. Internet resource management is not a ‘free-for-all’ without any form of constraint. The policies that constrain resource allocations are intended to ensure that the resources are readily available now, and in an anticipated future, to meet demonstrated needs, and the policies also describe how that “demonstrated need” is to be documented and assessed.
To determine whether a claimed need for address space is genuine is a non-trivial exercise, necessarily involving the collection of detailed information from an applicant, and the technical analysis of that information. This analytical activity is the primary challenge of the Internet address registry in performing its function, particularly when the policies under which it is performed are subject to constant change, and when consistency of analysis (corresponding to a fair and objective approach) is to be maintained as an overall objective of the system. By contrast, the actual selection and registration of a specific address block for an approved allocation is secondary, and a relatively mechanical and trivial component of the process.
The explicit goal of the RIR system is to support a distribution system for a finite pool of addresses that is objective, fair and equitable, while avoiding some of the pitfalls associated with various forms of excessive wastage of addresses and the possibility of hoarding of address space by those who would profit later when scarcity drives the address value up. As noted above, assessing whether a claimed need is genuine (or is “demonstrated”) requires the collection and analysis of information received from the applicant, and the application of a set of evaluation criteria in a uniform manner, such that the same evaluation constraints are applied to the address distribution function in every individual case. These constraints are expressed as policies, which in turn are generated by industry players and related stakeholders, so that the constraints are the expression of common objectives. This is by no means a unique arrangement, and this structure is a very typical example of industry self-regulation as seen in many other activity sectors.
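As a rough illustration of what applying uniform evaluation criteria might look like, the sketch below applies the same two checks to every request: existing holdings must be demonstrably utilized, and the request must not exceed documented projected need. The function, the 80% threshold, and the sizing horizon are hypothetical illustrations, not actual RIR policy text:

```python
# Hypothetical sketch of a uniform "demonstrated need" check. The 80%
# utilization threshold and the one-year sizing horizon are assumptions
# for illustration, not actual RIR policy.

def assess_request(held, in_use, projected_need, requested):
    """Apply the same two criteria to every applicant: existing space
    must be demonstrably used, and the request must not exceed the
    documented projected need."""
    utilization = in_use / held if held else 1.0
    if utilization < 0.80:
        return False          # existing holdings not yet justified
    return requested <= projected_need

# 85% utilized, asking for less than documented projected need:
approved = assess_request(held=1000, in_use=850, projected_need=500, requested=400)
# Only 50% utilized, so the request fails regardless of its size:
denied = assess_request(held=1000, in_use=500, projected_need=500, requested=400)
```

The point of the sketch is the uniformity: the difficult part of the registry's job lies in gathering and verifying the inputs to such a check, not in the check itself.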
Is this address distribution function one that could benefit from the introduction of competitive suppliers?
In general terms this is an instance of a very common area of study of markets of suppliers and consumers. Competition in markets for undifferentiated commodities cannot be based on differentiation of the goods themselves, precisely because they are undifferentiated commodities. This is certainly the case in address distribution, as one address value is indistinguishable from any other. Nor can the competition be based on efficiency of production processes and the resultant marginal cost of production of the commodity, given that the good is not the outcome of any production process. The only other attribute where competitive differentiation is possible within this type of market is that of competitive differentiation of the constraining policies themselves. In other words, the competitive differentiation is expressed in terms of policy shopping, where a consumer transacts with a particular supplier on the basis that the supplier will accede to the consumer’s request. Here the competitive impetus is that a supplier is incented to dilute the constraints in order to gain a larger customer base, leading initially to accelerated consumption and decreased efficiency of usage, and ultimately to the removal of all constraint, resulting inevitably in premature exhaustion.
Applying this economic perspective to the distribution of Internet addresses, it is clear that if competitive supply systems were introduced to address space management, the basis of that competition would be in terms of policy differentiation, or, in other words competition in the relative ease of access to address space.
It appears likely that the initial outcome of such a competitive supply structure would be the introduction of differences in the form of constraints applied by the competing address suppliers. What we would probably see is policy divergence within competing management systems. By contrast, at present we have one single globally cohesive Internet, which results not only from the ubiquity of the Internet Protocol, but also from the consistency of the policies under which various Internet resources are managed. The global consistency of address management policies and specifically of the associated aspect of the use of addresses in the context of a functioning global Internet routing system is a necessary and vital part of the cohesive bonds that link together thousands of individual networks into a single global Internet.
It seems intuitive that differentiation of address policy in a competitive environment would not naturally result in an increase in the level of constraint placed on the address distribution function. Indeed the opposite is the more probable case, where the outcome of such competitive address distribution systems would be the progressive relaxation of associated policies and procedures, and a continuing acceleration in address space allocation rates, leading to early exhaustion of the entire address pool, even one as large as the IPv6 address space. This outcome would appear to compromise the fundamental goals of responsible stewardship of a finite common public resource.
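The effect of accelerating allocation rates on a finite pool can be sketched with back-of-envelope arithmetic. The pool size, starting allocation rate, and growth rates below are illustrative assumptions, not projections of actual address consumption:

```python
# Back-of-envelope sketch of how relaxed constraints pull exhaustion
# forward. Pool size, starting allocation rate and growth rates are
# illustrative assumptions, not projections.

def years_to_exhaustion(pool, yearly_alloc, growth):
    """Years until cumulative allocations drain the pool, with each
    year's allocation volume growing by `growth`."""
    years = 0
    while pool > 0:
        pool -= yearly_alloc
        yearly_alloc *= 1 + growth
        years += 1
    return years

POOL = 1_000_000_000  # notional units of address space

constrained = years_to_exhaustion(POOL, 10_000_000, growth=0.10)
relaxed = years_to_exhaustion(POOL, 10_000_000, growth=0.40)
# Because consumption compounds, even a modest relaxation of the
# constraints on allocation growth exhausts the same pool decades sooner.
```

This is why the sheer size of a pool is not, by itself, protection: under compounding growth, the exhaustion date is far more sensitive to the growth rate than to the pool size.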
The five Regional Internet Registries cooperate closely to ensure consistency of policies that are developed in their regions. Other competitors would not necessarily do so, nor would they be strongly motivated to do so under a competitive discipline. A necessary characteristic of these competitive supply proposals is that suppliers (in the case of the ITU-T proposal these would be exclusive national monopoly suppliers) should be able to manage address space in a relatively autonomous fashion, which implies not one additional address management system, but up to 200 or so such national entities. Close coordination among these various regimes would be difficult or even impossible, even if such an arrangement were to be fully and genuinely intended by all participants if open competition is the intended framework. It is also clear that competition would not be constrained to that competition between each national supply system and the relevant regional registry. As we have seen in the Domain Name business the market would likely open out across national systems. In the same way that country-based top level domains such as “.tv” or “.nu” are marketed globally, IP address supply from national registries would naturally follow the same path if there is a business advantage to expand the scope of each individual national registry enterprise. There is no natural constraint that individual IP addresses have to remain firmly rooted in any particular national environment, nor any natural imposition that such national address registries are constrained to offer services only to their national community, particularly if competition in an open market is the desired outcome. 
This is then not a duopoly of supply within any national regime between the national address registry and the associated regional address registry, but one of intense competitive pressure brought about by hundreds of actors, where the competitive pressure is ultimately expressed as the removal of any form of constraint in making address allocations. The term “headlong stampede to resource exhaustion”, or, as it is more often put, a “race to the bottom”, comes to mind to describe the consequences of such an environment.
The results of divergent address management policies would have global impact, for instance in terms of the size or stability of global routing tables, which could certainly threaten global Internet stability and routability. The irony of this form of outcome is that routing table effects would heavily impact smaller ISPs and particularly those in developing nations, which are less likely to have the latest high capacity hardware and related routing capacity, and it is this same community who are said to be in the greatest need of this form of enhanced access to IP addresses.
Further discussion regarding the impact of divergent policy systems is provided in the paper “The Geography of Internet Addressing” [PDF].
But it’s also possible that the outcomes of such a competitive supply framework could be even more perverse in their distortion of the role of addresses themselves. In a completely unregulated market there are few forms of imposition of binding regulatory control. Such markets are often subject to pressures of hoarding, speculation, and attempts to monopolize supply, to name a few potential market aberrations. In such scenarios there is the distinct risk that IP addresses will become akin to property, and be openly traded like any other form of asset. The problem is that in so doing addresses may lose their close relationship with the underlying network, and the addresses could be withheld from the network and played on the market, rather than used to support the communication function, in order to maximize the exploitable value of the address. What effectively prevents this form of outcome today within the RIR framework is the continual controlled availability of ‘new’ addresses to meet growing demand at a level of constraint that is directed at ensuring stable equilibration of demand and supply. The ultimate beneficiary of the entire system is the end user of the communications network. Addresses are readily available to meet service provider requirements, in order to meet end user needs.
If this is a natural outcome of multiple providers in a commodity market, why have we not seen such outcomes of market distortions from the existing RIR system, where there are 5 separate entities performing this supply function? While the RIRs are regarded as service organizations, the goal of the RIR system is not to remove all forms of supply constraint on the availability of access to IP address space at the expense of the viability of the network itself. Within the constraints imposed by address management policies, the RIRs have the common objective of ensuring that service quality is maximized and the operators of networks have access to addresses to support their deployment of network infrastructure. Indeed as membership based organizations, RIRs are subject to the scrutiny of their members, industry players and wider community of stakeholders, through regular open policy meetings and associated processes. This self-control structure ensures that the constraints applied at any time are an expression of the common desire for a fair and transparent set of constraints that foster an efficient and effective communications network. This is indeed the manner in which self-regulatory frameworks are intended to operate, in ensuring that through effective balancing of a full spectrum of interests, a common position of responsible constraint works in the longer term interests of the ultimate funding source of the entire industry - the end-user of the Internet.
This commentary should not be read as a diatribe against all forms of competition as a mechanism of market control. Indeed, one view of the Internet itself is that it is a very eloquent statement of the power of competitive frameworks where suppliers are incented to continually innovate and refine their offering to offer their customers a superior service in terms of quality and price. Failure to do so on the part of any single supplier leads to the ascendancy of competitive suppliers who are capable of performing their service role in a more efficient and innovative manner. But competition is not a panacea and there are a large number of situations where unfettered competition in the supply of a resource can lead to various destructive outcomes that may completely destroy the value of the resource itself. This is often seen in aspects of environmental economics where the balancing factors of an open market often cannot take into consideration the longer term interests in conserving the exploitable value of a renewable resource.
Adam Smith’s invisible hand of individual self interest working to achieve a common beneficial outcome is not applicable to every form of societal activity.
In feudal English law the “commons” were areas of land that were held in common by the general population, “the commoners,” as opposed to specific tracts that were held by the nobility. The grounds may have been pasture lands, woodlands, or open space used by the general population. The word “commons” is derived from Latin “communis” and means the quality of sharing by all or many.
Fourteenth-century Britain was organized as a loosely aligned collection of villages, each with a common pasture for villagers to graze horses, cattle, and sheep. Each household attempted to gain wealth by putting as many animals on the commons as it could afford. As the village grew in size, more and more animals were placed on the commons, and the resultant overgrazing ruined the pasture for all users. No stock could be supported on the commons thereafter. As a consequence, village after village collapsed.
In the case of the Internet, addressing lies at the very heart of the network. Without a framework of stable, unique and ubiquitous addresses there is no single cohesive network. Without a continuing stable supply of addresses further growth of the network simply cannot be sustained. Without absolute confidence in the continuing stability in this supply chain the communications industry will inevitably be forced to look elsewhere for a suitable technology platform for the needs of networked data communications. If the industry is pushed into such an uncomfortable position of turning its attention elsewhere simply because the Internet is incapable of operating its infrastructure in a stable and cost effective manner, this would be a most unfortunate unintended outcome for the Internet and its billions of current and future users of this uniquely valuable common resource.
Brilliantly written article, there.
But, on the topic of the commons, and overgrazing of them, the alternative was much worse - pastures enclosed by landlords, and let out on a share or rent basis to tenants, who were then at the mercy of the landlords.
With a benign and enlightened landlord, this arrangement worked very well indeed. With absentee, rackrenting landlords who stayed in London or on the continent [well, Europe], playing cards and drinking in fashionable clubs, just sending an agent once a quarter to extort as much rent as he could, the arrangement was a spectacular failure.
Much the same with the conflict between homesteaders / settlers and ranchers in the old west, and the role that barbed wire fencing played. There, there was a conflict between possession of huge amounts of land to feed a comparatively small number of free range cattle, and a crowd of homesteaders whose cropping techniques swept away the thin layer of grass holding the soil together through most of the midwest, producing the famous dust bowl, the resulting drought driving several families out of their houses.
IP address allocation and management, compared to land ownership and management, is an interesting study, I’d say. The parallels become immediately obvious, and we are rewarded with a lot of history to study, that, with careful policy making, need not necessarily repeat itself.
I don’t think that the word “competition” is really the right one to use; in the context of IP address allocation it is not a word that I would use.
I agree that rational allocation of IP addresses is important.
However, your note glides over the extremely difficult issue of deciding whether a claimed need for address space is “genuine” (your word) or not. What is a “genuine” need to one may be considered frivolous by another.
That decision is today made using policies that give more weight to quantitative technical metrics than they do to softer social and economic metrics. And there is not a clear articulation in those policies of the desired balance between short-term goals and long-term goals.
We don’t really know the total effect of the RIR and ICANN/IANA IP address policies. My own sense is that at least some of the drive for deployment of NATs comes from the perception that it is easier to deploy a NAT (and thus break the end-to-end principle of the net) than it is to obtain address space. In other words, it may well be that the attempt to conserve and aggregate that is in the existing IP address policies is causing a subtle erosion in the end-to-end quality of the net.
In any event, these are difficult choices - and I believe that the RIRs have been slowly working through to answers that are adequate if not perfect. My primary concern about IP address allocation is that it is perceived as sufficiently arcane that those who are more distantly affected by these policies tend to refrain from engaging in the making of these policies and, as a consequence, the softer social and economic aspects of IP address policy tend to be under explored. And at the ICANN level, IP address issues seem to be an institutional blind spot.
As for “competition”, or rather as for the idea of creating per-country pools of IP addresses. From a technical perspective that seems like a bad idea to me - it has usually been my experience that in most cases dividing a single pool into multiple pools leads to less efficient use. My own rule of thumb is that a pool of resources should not be split unless there is a specific concrete justification for doing so.
To make a difficult issue even more difficult, in the case of IPv4, I do not believe that we have sufficient total space to absorb the impact of the inefficiencies that would be caused by per-country pools.
But with IPv6 with its much larger number of addresses the answer might be different.
While we are discussing the fragmentation caused by per-country pools, it is often overlooked that the RIRs form per-continent pools and that the same kind of fragmentation and inefficiencies are caused by having multiple RIRs as would be caused by having per-country IP address registries. The difference is merely of the amount of inefficiency (less) of having the existing continental RIRs versus the amount of inefficiency (more) were there to be per-country IP address registries.
So in the sense that per-country registries are simply a more fine-grained system than we have in the existing continental RIRS, the difference may not be a difference in kind but rather simply the difference between two points on a continuum.
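That continuum can be made concrete with a toy reservation model, in which each pool rounds its reservation up to a power-of-two block (as address allocations are) and must hold at least a minimum block regardless of its demand. All of the numbers here are illustrative assumptions, not measurements of any registry:

```python
# Toy model of pool fragmentation: each sub-pool rounds its reservation
# up to a power-of-two block and holds at least a minimum block
# regardless of demand. All numbers are illustrative assumptions.
import math

def space_reserved(demands, headroom=1.5, min_block=256):
    total = 0
    for d in demands:
        need = max(d * headroom, min_block)         # headroom for growth
        total += 2 ** math.ceil(math.log2(need))    # round up to a block
    return total

TOTAL_DEMAND = 1024  # the same aggregate demand in every scenario

one_global_pool = space_reserved([TOTAL_DEMAND])
five_rir_pools = space_reserved([TOTAL_DEMAND / 5] * 5)
per_country_pools = space_reserved([TOTAL_DEMAND / 200] * 200)
# Identical demand strands progressively more space as the pool splinters.
```

The ordering, not the specific figures, is the point: per-pool rounding and per-pool headroom mean that total space reserved grows with the number of pools, which is the difference in degree between five continental registries and 200 national ones.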
—karl—
For some more perspective on competition and Adam Smith’s economic theories, try Tom Vest’s paper “The Wealth of Networks”, http://www.pch.net/resources/papers/the-wealth-of-networks/
> quantitative technical metrics than they
> do to softer social and economic metrics.
Nothing much more than “I’ve made the best possible use of the IP space I have, I plan to use so much more IP space in the coming year, so please allot me some IPs”.
People deploying NAT and other people pointing to this as evidence of the shortage of v4 addresses - NAT deployers don’t typically get IPs direct from the RIR. They’re a much smaller operation who gets IPs from their ISP. If the ISP is stingy or excessively bureaucratic about assigning IP addresses, deploying NAT becomes the path of least resistance.
This situation is quite likely to be the case, especially when you consider the customers of several incumbent telcos in china / other parts of asia / africa - places from where most of the “we’re so short of IP addresses we HAVE to use NAT” complaints originate
Per country IP registries - what does exist is LIRs - local internet registries like JPNIC for example, or CNNIC, that receive IPs from APNIC and take on the task of distributing IPs in their country, but according to the APNIC framework (including requirements about justifying IP addresses, etc)
In the ITU v/s RIRs case being discussed, it is more of a turf war / battle for control, where countries want to just be allotted IP space, which they can manage and distribute as they see fit, with or without regard for the frameworks already in place.
Geoff and Suresh have said it well - this is essentially a blind power grab by the ITU.
You can’t have competition amongst what are basically regulatory organizations - you just get more regulators. The RIRs are membership organizations, not cartels. The members can vote to change prices and policies at will.
A more radical change, such as allowing the trading (buying and selling) of IP Address space as a commodity would have much greater impact on the availability of address space. If the ITU is truly in love with the invisible hand, let them embrace it by convincing the RIRs that address space should be a fungible commodity.
Geoff, Paul,
Well written article.
I have heard the following arguments at various times. Doubtless, so have you:
* IP addresses are hoarded by “developed nations” - if only “underdeveloped” nations [replace with similar word] were given more IP addresses, the Internet would grow more/better
* Since a regional IP allocation strategy already works, there is no technical or principle-based reason to not have it work on a country basis
* IP addresses are a national resource; no small group of geeks should control it
* With the huge number of IPv6 addresses available, there is no mathematical or logical reason to create an “artificial scarcity”
* The ITU helped create a smooth, well done national telephone system worldwide, with clean bridges nation to nation. And this was done decades ago
* This nation based phone system has brought billions of dollars of economic prosperity to the nations, allowing nations decide the rates and tariffs themselves and avoiding undue regulation. Address registries are keeping nations from their legitimate economic due
* Each nation will have power and be able to govern “their” IP space in their own national self-interest, without interference from other global bodies
I don’t believe there is a clearly articulated, well-written response that responds to many of these statements. Without an appropriate response, I fear that many administrators of Internet policy in many nations will find far more resonance with the appeal to the nationalistic instinct than the appeal to rationale and global addressing sanity.
Ram, here are a few answers, sort of -
A lot of the points you raise are covered in several papers available on NRO / the apnic website’s public policy section etc, but ...
1. It is a case of demand and supply, and any entity, anywhere, can get IP space as long as they can justify their current utilization. Best of all, I’d suggest showing Geoff Huston’s BGP movie to skeptics
2. Economies like China are currently some of the largest recipients of IPv4 IP blocks, and large netblocks have been allotted to AFRINIC and LACNIC for distribution, in recent times.
3. The telephone number system, for all its complexity, is nowhere near the size, magnitude or costs of the internet. If you were in india in the mid 90s when the ‘net was first introduced, you too will remember paying INR 15K for a flaky dialup, and very high local call costs increased by the number of times you had to redial because your modem dropped without connecting. If the internet is supposed to work the same inefficient way the phone system works, and is subject to the usual extortionate settlement charges that form the bulk of developing country telco earnings, and absorb a whole lot of their losses ... [now guess who the strongest proponents of settlement based peering are?]
-srs
Suresh, I agree with your perspective; however, in some nations, this perspective is far less attractive than the nationalistic one - witness the public statements made by various government folks at the last WGIG meeting in support of nation-based IP allocation policies.
In addition, conventional wisdom is that the phone network just “works”, not that it is inefficient; similarly, the Internet is often defined as “work in progress” (implying not as stable). More education is needed to chip away at this conventional wisdom.
I read the session transcripts of that WGIG meeting that have been posted on www.wgig.org, and am gratified to know that Liechtenstein (on behalf of the presidency of the EU), Australia et al do understand the issues involved.
Most of the people making those comments (especially the representatives from India and China, both of whom I have had the pleasure of meeting at various other conferences) are, with all due respect, not aware of the true technical or even social / economic complexity of the issues being discussed, nor are they really aware of the true nature and governance of the RIRs. Oh, I am glad to see that the delegate from Syria was in full form. He is, how shall I put it, quite well known for his frequent contributions at other ITU meetings :)
Here is my perspective on a few issues raised there ..
APNIC, for example, is governed by its AC / EC, and members from any APNIC region can, and regularly do, stand for election to these committees. This, and other RIRs, are a cooperative undertaking by ISPs from around the region of coverage of the RIRs, asiapac in the case of APNIC.
ICANN, both the board and its At Large component, is fully open to people from any country, anywhere, standing for election and making their opinions known as well, besides helping them gaining some better understanding of the underlying processes.
So, raising the “other” invisible hand - the invisible hand of US Government remote control - is not going to be productive or useful.
Nor are the usual claims that “China is short of IP addresses” - China has been receiving increasingly large blocks of IP space in the recent past, and all IPs there are allocated by the LIR, CNNIC - which shares the apnic common pool of addresses, but is responsible for allocating it locally.. so any shortfall of IP space that China raises in future should result in them being referred right back to CNNIC, which is a quasi governmental agency, I believe.
Then there is the “holders of international bandwidth are at an advantage” theme that gets harped on - this time by India. Some people in India, and others active in the asiapac network operators conferences, have been advocating the need for an internet exchange long before NIXI was formed. However, the way NIXI took shape and is being run is far below potential, and improving this (plus adding an Akamai cluster, and mirrors of popular download sites, colocated at the datacenter of any of the ISPs that peer at nixi) will go a long way towards improving this situation. Oh, I almost forgot. India’s input to the WGIG said something about root servers all being located in developed countries. I am sure now that they’ve retained Afilias for the .in ccTLD they’ll have some slightly better knowledge of anycast. Put it to them that the root nameservers are anycast, and mirrors of (say) I Root or F Root would be an excellent thing to have at the NIXI [or at an isp that fully peers, advertises all its routes, at nixi]
I could say more, but these issues have been pointed out in the past, and right now I am concerned that the effort to promote Internet governance is being superseded by an effort to wrest control of the existing structures from their incumbent operators, who hold them in trust for the international Internet community and operate them as directed by that community.
This is not a war of independence against a colonial power, though it appears to be projected as such.
If governments wish to participate in the governance of the Internet, the existing framework is extremely accommodating and can fit them in quite easily. Government-owned telecom and Internet providers (BSNL / MTNL in India, China Telecom in China, etc.) can, and should, play a larger role in this, by standing for election to the APNIC AC for example, and by actively involving themselves and contributing their expertise in other ways at APNIC.
-suresh
Tom Vest has just posted an excellent CircleID article on this subject - http://www.circleid.com/article/1064_0_1_0_C/
Required reading, at least for the Indian and Chinese representatives at WGIG, before they continue to advocate the Zhao proposals.
In response to Suresh Ramasubramanian:
You wrote:
“ICANN, both the board and its At Large component, is fully open to people from any country, anywhere, standing for election”
My opinion and experience (I was the only board member ever elected to ICANN to represent North America) are quite to the contrary.
ICANN has eliminated all means through which people from any country can stand for election for any role in ICANN that is more than that of a mere observer.
ICANN’s “ALAC” to which you allude requires that a person join an ICANN approved club. That club must, in turn, join a larger ICANN approved regional club. That regional club can name a few people to yet another ICANN approved club. And that regional club can name a few people to a global, but still ICANN run, club. That global club can then name a few people to a nominating committee. That nominating committee is then diluted by a strong component of industry designated members. Eventually this last club gets to appoint a small subset of ICANN’s board of directors, the rest of the directors coming from pre-selected industry segments.
The system that ICANN has established for the public to participate strangely resembles the old system of soviets (committees) that existed in the old USSR. And the net effect, that of insulating the decision-making forums from the public, is about the same.
On another point, you mention the lack of technical expertise on the part of some WGIG participants.
It is easy to dismiss the people who are working on the WGIG issues because they lack technical credentials. But one must realize that most people from the technical community are equally ignorant of matters of law, economics, and governmental processes. For the same reason, it is just as easy for those on the non-technical side to dismiss the opinions of those with more technical backgrounds.
Karl
I won’t comment on ICANN here. But in response to this:
On another point, you mention the lack of technical expertise on the part of some WGIG participants.
> It is easy to dismiss the people who are working on the WGIG issues
> because they lack technical credentials. But one must realize
I rather agree; that is a valid point. In response, I’ll just repeat what I said at an Internet governance panel organized by APDIP/UNDP during APRICOT in Kyoto:
http://www.apnic.net/meetings/19/docs/transcripts/igov.txt
The ideas are mine. The words are mine. The English as transcribed at that link sure doesn’t sound like mine, though; I guess I can put it down to transcription error :) Anyway, here goes.
I have filled in one or two blanks left in the transcription from my memory of what I said there, and corrected some of the English as well.
————-
So here what we have is technologists pointing to poor, misguided regulation and saying that it isn’t going to work, or pointing out that the Internet has no national boundaries, so traditional regulation is not going to work. On the other hand, we have governments insisting that they need a role because they have substantial interests that are being affected by this, for whatever motive: to preserve a national monopoly, such as an incumbent telco that brings substantial money to the government; for other, even more mercenary reasons, such as a plain desire for control; or from a sincere desire to do good and ensure good for the people through good governance. So where and how do we channel the energies of governments towards good governance, where they can do the most good?
For example, you have a lot of issues that depend on international cooperation between governments, and between governments and industry. Cyber crime, for example, and anti-spam regulations - wherever there is a crossover between the online world and the offline world, and where participation and cooperation between law enforcement / judiciary / governments / ISPs around the world is essential in order to get the job done.
In the course of my job I work with a whole lot of governments, besides working with other ISPs and businesses. And I keep wishing there were already a framework in place, instead of having to build one from scratch at ITU/OECD meetings, as well as at network operator meetings like the ones I help organize at apcauce.org.
In the case of the telecom sector and the inequitable pricing of telephony and bandwidth in developing economies, governments could help localise the industry (by helping set up Internet exchanges, for example) and arrange or broker appropriately equitable prices with other providers.
That’s what I was trying to say. How do we get governments to participate in the current governance process, without undermining the foundations of the existing process, and without unintentionally making their participation intrusive or harmful?