The data exists and doesn’t exist at the same time, until observed.
The Internet is a marvel of modern society, breeding a culture that didn’t really exist until the late 20th century. Lately, I’ve been thinking about the context of the Internet and the networking that has arisen from it all. There is, of course, the W3C setting standards for things like HTML5 and WebGL, and I’m really interested in all of that. After all, serving as Vice Chair of the IEEE Virtual World Standard group comes with that interest.
But the more I think about the current system that we call the Internet, the more I think we’re missing something really important.
The thing is, we essentially have a brilliant, decentralized network of specific somethings around the world, and it has done an excellent job of connecting society while fostering the information age. But that’s just the problem… all of these servers around the world serving up their specific something, each the master of its own domain. Whenever the traffic and load get too high, there is load balancing on-site in the data center, or other data centers holding copies of that specific something to serve up.
Even in the cloud storage structure of, say, Amazon Cloud, it’s an Achilles’ heel. When the cloud goes down, hundreds or thousands of websites disappear, along with a plethora of assets that were stored there. I think this is because of the context of the data and how it is addressed at the larger scale.
After all, if Amazon is the data center that holds your stuff, when it goes down, your stuff disappears with it. Of course this isn’t just an Internet problem; it’s a virtual world problem as well. All of your stuff is essentially in the hands of Linden Lab if you use Second Life, and that asset system lives in a data center at some central location. If that data center goes offline, the virtual world goes with it.
This is really a DarkNet structure of doing things – the user may be anonymized, but the traffic and the data remain specific and meaningful. Somewhere on a server, the file you are looking for is either there or it isn’t. That troubles me… I see a world of servers all acting as singular information repositories and destinations rather than a collaborative volume working together.
What if the data was anonymous but the context wasn’t?
Right now, when you go to Google.com, a data center over there serves up Google-specific files to your web browser. But what if the data at the Google data center had nothing specific to Google in it – just a bunch of asset keys and bulk volume storage, where the data itself was gibberish, multi-use data?
In a peer-to-peer fashion, the servers of the world would act as a collective, singular master volume of data. This is BrightNet thinking… and it looks a lot like the evolution of the Internet when you think about it.
In the BrightNet manner of thinking, everything is part of everything else. When we look at the data, there isn’t anything specific going on there… just a bunch of 128 KiB blocks of random data. Think of it like the periodic table of data.
We’re then no longer asking a server or domain for the data itself but for the context keys that reconstruct the data specific to that domain, while the data comprising that domain is spread around the world among every server connected to the Internet. This way no single server houses the unique data for anything – only the simple keys needed to get to that data.
What I’m implying here is that every server in the world could act as a single, decentralized master volume of data, harnessing the potential of the entire Internet all at once. If whatever you are looking for is shadowed as multi-use data across millions of servers around the world, in basic reconstructive blocks that also serve as parts of other files, it would be nearly impossible to lose any data that enters the system.
Even if the original domain and data disappears.
So long as the Internet itself remains, interconnected with polymorphic data, anyone holding a context key can retrieve the file from the worldwide master volume.
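The block-and-context-key scheme described above can be sketched with a toy XOR construction (a technique used by real owner-free storage systems; the function names and the in-memory `pool` standing in for the worldwide volume are hypothetical):

```python
import os

BLOCK_SIZE = 128 * 1024  # 128 KiB blocks, as described above

def store(file_bytes, pool):
    """Split a file into blocks; store each as a pair of blocks that
    are individually indistinguishable from random noise. The returned
    context key lists which block IDs reconstruct the file."""
    context_key = []
    for i in range(0, len(file_bytes), BLOCK_SIZE):
        chunk = file_bytes[i:i + BLOCK_SIZE].ljust(BLOCK_SIZE, b"\0")
        pad = os.urandom(BLOCK_SIZE)                 # random cover block
        mix = bytes(a ^ b for a, b in zip(chunk, pad))  # chunk XOR pad
        pad_id, mix_id = len(pool), len(pool) + 1
        pool[pad_id] = pad   # both stored blocks are pure "static"
        pool[mix_id] = mix   # neither alone reveals the file
        context_key.append((pad_id, mix_id))
    return context_key, len(file_bytes)

def retrieve(context_key, length, pool):
    """Reconstruct the file by XORing the referenced block pairs."""
    out = bytearray()
    for pad_id, mix_id in context_key:
        out += bytes(a ^ b for a, b in zip(pool[pad_id], pool[mix_id]))
    return bytes(out[:length])
```

Notice that nothing in `pool` is specific to any file: every stored block looks like random data, and only the context key gives it meaning.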
I think the Internet itself is the first stage, because it provides the standard for the underlying networking that enables the current generation. The next step is revolutionizing storage and retrieval to adequately utilize the latent power of the Internet, which is still going to waste.
By approaching it like this, you begin to realize that treating every server connected to the Internet as part of a collective volume of cloud storage makes perfect sense. It’s like breaking data down into a periodic table and transmitting only the basic blocks needed to reconstruct files – much as things in real life are made of elements, which are made of atoms, and so on.
Of course, this has wide reaching implications for virtual world technology as well.
See also: Multi-Use Data
It seems highly unlikely to most people that the same exact data can be used to represent several things at once. But indeed, the same digital representation can be “both a floor wax and a dessert topping.” The reason is purely mathematical.
- There are an infinite number of ways to digitally represent any given work.
- Every digital representation can be used to perceive an infinite number of works.
A BrightNet simply chooses one of the infinite ways that is non-copyrightable. (There must be at least one, or else every possible digital representation would already be copyrighted.) It just so happens that, in most cases, the same representation is already being used for other things.
Think of it like an anagram. You rearrange the letters of a word or phrase to find new meaning in the fundamental blocks of that data (i.e., the letters). So Clint Eastwood turns into Old West Action. You see the same exact data and know it now has multiple meanings… which is what Polymorphic Data is about.
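The anagram idea can be shown in a few lines: one block of apparently random bytes participating in two different works at once, each recovered by its own context key (the blocks and file contents here are purely illustrative):

```python
import os

block = os.urandom(8)            # one shared block of "static"
file_a = b"floorwax"
file_b = b"dessert!"

# Each file stores only its XOR difference against the shared block,
# plus a context key saying "combine me with that block".
key_a = bytes(x ^ y for x, y in zip(file_a, block))
key_b = bytes(x ^ y for x, y in zip(file_b, block))

# The very same block reconstructs both works, given the right context.
assert bytes(x ^ y for x, y in zip(key_a, block)) == file_a
assert bytes(x ^ y for x, y in zip(key_b, block)) == file_b
```

The shared `block` is meaningless on its own – it is a floor wax, a dessert topping, or neither, depending entirely on which context key you hold.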
On a worldwide scale it means an Internet of Nothing Specific. The collective volume can house just about everything and yet, nothing at all up front. Just a sea of static… waiting to find meaning.
That MP3 library you have is a great example – an anagram of basic data that can be rearranged into many other things that aren’t MP3 files. Maybe the Netflix data center would double as the Library of Congress under Polymorphic Data…
This is the storage paradigm that I think about… and how the Internet of the future looks to me. Everything is everything else… and nothing at all until given a context to reconstruct. That data is decentralized all over the world via every single data center and server hooked up to the Internet.
If you’re feeling particularly skippy, take a look at the following PDF for details: