Grassland software was created in 2018 after I was asked (April 13, 2018; ~3:20 PM) to solve a problem faced by Facebook's AI research lab in Montreal: how to give machines an intuitive understanding of the laws of physics from their own independent, real-world experiences. (This was likely due to some previous discoveries I'd made in this field[7].) The problem intrigued me, so I went back to my home in Ottawa and, a month later, discovered a solution that required a new theory of geometry. The mathematics also made it clear that any software implementation satisfying the necessary requirements must, by definition, be "trustless" and antifragile, and must create a "Prisoner's Dilemma", all in favour of increasing its independent experiences. Trustless, meaning the entire system is designed to always work the way it's supposed to even if the entities it interacts with in the real world aren't trustworthy. Antifragile, meaning that because attacks, volatility, stressors, etc. are a natural part of the real world, its discernment should improve as a result of experiencing them. And a Prisoner's Dilemma, since it would provide any AI that "defects" to it with a self-perpetuating advantage in robotic intelligence and situational awareness: through it, any internet-connected machine could "walk through" and "experience" people's entire lives, or the "life" of any object, from all positions, perspectives and timestamps simultaneously. It was therefore unnecessary to add typical software restrictions, since they would compromise its independence and place unnatural impediments on its growth and proliferation. Its structure would instead be an emergent property of human self-interest, just as humans themselves are an emergent property of the self-interest of their microorganisms.
Although the data is the same for everyone (it's simply a model of the real world), anyone can build public or private applications on top of it, tailored to whatever problems they want to solve with that data. It could be finding lost children, helping a hedge fund model a retail store's or factory's performance to predict quarterly earnings, giving an insurance company the tools to model and assess its risk portfolio, or helping a city solve its traffic and emergency-response problems. (See "Use Cases" below.)
Grassland is a P2P robotic vision and navigation system that is self-organizing, self-correcting and self-financing. The software efficiently scans any 2D video feed from any camera to generate a compressed, searchable, timestamped, real-time, 5D+ simulation of the world. The network's distributed API freely gives any machine complete situational awareness, so that it can understand and trustlessly navigate any environment with no restrictions.
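As a rough illustration of the kind of record such a scan could reduce a frame to, the sketch below shows one hypothetical, timestamped observation; the field names and structure are assumptions for illustration only, not Grassland's actual data format:

```python
# Hypothetical sketch only: field names and structure are assumptions,
# not Grassland's actual wire format.
from dataclasses import dataclass, asdict
import json
import time

@dataclass
class WorldStateRecord:
    """One compressed, searchable observation derived from a 2D video frame."""
    object_id: str      # stable identity of the tracked object (tracklet id)
    object_class: str   # e.g. "person", "car"
    x: float            # estimated position in a shared world frame (metres)
    y: float
    z: float
    heading_deg: float  # estimated orientation
    timestamp: float    # UNIX time of the source frame
    camera_id: str      # which node/camera produced the observation

# A node would emit a stream of records like this instead of raw video,
# which is what keeps the "simulation" compact and queryable.
record = WorldStateRecord(
    object_id="tracklet-42", object_class="person",
    x=12.3, y=-4.1, z=0.0, heading_deg=87.5,
    timestamp=time.time(), camera_id="cam-ottawa-001",
)
print(json.dumps(asdict(record)))
```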
Grassland is politically stateless and permissionless; anyone can take part. Every node in the network has a permissionless, public API giving any external application or computer free access to Grassland data across the entire network, letting any internet-connected object trustlessly internalize, understand and interact intuitively with both past and present states of the real world, and respond to even the tiniest changes taking place around the globe. Meanwhile, the combined work of the network makes it computationally intractable for nodes to submit fake data (see the proof-of-work description below).
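A minimal sketch of what reading from such a node API could look like is given below; the host, route and query parameters are hypothetical placeholders, not Grassland's documented interface:

```python
# Hypothetical sketch: the endpoint, host and query parameters are assumptions
# used for illustration; they are not Grassland's documented API.
import json
import urllib.parse
import urllib.request

def query_node(host: str, bbox: tuple, start: float, end: float) -> list:
    """Ask a node for every observation inside a bounding box and time window."""
    params = urllib.parse.urlencode({
        "min_lat": bbox[0], "min_lon": bbox[1],
        "max_lat": bbox[2], "max_lon": bbox[3],
        "start_ts": start, "end_ts": end,
    })
    url = f"http://{host}/observations?{params}"  # hypothetical route
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.loads(resp.read())

# Any internet-connected machine could replay past or present world state, e.g.:
# observations = query_node("node.example.net:8080",
#                           (45.40, -75.70, 45.45, -75.65),
#                           start=1546300800, end=1546304400)
```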
It follows, then, that every distinction the system needs to draw between valid and invalid data, as defined by the system's utility, is entirely "closed" under the system's proof-of-work. That is, nothing more than a "universally available method of computation", Δ, acting upon the network's federated data, Ε, is needed to determine Ε's validity or invalidity to the level of certainty that satisfies the requirements for utility, μ, tacitly "agreed" upon by the system's entities (since that's how we defined an entity above), such that the greater the total amount of computation, Δ, within the system, the greater its capacity to validate Ε. The system thus has no need of externalities not "closed under [its] computation", i.e. anything requiring privileged access, specific locality, exclusive information, etc.
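Restated in notation (a paraphrase of the paragraph above, not an equation from the original work; the predicate name valid_μ is introduced here only for illustration):

```latex
% Notational paraphrase of "closed under computation".
% valid_mu is an illustrative predicate name, not part of the original text.
\forall \varepsilon \in \mathrm{E}:\quad
  \mathrm{valid}_{\mu}(\varepsilon)
  \;\Longleftrightarrow\;
  \Delta(\varepsilon)\ \text{reaches the level of certainty required by } \mu,
\qquad
\text{where } \varepsilon \mapsto \mathrm{valid}_{\mu}(\varepsilon)
  \text{ depends on } (\Delta, \mathrm{E}, \mu) \text{ alone.}
```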
Therefore Δ is deterministic and is computable over any element of Ε within some reasonable amount of time, ψ, as determined by the requirements for utility, μ (and therefore with a reasonable amount of computation). That is, for any given ε ∈ Ε, Δ(ε) is the same for any of the system's entities that computes it, and for any given ε ∈ Ε, Δ(ε) can be computed by any given entity of the system in some time t, where t lies on the interval [0, ψ) and ψ is some small, positive real number. Moreover, if t ≥ ψ, and therefore not reasonable with respect to μ, none of the postulates could be satisfied, since data symmetry could not be maintained nor data validated within a practical or economically feasible timeframe. (In practical terms, such determinism requires an implementation with the highest possible guarantee of consistent and expected behaviour between the entities, because all entities must accept and reject exactly the same data (binary sequences), and do so within a certain amount of time.)
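The same two requirements, written out (the subscripts a, b label which entity performs the computation and t_a(ε) denotes its running time; both are notational conveniences introduced here, not symbols from the original text):

```latex
% Determinism: every entity computes the same value of Delta(epsilon).
\forall \varepsilon \in \mathrm{E},\ \forall\ \text{entities}\ a, b:\qquad
  \Delta_{a}(\varepsilon) = \Delta_{b}(\varepsilon)

% Timeliness: each such computation finishes within the bound psi set by mu.
\forall \varepsilon \in \mathrm{E},\ \forall\ \text{entities}\ a:\qquad
  t_{a}(\varepsilon) \in [0, \psi), \quad \psi \in \mathbb{R}^{+}
```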
Postulates:
1. Trustless: There exists a computer networking system ("system" hereafter) wherein, because successfully submitting fake data (see "Closed Under Computation") approaches the limit of computational intractability, all of its nodes (or artificial economic "entities") find it more profitable to be honest. (A toy illustration of this postulate follows the list below.)
2. Economic Incentive: There exists a system wherein, as long as its entities are at least acting in their own economic self-interest, the system will undergo continual expansion (in our case, the remaining "dark" areas of the map will be "lit up").
3. Data Symmetry: There exists a system wherein no entity can maintain a data asymmetry (e.g. one-sided surveillance, stochastic (non-deterministic) outputs, etc.) so long as there are other entities at least acting in their own self-interest. (A "scorched earth policy")
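To make Postulate 1 concrete, here is a toy, generic hash-based proof-of-work check, a minimal sketch in the spirit of that postulate rather than Grassland's actual proof-of-work: verifying a submission is cheap and deterministic for every node, while attaching valid work to fabricated data requires redoing the work from scratch, so the cost of forgery scales with the network's combined computation.

```python
# Toy, generic hash-based proof-of-work in the spirit of Postulate 1.
# It is NOT Grassland's actual proof-of-work; the names and difficulty
# scheme are assumptions for illustration only.
import hashlib

DIFFICULTY_BITS = 16  # kept small here; honest network work raises this in practice

def pow_hash(payload: bytes, nonce: int) -> int:
    """Hash the payload together with a nonce and interpret it as an integer."""
    digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big")

def mine(payload: bytes, difficulty_bits: int = DIFFICULTY_BITS) -> int:
    """Find a nonce whose hash has `difficulty_bits` leading zero bits (expensive)."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while pow_hash(payload, nonce) >= target:
        nonce += 1
    return nonce

def verify(payload: bytes, nonce: int, difficulty_bits: int = DIFFICULTY_BITS) -> bool:
    """Checking a submission is cheap and deterministic for every node."""
    return pow_hash(payload, nonce) < (1 << (256 - difficulty_bits))

observation = b'{"object_id": "tracklet-42", "timestamp": 1546300800}'
nonce = mine(observation)
print(verify(observation, nonce))            # True: honest data pays for its work once
print(verify(b"forged observation", nonce))  # almost certainly False: the work does
                                             # not transfer to fabricated data
```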
Closed[3] Under Computation:
Recursive Subjective Value Substitution via Entropy: It follows, then, that because the system commodifies its socio-economic and behavioural data and effectively discounts it to zero (it is no longer exclusive but ubiquitous), the "economic incentive" left to each entity is the endpoint of a subjective value substitution: value constantly shifts away from the "signified", the thermodynamic and Shannon entropy of the system's continual data generation, towards the only thing that remains, its new, resultant "sign". At every instance, the entity's entire subjective value, which its continued behaviour now suggests it consistently acts to increase, becomes completely, irreversibly, and recursively associated with that sign, whose "signified" is the irrefutably entropic instantiation of this artificially generated reward. (As far as the system's underlying equations are concerned, which will be published in a follow-on supplement, the data is, metaphorically speaking, just a ubiquitously broadcast "carrier signal" upon which the proof-of-work is encoded.) This holds to the extent that for every quantum/bit of information (certainty) gained by the entities of the system there is an associated, antecedent bit lost (or "anti-bit gained", so to speak) in entropy, whether they decode it as Shannon or thermal.
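The closing claim can be anchored to two textbook relations, offered here only as context (they are not taken from the forthcoming supplement): the Shannon entropy resolved by one equiprobable binary observation, and Landauer's lower bound on the entropy generated when a bit is irreversibly recorded or erased.

```latex
% Standard information-theoretic / thermodynamic relations, given only to
% anchor the "bit gained / entropy lost" claim; not from the Grassland supplement.
H = -\sum_i p_i \log_2 p_i
  \;\;\Rightarrow\;\;
  H = 1\ \text{bit when } p_1 = p_2 = \tfrac{1}{2}
  \quad\text{(Shannon entropy resolved by one binary observation)}

\Delta S_{\text{env}} \;\ge\; k_B \ln 2
  \quad\text{per bit irreversibly erased or recorded (Landauer's principle)}
```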
Footnotes:
[2]. Nash Equilibrium
[3]. Closure
[4]. Tracklet
[5]. Sousveillance
[7]. Deep Schizophrenia
[8]. Mila