Saturday, September 22, 2007

beauty in h-grammars under m-languages

Exquisite pattern formation introduced by sufficiently deep h-grammars is mind-boggling. The after-image of a tree, burned against a crystal blue sky, the ensemble array of leaves whisking in the wind. Underneath it all lies layer upon layer of conversant grammars, quietly producing symbols with certain probabilities, based upon the heterogeneous and open nature in which they exist. Rivers of energy and information flow through these systems; coagulate in precious miniature miracles. A resounding harmony, and often randomness, of many statements proposed in particular contexts. Sometimes, when considered together, the impact of the entire set of statements grows in magnitude, to engulf the being of whatever subjectivity happens to be witness. As the h-grammars become more flexible, malleable, their depth shimmers. Inherently fractal.

I truly believe the adjacent possible extends beyond the last fence we can imagine. Even though by expanding outwards we manipulate the state of affairs of a larger space, that space remains larger than we can truly imagine or intuit. It is beyond meta*, which is in itself beautiful.

Friday, September 21, 2007

Holy Experience

The semantics of the language would have to be rooted in a strict but putatively dynamic framework. Ideally one would be able to incorporate symbols as arbitrary as possible, to build complex underlying suites of hyper-relationships. I would argue that for most of these systems the underlying hyper-grammar would not be as complex as it could possibly be. In fact, in the power set of possible relationships between the elements of the system, most of the elements would be conditionally independent of one another at some point. This sparsity of higher order structure would allow one to construct incredibly powerful models without having to make them so intensely complex as to be incomprehensible.
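That sparsity can be made concrete with a toy simulation (everything below is an illustrative assumption, not a model of any real system): in a simple chain X → Y → Z, X and Z are marginally correlated, yet conditionally independent once Y is accounted for — most pairwise relationships "drop out" under the right conditioning.

```python
import numpy as np

# Hypothetical chain X -> Y -> Z: X influences Z only *through* Y.
rng = np.random.default_rng(0)
n = 50_000
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(size=n)   # Y depends on X
z = 0.8 * y + rng.normal(size=n)   # Z depends only on Y

def partial_corr(a, b, given):
    """Correlation of a and b after regressing `given` out of each."""
    ra = a - np.polyval(np.polyfit(given, a, 1), given)
    rb = b - np.polyval(np.polyfit(given, b, 1), given)
    return np.corrcoef(ra, rb)[0, 1]

print(round(np.corrcoef(x, z)[0, 1], 2))  # clearly nonzero marginal correlation
print(round(partial_corr(x, z, y), 2))    # near zero: X is independent of Z given Y
```

In a large system, most pairs behave like X and Z here: once a handful of intermediaries are conditioned on, the direct relationship vanishes, and the model stays tractable.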

The trick would be to incorporate the necessary and sufficient nonlinearities on the hyper-grammar that would allow for the partitions of the state space that were most intuitive and meaningful. This brings into question what intuitive and meaningful mean in this context... I would argue that they mean the partition would give insight into the relationship between the local and global behavior of all the elements of the system. This is where it is definitely necessary to include the higher order conditional independencies, since without those the higher order structure cannot feed down and change the structure of the local interactions in increasingly complex ways. I think that is why most computational experiments with autocatalytic sets or evolutionary algorithms eventually tap out: they do not explicitly incorporate the effect of the entire set of interactions, taken as a whole, on the local interactions of each element in the system. This feedback between levels of the hierarchy is absolutely necessary to have a true understanding of any of these complex systems, since it is the feedback between the emergent whole and the local behavior that gives rise to all the qualia and phenomena that we as human beings find beautiful and awe-inspiring.
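A minimal sketch of that level-crossing feedback, with entirely made-up parameters: each local unit relaxes toward its neighbors (bottom-up coupling), while the global mean of the whole ensemble feeds back into every local update (top-down coupling).

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=100)  # 100 local units with random initial states

def step(x, local_w=0.5, global_w=0.3):
    neighbors = (np.roll(x, 1) + np.roll(x, -1)) / 2  # bottom-up: ring of local interactions
    global_field = x.mean()                           # the emergent whole, summarized
    # top-down: the ensemble state modulates each local update nonlinearly
    return ((1 - local_w - global_w) * x
            + local_w * neighbors
            + global_w * np.tanh(global_field))

for _ in range(200):
    x = step(x)
print(round(float(x.std()), 3))   # fluctuations around the mean are damped out
print(round(float(x.mean()), 3))  # the global field itself drifts only slowly
```

Dropping the `global_field` term leaves a purely local diffusion — exactly the kind of model that "taps out" because the whole never acts back on its parts.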

It would be amazing to synthesize all sorts of coupled systems in one realization of the process. This would allow one to partition phase space in elegant and beautiful ways. I have a feeling that music does this to one's mind. The patterns of neurons flicker and flare bursting out of nowhere and fading into blackness.

The trick for the m-language will be to develop the appropriate hyper-grammar. This h-grammar will have to be constructed in a very clever way, such as to make the large order conditional independencies and the local conditional independencies play off of one another in highly intuitive ways. This means that the effect of the suite of all genes on the interaction between any gene, or the effect of the ensemble structure of the protein on any particular van der Waals interaction, will be framed in such a way as to capture as many of the dynamically sufficient characterizations of the system as possible. One way to do this would be to assign some sort of low dimensional manifold to the suite of all variables, then "integrate out" that manifold from all of the local conditional independencies (as if it were a nuisance parameter). For example, send the states of the entire system to a two-dimensional sphere. This lower dimensional manifold would then represent the collective effect of the entire system, and wherever on the sphere the particular realization was sent would be integrated out of the conditional relationships for each of the variables. One could choose as arbitrary (and as high dimensional) a manifold as one wished that would hopefully reflect the ensemble (or sub-ensemble) behavior of the system on each of the local dependencies. An example of this with protein folding: one could break the protein into a hierarchy of structures. There would be the fine scale structures (e.g. local interactions) and the large scale structures (e.g. secondary or tertiary structure interactions). To find the most parsimonious time course trajectory, one would assign some sort of manifold to each of the levels of this system, and integrate out the effects of each level on the lower levels, until the true time course trajectory of each atom could be predicted based on the entire ensemble h-grammar, implemented in an m-language.
One could also do this for gene networks, for the structure of membranes, or for the formation of any sort of higher order structure in biology or nature, as long as a well-defined mechanistic and stochastic process could be defined upon it. The key aspect of this process would be to optimize for large ensembles of heterogeneous objects, where the local and global interactions are not obvious; but these are the most interesting systems we encounter in nature.
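The "integrate out the manifold" idea above can be sketched in a toy form (every function and prior here is a hypothetical stand-in, not a protein model; a circle stands in for the sphere): a single global coordinate θ summarizes the whole ensemble, each local variable's conditional depends on θ, and θ is marginalized away like a nuisance parameter.

```python
import numpy as np

rng = np.random.default_rng(2)

def local_conditional_mean(theta, i):
    """Mean of local variable i given the global manifold coordinate theta."""
    return np.cos(theta + 0.1 * i)  # assumed form: each unit reads theta with a phase shift

def marginal_mean(i, n_samples=100_000):
    """Integrate the global coordinate out by Monte Carlo over its prior."""
    theta = rng.uniform(0, 2 * np.pi, size=n_samples)  # uniform prior on the circle
    return local_conditional_mean(theta, i).mean()

# With a uniform prior, the collective effect averages out to ~0 for every
# local variable -- the manifold has been "integrated out".
print(round(marginal_mean(0), 3))
print(round(marginal_mean(5), 3))
```

A non-uniform prior on θ (a system that spends most of its time in one region of the manifold) would leave a residual imprint on each local conditional, which is exactly the ensemble-to-local effect the text is after.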

Another incredibly exciting possibility would be to have self-optimizing substrates that would take the form of the h-grammar they were operating on. The trouble with many computational problems is that we have to restrict ourselves to a very specific physical computational architecture. Yet this is not the end-all when it comes to computation (or the directed realization of various states based upon some suite of criteria). We could tackle various computational questions (such as large scale modeling and optimization of heterogeneous h-grammars) much more efficiently by having computational substrates that mirror the structure of the grammars themselves.

Essentially, human beings are hyper-grammars implemented in a particular meta-language. Yet there are also a myriad of h-grammars surrounding humanity, and flowing in and out of every one of us. If the code of the m-language is broken so as to better understand the h-grammar, there will be a meta-∞ experience. A realization of a Rosetta stone of the adjacent possible, leading deeper down the rabbit hole. Leading beyond all thought, and beyond all possible comprehension. Weird how god sneaks in, in the strangest places.

Monday, September 17, 2007

Meta-meta-meta-meta....language

It is interesting how functional constraints imposed by the mere form of objects of interest can give rise to novel characteristics. Yet language sets the upper bound on what one can communicate about novel characteristics. And what makes a characteristic novel anyway? If I flip a quarter a million times, any particular outcome (when considering the entire string of flips) will be highly unlikely. But we couldn't care less about that unlikeliness; what we truly care about is whether there is some deeper pattern or organization to the string of 1's and 0's. Given an apparently random string of 1's and 0's, how can we tell if it is truly random? What is randomness? This is a deep question in computer science, relating to the shortest program that produces a particular output (e.g. a string of "random" 1's and 0's).
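That shortest-program length (Kolmogorov complexity) is uncomputable, but a general-purpose compressor gives a rough, practical upper bound on it: a deeply patterned string admits a short description and compresses well, while a patternless one does not.

```python
import random
import zlib

random.seed(42)
patterned = b"01" * 500          # 1000 bytes of pure repetition
noisy = random.randbytes(1000)   # 1000 pseudo-random bytes

# Compressed size / original size: a crude stand-in for description length.
ratio = lambda s: len(zlib.compress(s, 9)) / len(s)
print(round(ratio(patterned), 3))  # far below 1: a short description exists
print(round(ratio(noisy), 3))      # around 1: no shorter description found
```

The test is one-sided, which mirrors the underlying theory: good compression proves structure, but failure to compress never proves true randomness — the pattern may simply be one this compressor cannot see.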

It seems likely that in the space of possible strings of 1's and 0's, representations of ourselves and our reality(ies) exist in an infinite multitude of forms. This is both terrifying and awe-inspiring. In fact I would argue that if power sets of infinities (ad nauseam) exist, then reality is much weirder than we could even begin to imagine; i.e. the holes and bridges that actually exist across all relations and non-relations are infinitely more complex than we can even begin to begin to... understand.

How does one go about constructing increasingly powerful languages, that can identify all (or at least most) salient characteristics of the objects they operate on? There is an upper bound on how powerful a language can be (i.e. it can't evaluate the truth of its own statements in finite time). But that doesn't mean we can't build languages that more accurately mirror the form of the objects they operate on. Say we have some complicated web of causal interactions with noise (protein folding, gene networks, membrane dynamics, any complicated ensemble process...). We want to talk in meaningful ways about the system. All we have to do is identify all conditional independencies in state-space, then all meta-conditional independencies (e.g. conditional independencies between groups of variables), then all meta-meta... (groups of groups), etc. These may also be multilinear groups of conditional independencies (i.e. we may have hyper-(n)-graphs). This will parse the system into the relevant hierarchies. From this highly complex hyper-graph, we can develop qualitative state-space analysis techniques, essentially cutting up the state space based on the structure of conditional independence. Every level of conditional independence corresponds to some higher order structure we cannot see merely at the bottom level, but which may play an important role in the dynamics/behavior of the system. The most interesting cases will be where the meta-* conditional independencies end up feeding back down to the bottom of the hierarchy (and vice versa), thereby partitioning state-space in increasingly complex ways. (Feedback can be used to break symmetries in state space, including symmetries on higher order structures.) We can then assign probability to entire chunks of state space based on the inferred global structure of the system in question.
This probability assignment will allow us to make statements about the current and future state of the system, couched in the noisy nature of our understanding of the system in question. We can then operate on this hypergraph embedded in some nonlinear manifold. We can ask questions like, if I couple these subsystems, what will happen to the global dynamics of my system? Since we know all the conditional independencies, we can immediately identify which structures any operations will affect, and then it should be possible to focus on just those structures to elaborate what the effects will be.
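That last point — knowing the independence structure tells you immediately which parts of the system an operation can touch — reduces to reachability on the dependence graph. A toy sketch (the graph and variable names are hypothetical):

```python
from collections import deque

# Two independent subsystems: A -> B -> C, and D -> E.
# An edge means "directly influences"; absence of a path means independence.
edges = {
    "A": ["B"], "B": ["C"], "C": [],
    "D": ["E"], "E": [],
}

def affected(start):
    """All variables downstream of an intervention at `start` (BFS)."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in edges[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return sorted(seen)

print(affected("A"))  # ['A', 'B', 'C'] -- the D/E subsystem is untouched
print(affected("D"))  # ['D', 'E']
```

Perturbing A can never reach D or E, so analysis of any operation on A can safely ignore that whole subsystem — which is exactly why the inferred independence structure makes questions about coupled dynamics tractable.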

To make each level of meta-conditional independency tractable, there will need to be clever notational and semantic rules. A language constructed in this framework would have the advantage of being both general and specific, which means that one could tailor the language well for multiple systems, then compare each of these specific languages to look for isomorphisms. More on this when I'm not crunched for time...

Sunday, September 02, 2007

Jorval and brey

Quix core, cithe
Chart en racheq
Deaor brithel danr the
Nor falk and corl

Quarl, cae jalp
Pose sh., (<
Org and ynor
Wre colp ja mor ral tanta
o mley un-d a breol

Mae, moorsh laeon grysh
Youel, woer lofeirl
em
Ne
doorway