Game-independent AI

After reading about the basic building blocks of machine learning, the part about thinking of how our brain works starts again. Reading that people have restarted mentioning exploration-based methodologies (I don't know if they mean exploration as done in reinforcement learning), and while starting to read the interesting topics at http://www.gvgai.net/, I begin assembling partially learnt topics in my mind: LSTMs, SOMs, Hebbian connections, Bayesian belief networks, CNNs, the general feed-forward structure, and then again I stumble upon crude thought experiments on how it all works in the case of our consciousness. This is just at the philosophical level, not a real data-backed scientific process; for instance, papers analysing the similarity of our visual system to CNNs (or other detailed analyses) should be checked before talking about any hypothesis. But it is interesting to do thought experiments with this toolset.

Game AI systems that don't depend on a specific game have, as I read recently, been a topic in newsletters, and that makes me wonder how that's possible. Let's start by thinking about how perception from our fingertips feels. What first comes to mind, in these topics, are SOM-like regions connected to other parts. Unrelatedly, to exemplify that much of this wiring comes from evolution and is present from birth, I would give the example that our sense of space and 3D has inherent wiring for perspective projection: some blind people also add perspective projection when asked to draw buildings. Let's think of our fingertips touching an edgy surface and a flat surface, and how the 3D sense of the touched object is inferred. For direction sensing, the following article should be read first, to get a little less philosophical: http://www.tandfonline.com/doi/abs/10.1088/0954-898X_8_2_006. Our sense of 3D, our sense of dimensions and the Cartesian system, how all that is handled is one topic I wonder about.
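As a side note, the perspective projection mentioned above has a very compact mathematical form. Here is a minimal pinhole-camera sketch; the function name and the focal-length parameter are my own illustration, not taken from any source above:

```python
def perspective_project(point, focal_length=1.0):
    """Project a 3D point onto a 2D image plane with a pinhole model.

    Assumes the camera sits at the origin looking down +z. The
    focal_length parameter is a free choice for this toy example.
    """
    x, y, z = point
    if z <= 0:
        raise ValueError("point must be in front of the camera (z > 0)")
    return (focal_length * x / z, focal_length * y / z)

# Two corners of a building edge: the farther one lands closer to the
# image centre, which is exactly the foreshortening a drawing shows.
near = perspective_project((1.0, 2.0, 2.0))
far = perspective_project((1.0, 2.0, 8.0))
print(near)  # (0.5, 1.0)
print(far)   # (0.125, 0.25)
```

The interesting part is how little machinery foreshortening needs: a single division by depth, which makes the "inherent wiring" hypothesis at least plausible.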
Most probably the Cartesian system comes inherently from evolution, and similarly our 3D sense (e.g. the sense of the geometry of 3D shapes in the mind) also comes hardwired. But how is it handled? Our Cartesian system is very accurate and covers a wide grid; our sense of dimensions and geometry is similarly very accurate. For instance, we can close our eyes and imagine a wireframe scene continuously extending, with 3D objects on it (a tree, some houses, etc.). Somehow our attention mechanism's RNN-like window, when walking over that scene with eyes closed, can generate a smooth feeling of continuity, of reality. They say attention is instantaneous and flickering, so that continuity must be just a feeling. Let's think through this thought experiment of scene generation in detail: how are those 3D object samples generated, from the schemas of objects we see and learn in the world to that auto-generated scene in the mind? For me the most cryptic part is how that extremely accurate Cartesian coordinate system is handled. Think of a wireframe landscape in your mind; you can extend it along the surface to infinity. What kind of structure enables this? Or take a 3D object that you rotate in this wireframe scene: how is that handled by our brain? If it were a 3D SOM, how would we handle texture mapping? Or zooming in and out of this structure? And the culling of visible spaces in our mind's 3D model, how would that be done, if it were a 3D SOM? We are just talking hypothetically. If it had been learnt through probabilistic mapping trials onto 3D shapes during infancy, that would still feel like a weird route; most probably it is hardwired through evolution. Do our inherent pattern-matching and interpolation capabilities give this feeling of smoothness to the 3D model we picture in our minds?
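To make the "rotating and zooming an object in a Cartesian frame" part concrete, here is what those operations cost as plain linear algebra — a minimal sketch with rotation about the y axis only; the names and the example edge are my own illustration, not a claim about how the brain represents anything:

```python
import math

def rotate_y(point, theta):
    """Rotate a 3D point about the y axis by theta radians."""
    x, y, z = point
    c, s = math.cos(theta), math.sin(theta)
    return (c * x + s * z, y, -s * x + c * z)

def scale(point, factor):
    """Uniform scaling, i.e. the 'zooming in and out' of the wireframe."""
    return tuple(factor * coord for coord in point)

# One edge of a wireframe cube, rotated a quarter turn and zoomed 2x.
edge = [(1.0, 1.0, 1.0), (1.0, -1.0, 1.0)]
transformed = [scale(rotate_y(p, math.pi / 2), 2.0) for p in edge]
```

In graphics, all of this (rotation, scaling, even the perspective step) collapses into one matrix multiply per point; the puzzle in the text is whether the brain has anything resembling such a uniform transform.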
(Still talking about a wireframe 3D model we dream up in our mind.) Thinking of logits and all that soup of possible outputs from the preceding computations, how is that feeling of smoothness established? The real world is actually full of flickering probabilistic definitions, yet we always have a smooth definition on the conscious side of our minds. It might be rooted in the CNN-like structure and pooling. So this thought experiment ended with my needing to read more neuroscience articles to see what is employed as the current mathematical models, specifically for the visual system. It seems the tools from courses only make up a mini toolset. For instance, our inherent face-recognition capability could have been an autoencoder with generalization done correctly, but that would be a small part of the whole. When we recognize and analyze faces, fMRI might help in understanding which part of the brain is responsible (this must have been studied before), but the underlying generalization mechanism would not be visible from fMRI. Every time I think about these topics, I feel that most things are inherently hardwired and we just learn to use them in major ways (what a poor low-level vocabulary I have right now :S).
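The pooling intuition above can be illustrated in a few lines: two jittery versions of the same stimulus, once max-pooled, collapse to the same stable summary. This is a toy sketch; the window size and the signals are made up:

```python
def max_pool_1d(signal, window=3):
    """Downsample by taking the max over non-overlapping windows."""
    return [max(signal[i:i + window]) for i in range(0, len(signal), window)]

# Two 'flickering' versions of the same underlying stimulus: the peaks
# are in different positions within each window, but pooling is
# insensitive to that jitter.
noisy_a = [0.9, 0.1, 0.2, 0.2, 0.8, 0.1]
noisy_b = [0.1, 0.2, 0.9, 0.8, 0.1, 0.2]
print(max_pool_1d(noisy_a))  # [0.9, 0.8]
print(max_pool_1d(noisy_b))  # [0.9, 0.8]
```

The point of the sketch: pooling buys translation invariance within a window, which is one candidate mechanism for turning flickering inputs into a stable percept.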

But there are other aspects, like our mathematical modelling capabilities being bounded by our own infrastructure (ah, I wish I could write this in Turkish or Turklish, because I can't translate it): we might end up unable to understand how evolution built up this thing that we observe with our own machinery. Since our geometric and analytic thinking is all defined by this evolutionary structure, only creativity might shed light on what the real mechanisms could be like, as is done in theoretical physics. But again, it must have come from evolution.

So this thought-experiment study ended with a need to read articles and books.

Some other mathematical meta-structure might be the framework here, since we can so easily move objects in this mental map. In evolutionary terms it seems like a structure formed by the visual system. So instead of thinking of basic things like 3D SOMs, one must go over things like:

https://arxiv.org/pdf/1704.02831.pdf before attending to this topic.
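For reference, a SOM in the sense dismissed above is nothing exotic; the Kohonen update rule fits in a few lines. This is a toy 1D map over 3D inputs — the names, learning rate, and neighbourhood width are my own illustration:

```python
import math
import random

def best_matching_unit(weights, x):
    """Index of the unit whose weight vector is closest to input x."""
    return min(range(len(weights)),
               key=lambda i: sum((w - v) ** 2 for w, v in zip(weights[i], x)))

def som_update(weights, x, lr=0.1, sigma=1.0):
    """One Kohonen step: pull every unit toward x, weighted by a Gaussian
    neighbourhood around the best-matching unit on the 1D map."""
    bmu = best_matching_unit(weights, x)
    for i in range(len(weights)):
        h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
        weights[i] = [w + lr * h * (v - w) for w, v in zip(weights[i], x)]
    return weights

random.seed(0)
# A 1D chain of 5 units, each holding a 3D weight vector.
weights = [[random.random() for _ in range(3)] for _ in range(5)]
initial_offset = sum(abs(w[1]) + abs(w[2]) for w in weights)
for _ in range(200):
    som_update(weights, [random.random(), 0.0, 0.0])  # inputs lie on a line
final_offset = sum(abs(w[1]) + abs(w[2]) for w in weights)
# The map's weights drift toward the 1D input manifold (y = z = 0).
```

A SOM preserves topology (neighbouring units end up with neighbouring weights), which is why it keeps coming up as a naive model for spatial maps, and also why it is too crude for texture mapping, zooming, or culling, as the text argues.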
