So I'm back...I think. Perhaps this time with a more realistic idea of how much involvement with the Creatures community I'll be able to manage with my time. The whole idea of artificial intelligence, and the question of how much AIs simulate real brains and consciousness, has inspired me to come back, so I'll mostly be focusing on research into that. I don't have time to document my casual play, although I might occasionally review a few breeds or metarooms as part of my "research". Although I am studying psychology, I must admit neuroscience/neuropsychology is not my area of focus, and I actually have a lot of beef with it because it seems overly reductionist, but that's a rant for another day. I am more interested in social and cognitive psychology: pretty much why people think and do what they do. I'm not as interested in how (the actual brain processes). But of course, learning the underlying processes is important too, and AI makes it much more interesting, so here I am, and I shall learn slowly. Did I mention recent health issues? I actually wanted to read some stuff Steve Grand wrote today, but as I started I felt like I had heavy weights attached to me, eh, so I'll have to focus on getting well first.
Anyway, thanks to Malkin, who sent me a message a while ago asking about stimulus generalisation (how the brain comes to associate a group of similar stimuli with an original conditioned stimulus), I got thinking about how the majority of what norns do comes down to conditioning. Their behaviour is learned by associating a stimulus (which becomes the conditioned stimulus) with a particular response. This association is formed by reinforcement (tickling), and when we don't want an association to form (such as eat elevator!) we use punishment (smacking). The use of reward and punishment to bring about desired behaviour is what we call conditioning. So when a norn gives a response to a particular stimulus (e.g. push elevator), it is because we taught them that the specific response (push) is a correct one to that specific stimulus (elevator). I hope that wasn't too confusing. I know that their behaviour is also influenced by other processes aside from conditioning, such as biochemistry and so on, but I'd think conditioning is the predominant cognitive process.
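To make the idea concrete, here's a minimal sketch of conditioning in code. This is purely illustrative and has nothing to do with how the actual Creatures brain lobes work: I'm just assuming a table of stimulus-response association strengths that rewards strengthen and punishments weaken, with the creature picking whichever response is most strongly associated with the stimulus it perceives.

```python
class ToyNorn:
    """A toy conditioning model: stimulus-response associations are
    strengthened by reward (tickle) and weakened by punishment (slap).
    All names here are made up for illustration, not from Creatures."""

    def __init__(self, stimuli, responses, learning_rate=0.25):
        self.learning_rate = learning_rate
        self.responses = responses
        # Association strength for every (stimulus, response) pair, initially neutral.
        self.weights = {(s, r): 0.0 for s in stimuli for r in responses}

    def act(self, stimulus):
        """Respond with whichever action is most strongly associated with the stimulus."""
        return max(self.responses, key=lambda r: self.weights[(stimulus, r)])

    def reinforce(self, stimulus, response, reward):
        """reward > 0 (tickle) strengthens the association;
        reward < 0 (slap) weakens it."""
        self.weights[(stimulus, response)] += self.learning_rate * reward


norn = ToyNorn(stimuli=["elevator", "carrot"], responses=["push", "eat"])
# Teach: "push" is a correct response to "elevator"...
norn.reinforce("elevator", "push", +1.0)   # tickle
# ...and "eat" is not.
norn.reinforce("elevator", "eat", -1.0)    # slap
print(norn.act("elevator"))  # → push
```

After a single tickle and slap, the "push elevator" association outweighs "eat elevator", so the toy creature pushes. Real conditioning (and the actual game) is far messier, with drives, generalisation, and decay, but the reward/punishment asymmetry above is the core of it.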
Now if AI is ever to really simulate the human brain, it would have to include other cognitive processes that cause the formation of opinions, preferences, worldviews, etc. This is not impossible, as it may just be a matter of putting in more algorithms. But what about things like EQ, or emotional intelligence? That plays a huge role in our thinking and acting too. And will AI actually be able to ponder? Work things out without being told to do so? I think these questions may come down to consciousness, which is a mind-boggling concept with no shortage of debate, so perhaps I'll explore that further and offer my own view; I'm not expecting to find "right" answers to something that has been debated for centuries.
I totally got sidetracked from the revamp part of this post! I'll be removing stuff like useful links and resources, since Creatures Caves is the one-stop shop for that kind of thing, and I want a new look for this blog. I'm still debating whether to remove some of the old posts, since they seem a bit random. I also might do a few arty things here and there, but I'll post those on CC. I'll put the graphics from the metaroom project (not that there's much) up there as well, and people can do whatever they like with them.