Saturday, November 29, 2008

Singular Value Decomposition

An extremely nice tutorial about Singular Value Decomposition shows how you can extract pretty specific information from a bunch of data. I think SVD is very interesting for analyzing data from different perspectives: one perspective is the product (how close is one product really to another?) and the other is the customer (how close is customer A to customer B?).
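To make that concrete, here's a minimal sketch with NumPy. The customers, products and ratings are entirely made up for illustration; the point is that after a truncated SVD, customers and products both live in the same low-dimensional "concept" space, where closeness can be measured directly.

```python
# Hypothetical toy example: rows are customers, columns are products,
# values are ratings. All names and numbers are invented.
import numpy as np

ratings = np.array([
    [5.0, 5.0, 0.0, 1.0],   # customer A
    [4.0, 5.0, 0.0, 0.0],   # customer B (similar tastes to A)
    [0.0, 1.0, 5.0, 4.0],   # customer C (very different tastes)
])

# SVD: ratings = U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(ratings, full_matrices=False)

# Keep the top-2 singular values: each customer (row of U scaled by s)
# and each product (column of Vt) now lives in a 2-dimensional space.
k = 2
customers_2d = U[:, :k] * s[:k]   # customer coordinates
products_2d = Vt[:k, :].T         # product coordinates

def cosine(a, b):
    """Cosine similarity: close to 1.0 means 'pointing the same way'."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(customers_2d[0], customers_2d[1]))  # A vs B: high
print(cosine(customers_2d[0], customers_2d[2]))  # A vs C: low
```

The same `cosine` call on two rows of `products_2d` answers the product-to-product question from the other perspective.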

The problem starts to occur when people change their preferences. People generally go through phases (well, not all of us, but many!), and these are accompanied by different needs and different preferences. For this reason, A.I. designers need to understand that historical information has only limited value. Temporal trends rarely surface in such analyses, but I'm sure research is being done in this area to further contextualize data in the dimension of time.

I'm likely to speak at Sogeti Engineering World 2009, yet to be confirmed. My presentation will be about Artificial Intelligence and how it applies to business. Even now, businesses at the operational level are getting more interested in making the most of their data. They have good knowledge of how their business works and who (in general) their customers are, but they cannot quantify their customer base from different perspectives.

My presentation will make clear why Artificial Intelligence matters for cases like response modeling, online recommendations and retention modeling, and it will explain to engineers how they can apply certain techniques (borrowed from libraries) to their own problems at hand.

While most people think of A.I. as some kind of black magic or silver bullet, I think it's important to realize that it's just juggling with numbers (at the moment). Over the past 50 years, A.I. has expanded into a number of different territories. One territory relates to our "explicit knowledge" about things: rule-based systems and Prolog. The other relates to "tacit knowledge", or what we know without being able to tell how we know it. It just works/is.

Neural networks, SVD and Kohonen maps are mathematical constructs built around the idea of tacit knowledge. We can't really trace the path from input to output; we just know it works. Languages like Prolog, on the other hand, work by executing basic rules or truths and demonstrate how the real world would act.
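The "explicit knowledge" side can be caricatured in a few lines: a toy forward-chaining rule engine in the Prolog spirit. The facts and rules below are invented for illustration, but the mechanism (keep firing rules until nothing new can be concluded) is the real one.

```python
# Toy forward-chaining rule engine: "explicit knowledge" encoded as rules.
# All facts and rules here are invented for illustration.
facts = {"has_feathers", "lays_eggs"}

# Each rule: (set of premises, conclusion)
rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "can_swim"}, "is_waterfowl"),
]

def forward_chain(facts, rules):
    """Keep firing rules until no new conclusions appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))  # derives "is_bird" but not "is_waterfowl"
```

Unlike a neural network, every conclusion here can be traced back to the exact rules and facts that produced it.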

Our minds continuously sway between these two different areas of knowledge. We infer a lot of information just through observation, sometimes supported by external teachers. But we also judge observations against truths, or rules, that we have learned.

Many solutions in A.I. have depended on the combination of different techniques to offer the best result. One that seems to work well now, for example, is spam assassination. SpamAssassin, now an Apache project, is one of the most popular spam-fighting tools for email servers. It doesn't depend on a single scheme to rule out spam, but combines many of them in a scoring model. Each technique either restrains or backs up another.
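The combination idea can be sketched in a few lines: each test contributes a weighted score, and the sum is compared against a threshold. The tests, weights and threshold below are invented for illustration; SpamAssassin's real rule set is much larger and differently tuned.

```python
# Sketch of SpamAssassin-style score combination. The tests, weights and
# threshold are invented; the real rule set differs.
TESTS = [
    ("keyword",   lambda m: "viagra" in m.lower(),    2.5),
    ("shouting",  lambda m: m.isupper(),              1.0),
    ("bangs",     lambda m: m.count("!") >= 3,        1.5),
    ("known_ref", lambda m: "invoice #" in m.lower(), -2.0),  # backs off
]
SPAM_THRESHOLD = 3.0

def spam_score(message):
    """Sum the weights of every test that fires. One weak signal alone
    stays below the threshold; several together push the score over it."""
    return sum(weight for _, test, weight in TESTS if test(message))

def is_spam(message):
    return spam_score(message) >= SPAM_THRESHOLD

print(is_spam("BUY VIAGRA NOW!!!"))   # several signals combine
print(is_spam("Meeting at 3pm?"))     # no signals fire
```

Note the negative weight: a test that recognizes legitimate mail restrains the others, which is exactly the "restraining or backing up" behaviour described above.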

The very interesting question here is that in computers, we tend to use either RBS (Rule-Based Systems) or other techniques like Neural Networks or Bayesian Belief Networks to solve a certain problem. One system is invoked before the other, as in a type of hierarchy. If we assume that the human brain only has neurons at its disposal, how can all these different techniques be applied in unison at the right time and moment? How do we know which strategy to rely on?

Tuesday, November 11, 2008

Linux Intrepid Tricks

I've upgraded to Intrepid recently and just two days ago, my system collapsed. For some reason, while opening a new tab in Firefox, the entire system just stopped functioning. No terminal, no Shift+F1, no login... So I reset, expecting things to resolve themselves. Naturally, the reboot entered "fsck", which found a number of errors. However, I couldn't leave the machine working on that since I had to leave. In the evening, I tried things again, but it got slightly worse. It took an hour for a single fsck run, with loads of messages in between. By then, I was thinking that I could reduce the time for fsck by removing a dump of one of the DVDs I own. Bad idea. As soon as I restarted and went into rw mode, I got GRUB error 17 on restart. That means the boot loader can't even resolve the partition to boot from.

I did have a live CD lying around somewhere, but that was not of great help. "cfdisk" absolutely refused to run, and I could not mount from a terminal in the live CD

( mount -t ext3 /dev/mapper/isw_xxxxxxx /mnt/target )

without getting "superblock errors", "partition could not be recognized" and those sorts of things.

And from within "grub", it couldn't even see /boot/grub/stage1; "setup (hd0)" didn't work either.

Well, searching around on the internet regularly turns up the suggestion to use the "grub" trick, or that the root (hdx,y) setting is incorrect, but my problem clearly was a hosing of the entire file system. Or so I thought.

Well, since I am running from a fake RAID array, I needed to remember to install "dmraid" (Intrepid has this by default now; in Feisty it needed to be activated through sources.list first, then apt-get update, then installed). Then run "dmraid -ay" to get the /dev/mapper devices to work.

It makes no sense to mount a RAID-ed partition directly through /dev/sda2; you should remember that as well :). I couldn't find very good pointers on the internet, but eventually I decided to finish where single-user mode left off: fsck.

root@recife# fsck -y /dev/mapper/isw_xxxxxxxx02

eventually ran the entire file system check and resolved looooads of errors. Mounting this on /mnt/target did work afterwards. I could also sort of boot into the system, but because /etc was gone, it wasn't very helpful :). So the entire system got hosed, but from the live CD I could rescue a couple of important files and put them onto different systems or mail them around. Thus, I didn't lose my university assignments and what have you, but the installed system itself is a loss.

I've now re-installed Intrepid from the netboot CD (downloaded from the internet) and that worked in one go. There's a guide on the internet on how to install it for fakeraid systems; it's a lot easier. GRUB however still has problems getting things organized, so you should pay heed there. Also, it seems that "update-grub" doesn't work properly when menu.lst does not exist. It does attempt to ask you whether the file should be generated, but that doesn't work well. I ended up creating a file containing a single line "y" and then adjusting the /usr/sbin/update-grub script (line 1085).

On reboot, things already worked fine, but I like to install the NVIDIA restricted-module drivers for better performance. The screen resolution for my Iiyama monitor was still problematic though; it only got to 1024x768. Eventually, I ran nvidia-xconfig, which put more cruft into xorg.conf, then restarted X ( nohup /etc/init.d/gdm restart ), after which I had more options to choose from.

Right now, I think I've more or less entirely restored the system I had, so I can carry on hacking and doing things. For some reason, the old system had been slowing down significantly. And this time there's not even a heavy registry to be supported.

Monday, November 03, 2008

Mental Causation

An old philosophical problem is the problem of mental causation. The question relates to how a mental event can cause physical events, or whether mental events are the results of physical events. In my previous blogs, I once posted about how clever we think we are. This post is sort of an extension of that. In that post, I pointed out that we consciously often consider ourselves more intelligent and better than other species, but our actions are not necessarily that much better in regard to action -> consequence. It's just more words and more fluff. In short, we easily believe that we're radically analyzing a certain situation, considering it from every angle, objectively, but when one uses hindsight to analyze the situational developments later on, we often see that the original arguments were severely misguided or didn't have the intended effect.

In my studies, I'm now following courses on modelling. The A.I. classes are divided into a group following Collective Web Intelligence and another following Human Ambience. The latter requires understanding more about decision-making, well-being, psychology, sociology, altruism and so on. You wouldn't necessarily expect that from courses in A.I., but there you go.

It's intensely interesting. One of the courses today is about emergence, which I also blogged about before. Emergence is about simple constructs that act and interact in rather simple ways, which eventually produce a new model of behaviour at a higher level. Ants are the most common example: each individual ant follows a couple of simple rules, but the behaviour of the ant-hill overall is far more complex than the sum of the individual ants together.
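A tiny simulation shows the flavour. The model below is a toy version of the classic ant-bridge experiments: two equally good paths to food, and each ant simply prefers the path with more pheromone on it, then reinforces the path it took. All the numbers are invented; the point is that a collective decision emerges without any ant "deciding" anything.

```python
import random

def ant_bridge(steps=1000, seed=7):
    """Two equally good paths to food. Each ant picks a path with probability
    proportional to the square of its pheromone level (the quadratic rule
    amplifies small differences), then reinforces the path it took.
    Individually trivial rules; collectively, one path comes to dominate."""
    random.seed(seed)
    pheromone = [1.0, 1.0]                       # perfectly symmetric start
    for _ in range(steps):
        w0, w1 = pheromone[0] ** 2, pheromone[1] ** 2
        path = 0 if random.random() < w0 / (w0 + w1) else 1
        pheromone[path] += 1.0                   # returning ant reinforces
    return pheromone

p = ant_bridge()
print(p, "-> dominant share:", round(max(p) / sum(p), 2))
```

Even though both paths are identical, random early fluctuations get amplified and the colony "chooses" one of them: behaviour at the level of the colony that no individual rule mentions.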

You could consider the mind as not having any actual conscious thought at all. A not-so-inspiring idea is to think of ourselves as soul-less beings, within which just run a very large number of different physiological processes (100 billion neurons), shooting off electrical messages between one another whilst being impacted by a couple of hundred different proteins, which are messages from one organ to another. So we have no specific 'soul'; we're just like robots with very complex physiological processes, eventually yielding a certain behaviour that allows us to interact with others.

The ability of a neuron to form an electrical current is then the physiological level. Let's call this emergence level A. But forming this current together with a simple method for recognizing a previous pattern (neuron A firing and neuron B responding similarly because it has done so before, also known as the strengthening of a synapse) is a cognitive process, where it doesn't just become a process of firing electrical currents between neurons, but a more complicated process of responding to certain firing patterns. Let's call this emergence level B.
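The strengthening rule at level B can be caricatured in a few lines: the Hebbian idea that "neurons that fire together wire together". The weight, learning rate and number of repetitions below are arbitrary numbers chosen for illustration.

```python
# Minimal Hebbian learning sketch: the synapse from neuron A to neuron B
# strengthens whenever both fire together. All values are arbitrary.
def hebbian_update(weight, pre_fired, post_fired, rate=0.1):
    """Strengthen the synapse when pre- and post-synaptic neurons co-fire."""
    if pre_fired and post_fired:
        weight += rate
    return weight

w = 0.2
# A fires and B responds, over and over: the pattern gets 'remembered'
# as a stronger connection, not stored anywhere explicitly.
for _ in range(5):
    w = hebbian_update(w, pre_fired=True, post_fired=True)
print(round(w, 2))  # synapse is now stronger than before
```

Nothing here "knows" a pattern; the memory is just the changed weight, which is the sense in which level B emerges from level A.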

(We then need to take a couple of too-quick steps here, jumping to enormous assumptions and conclusions.) If we assume that thoughts somehow emerge from these patterns of firing neurons, then 'memory' together with some other 'machinery' for computing and predicting the results of actions could be seen as the basis of our behaviour. Behaviour in this definition is thus the ability to recognize, remember and predict future outcomes, and then act on those computations. The next level is our decision-making and behaviour: level C.

When you go one more level up, you get to the behaviour of a complete society. Remember the ants? For humans, you can develop similar models, because we have a model for our economy (where each of us acts as an agent) and models for certain criminological events, etc. The behaviour of society is made up of individual decisions at level C, but overall it might develop a new emergence level D: that of the collective.

The interesting part in this consideration is that mental processes aren't so much "spirited". From the Stanford Encyclopedia of Philosophy:

(1) The human body is a material thing.
(2) The human mind is a spiritual thing.
(3) Mind and body interact.
(4) Spirit and matter do not interact.

The above four propositions regard the mind as a very special kind of element, sort of like a merger of the soul with some physical abilities that the brain provides (vision, smell, motor control, etc.), while decision making, emotion, etc. are considered somewhat divine.

If we simply regard the mind as a number of computations that are biologically there, and thoughts and consciousness as the de-materialization(?) of certain cell assemblies becoming activated or not, then we can find ways to merge this blog story with certain theories about how DNA indirectly programs us and how we serve as carrier "agents" for the continuation of the DNA structure. In that sense, we are walking biological computers, continuously responding to our environment, learning from it, and through those processes becoming more efficient at propagating cultures of DNA.

One can wonder whether our consciousness is really that 'evolved' in the sense that it is the motor of all our cognitive processes, decisions and what have you. Are we guiding our actions and thought processes through our conscious 'participation' in this process, or is consciousness a reflection of the brain itself, which has basically already determined the best course of action and considered each alternative? In this latter idea, consciousness is more like an observation of "mental processes" that have already taken place or are about to take place. The difference here is that we must properly identify the CPU, memory and machine, and not point at the monitor to describe "the computer". In this analogy, consciousness is the image on the monitor, a reflection of what goes on in the computer, but it should not be mistaken for the computer itself, which is generally more out of view, housing the CPU and memory.

What this entire story does not explain, though, is the element of attention and how we are able to 'consciously' execute certain actions or pay attention to important things. Is that just a matter of directing more attention and execution power to physical events? If it is, then who's instructing our machine that something is important and should be paid attention to? Is the brain so self-preserving and intelligent that it controls itself? Or is there an externality involved which directs the attention of the machine? Or are we thinking too much in hierarchical terms, and is the entire problem of decision-making a problem of weighing cost against benefit and dealing with direct influences before more indirect ones?