Friday, September 28, 2007

New machine

I've received the new machine I ordered and managed to get it installed and working. It has been working great so far. Part of the challenge with this machine was getting Linux and Windows into a dual-boot configuration on a RAID-0 array. Well, after puzzling for a day or two, I managed to get it done.

I installed Windows first. For this, just follow the steps in the manual. You'll need a single floppy disk with the RAID drivers on it; then you allocate a portion of the RAID array to Windows, and the rest of the installation is similar to what you are used to.

Linux requires some more work to install. I use Ubuntu and booted from the regular Live CD. Then I first followed parts of this guide:

https://help.ubuntu.com/community/FakeRaidHowto

But I did not follow it all the way through the software installation. One very important step is running the mkswap and swapon commands, as skipping them will otherwise stall the regular installation. From there, I actually continued with the LiveCD installation steps of this guide:

http://ubuntuforums.org/showthread.php?t=464758

So the use of gparted is totally unnecessary. I partitioned using dmraid and fdisk, formatted with mkfs, and ran mkswap and swapon as described in the guides. Then I immediately started the installation process and finished as in the second guide.

My total system has an Intel E6850 dual-core CPU, 2x1 GB of low-latency 667 MHz memory in a dual-channel setup, and two 10,000 rpm WD Raptor drives of 75 GB each in a RAID-0 configuration.

Sunday, September 23, 2007

How the Mind Works

I bought the book "How the Mind Works" by Steven Pinker. It is a very interesting book on the evolution and operation of the mind. You should of course not expect a book detailing the exact workings, since those are still unknown, but rather a series of philosophical reflections on the topic.

Reading the book so far, I can see how the invention of the computer makes people believe that at some point the mind can be replicated in a machine. But I have some serious doubts about this.

I think a couple of things will be very difficult to implement in machines with currently developed technology (since computers are necessarily "formal" machines that operate on formal symbols and need deterministic results):
  • The mind is strongly goal-driven. A computer is not.
  • The mind does not compare formal symbols; its symbols appear to be very fuzzy. We compare and develop rules in our minds that match potential elements against other symbols we perceive or think of. (Is learning the development and extension of those rules?)
  • The mind follows a goal and extracts, from our memory/experience, relevant symbols for further processing. This can even result in a learning exercise (new rules?). The key point here is that only relevant memories are extracted, and at an enormous pace. So how does a memory extractor know beforehand what is relevant and what is not?
These are already three large problems that a software engineer would have to face and solve before any true intelligence is remotely possible. As a side note on neural networks, before we get there... some critics have suggested that only after large amounts of training (100,000 cycles?) does a network show the expected behaviour. The human mind, however, needs far fewer iterations to pick up a new ability or skill.
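
To make the fuzzy-matching and rule-forming idea a little more concrete, here is a toy sketch in Python. It is purely illustrative and entirely my own invention, not a model of the mind: "symbols" are sets of features, a match is a degree of overlap rather than a formal comparison, and a new "rule" is stored whenever a relevant match is found.

    # Toy sketch of fuzzy matching and rule formation (purely illustrative).
    # Symbols are sets of features; a match is a degree of overlap,
    # not a formal, exact comparison.

    def similarity(a, b):
        # Jaccard overlap between two feature sets: 0.0 (disjoint) .. 1.0 (equal).
        return len(a & b) / len(a | b)

    memory = {
        "dog": {"furry", "four-legged", "barks"},
        "cat": {"furry", "four-legged", "meows"},
        "car": {"metal", "four-wheeled", "engine"},
    }
    rules = {}  # "learned" associations between perceptions and memories

    def perceive(features, threshold=0.4):
        # Extract only memories that are relevant enough to the input...
        candidates = {name: similarity(features, feats)
                      for name, feats in memory.items()}
        relevant = {n: s for n, s in candidates.items() if s >= threshold}
        # ...and store a new "rule" linking this perception to the best match.
        if relevant:
            best = max(relevant, key=relevant.get)
            rules[frozenset(features)] = best
        return relevant

    print(perceive({"furry", "four-legged", "purrs"}))  # dog and cat score 0.5, car does not appear
    print(rules)  # the new "rule" links this perception to its best match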

Hence my point above about rule-based networks. It is as if the memory extractor picks out certain memories (let's say fuzzy mentalese symbols) that match what we are perceiving or comparing, out of which a new rule may be developed and stored in our memory for further processing.

It would have to be a very intelligent machine that can develop rules and even represent fuzzy mentalese symbols internally. We tend to always represent items as formal elements, since these are ultimately deterministic. So, in a way, our communication with the machine never gets translated into an "inner" representation in the machine, but always into a formal representation that makes it easier for us to analyze.

Monday, September 17, 2007

Cognitive Science and Artificial Intelligence

In another article I discussed some of my personal perspectives on how the mind works. I've been reading the book "Introduction to Cognitive Science" while in Paris, sitting in one of the brasseries near Gare de l'Est. Not exactly the most picturesque of places, but any other place would probably be distracting :).

It's a very interesting book with lots of different views, perspectives and theories. It makes clear that current theories consider three different levels of analysis, and these have direct analogies with computers. The lowest is the hardware level, where the researcher attempts to understand the mind at the level of synapses and biology (for a computer, the level of the circuit board, voltages, currents and silicon components). The middle level looks at components: where and how different components of the mind work together to improve our understanding of the world and to contextualize input. The highest level is the functional level, which describes the representation of meaning and the end results of the overall functions.

All levels are very important. The highest level is where philosophy is most helpful; the lowest level is where biology and technology can measure. One school of thought suggests that the mind is some kind of associative network that is activated by thoughts themselves (or by recollections from long-term memory).

This, to me, somehow suggests that for Artificial Intelligence to really succeed, it must spend time re-implementing the very basics of computing, going the route of Haskell, Erlang and Stackless Python.

To make a clear distinction: the architecture of a Pentium processor uses a stack by default. This is temporary storage in memory, reserved for the processor, that is used to "track back" into the main line of a program. A program is generally written so that it becomes more specific with each function call: a generic function calculates discounts for an account as part of a larger process, a called function retrieves the account, and another called function retrieves the applicable discounts.
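
As a minimal sketch of that kind of hierarchical, stack-based decomposition, here is a small Python example. The account and discount data are made up purely for illustration; each function delegates to a more specific one, and the call stack is what lets control "track back" to the caller.

    # Hypothetical example of a stack-based, hierarchical program:
    # each function delegates to a more specific one, and the call stack
    # "tracks back" to the caller when the callee returns.

    ACCOUNTS = {42: {"name": "ACME", "type": "wholesale"}}   # made-up data
    DISCOUNTS = {"wholesale": 0.15, "retail": 0.05}

    def get_account(account_id):
        # Most specific step: look up the account record.
        return ACCOUNTS[account_id]

    def get_applicable_discount(account):
        # Another specific step: find the discount for this account type.
        return DISCOUNTS[account["type"]]

    def calculate_discount(account_id, amount):
        # Generic step: orchestrates the more specific calls below it on the stack.
        account = get_account(account_id)
        rate = get_applicable_discount(account)
        return amount * rate

    print(calculate_discount(42, 100.0))  # roughly 15.0 for the wholesale account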

Organizing programs this way allows us to keep them "in our heads". The complexity of a network is much harder for us to resolve than, for example, a hierarchical tree. One suggested reason for this is the limited amount of working memory we can dedicate to solving a small problem.

In my imagination, it's as if we have 3-4 CPU registers, a limited L2 cache and a strange kind of memory. This memory is not addressed through external "locators", but gets "triggered" by input and starts feeding our thought system.

One of the most important things to consider is that AI could benefit from programming without stacks: stackless computing. Look at Stackless Python for some examples. Programming without a stack opens up significant differences and possibilities:
  • Programs can run without a pre-determined goal. That is interesting, because programs normally run and act in a deterministic way: we program them to behave systematically and consistently. In the absence of a stack it becomes theoretically possible to introduce non-consistent behaviour (which might be a prerequisite for true intelligence).
  • A typical batch program architecture organizes some kind of processing loop that always performs the same hierarchically organized routines. Without a stack, and with different architectures, it is possible to imagine a system that keeps a certain "memory" of what it did before, possibly allowing events to be interpreted in context.
  • A program continues by passing the address of one function to another. This can be a function that complements the called function, or it can be the function to process next.
Stackless computing is significantly harder to architect and program than stack-based computing. The programs more closely resemble a kind of network, and behaviour is no longer (necessarily) deterministic, which is exactly what you need to solve a given problem in a consistent manner. Neural networks used in Artificial Intelligence are examples where patterns are identified, but in my imagination it is impossible to build intelligent systems from neural networks alone. A small sketch of the continuation-passing idea follows below.
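
As a minimal sketch of what "continuing forward" instead of unwinding a stack can look like, here is a toy continuation-passing example in plain Python. The function names and data are made up for illustration, and Stackless Python itself uses tasklets and channels rather than this exact pattern; the point is only that each step hands control to the next function instead of returning a result to a caller.

    # Toy continuation-passing style: instead of returning to a caller
    # (unwinding a stack), each step hands its result to the next function.
    # A simple trampoline drives the chain so no call stack builds up.

    def retrieve_account(account_id, next_step):
        account = {"id": account_id, "type": "wholesale"}   # made-up data
        return next_step, (account,)                        # continue forward

    def retrieve_discount(account, next_step):
        rate = 0.15 if account["type"] == "wholesale" else 0.05
        return next_step, (account, rate)

    def apply_discount(account, rate):
        print(f"account {account['id']}: discount rate {rate}")
        return None, ()                                      # end of the chain

    def trampoline(step, args):
        # Keep calling the next continuation until there is none left.
        while step is not None:
            step, args = step(*args)

    # Kick off the chain: each function decides what runs next.
    trampoline(retrieve_account, (42, lambda acc: retrieve_discount(acc, apply_discount)))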

I started this story with three distinct levels for analyzing behaviour. The most basic level is the most important, since it's the level where things execute and exchange information. If we attempt to run our functions on incompatible hardware, we're not likely to get good results. Can we redesign the computer not to use stacks, but to require programs that behave as different kinds of networks, are compiled to continue execution forward, never unwind a stack, and in the process gather and structure their memory and other functions to develop a sense of context? It might be the key to real intelligence :)

Friday, September 07, 2007

Bye Bye Brasil...

I'm relocating for some time and going back to Holland. The reasons mostly have to do with family and career opportunities. Besides that, it's a question of the ability to do a Master's, the working conditions, the violence, and some of the most appalling cases of corruption and abuse of public services/government the world has ever seen :\ (and, in my opinion, a general lack of applied common sense and/or lack of action). Brasil (the people, the judiciary, the democracy) will have to throw out a good lot of the incompetent or thieving personas that somehow got their positions before it can move forward.

I'm already looking around for opportunities and have some interviews planned. Later on, I'll have to see how things fit together. Project Dune is still going forward, although the forums could use a bit more traffic. I'm reinstalling and moving between computers at the moment, so editing and other activities may be a bit difficult.

Saturday, September 01, 2007

Why quality plans should use wikis

I'm writing up a lot of information in the Project Dune wiki and am starting to realize the potential and importance of the wiki itself. I have been browsing wikis for some time, but this is the first time I am actually editing a lot of pages.

The Project Dune wiki is about software quality and has two main purposes. It documents the project and it documents consolidated knowledge about quality.

As I go through the pages, I experience the difference between a site with static information maintained by a small number of editors and a site with freely editable information and only a couple of access constraints. When a reader can become an editor at the touch of a button, it gives a feeling (and real potential) of participation. This is important for people, and it helps companies generate a sense of identity.

If a company were to use a wiki to document its quality plans, including the discussion and talk extensions, consider the difference in attitude the engineers would have towards the quality policy and plans (compared with companies where only managers own the policy and dictate it 100%). The point is not so much that engineers must have contributed to the wiki; the difference is the possibility to suggest changes to the policy immediately, on the record and in public.

But I don't think the wiki alone is immediately sufficient. I've worked at some companies that have a very archaic view of the quality plan/policy. It is probably comparable to code crush, where a developer becomes highly defensive against any proposed changes to the code and may get furious on finding out that someone else messed about in the implementation. Even though the statement is often made that it's totally open and we're willing to change, that doesn't necessarily hold in practice.

It shouldn't be like that. We should consider the quality plan and policy an approach adopted and documented by all participants (certainly in the case of a wiki). In this view, the quality manager, other managers and decision makers become stewards of this information. Their role isn't to judge content; it's to take care of it and to continuously ensure that the group as a whole steers the quality plan and policy towards better definitions.

Consider opening up your policy and plans to your internal engineers. Trust them to apply common sense (or make sure they receive adequate additional training to make better decisions). You might also want to back your wiki with discussion forums. That way, any doubts about the policy can be cleared up by other participants, with the added benefit that conversations are documented and that knowledge is retained.