## transport needed

Yesterday, while walking home, I got whacked by a truck.

For part of the journey home, I walk the whole 2.9km distance of the Monaghan Bypass, a road which, despite costing a whopping great €26,000,000, somehow didn’t have the change left over to supply street lights along its length.

I was walking along, one foot brushing the grass verge and the other in the hard shoulder. I was only 100m or so into the walk when a sudden whack made me lose track of the page I was on in the Andrew Vachss book I was reading.

I was more annoyed than hurt – if I have enough light to read, surely the trucks have enough light to see me? Also, they have great big headlights and I don’t!

So anyway – I got home, checked my laptop to see if it was fine, hammered my optical mouse back into shape, and resolved to get transport.

Tomorrow, I’m buying a bike. An upgrade from walking. See, I’m sneaking up on the 21st century, but very slowly.

It’ll be a BMX, so I can use it in that skatepark that opened recently.

## learning how to learn

Yesterday’s attempts at ANN training seemed at first to be successful, but I had overlooked one simple but curious flaw – training was going on all the time, and each test was run 10 times. This meant that each neuron was trying different values all the time until it got the right one, and then it would be the next neuron’s turn (a bit of a simplified answer, but I don’t know how to describe exactly what was happening). This ended up causing the tests to look a lot more successful than they actually were.

This morning, I did a lot of work figuring out how to get around the problem. It turns out the problem is not with the neural network – that appears to be working perfectly. The problem is with the method of training.

Just like with people, you cannot just throw a net into a series of 26 tests which it has never seen before and expect it to learn it any time soon. For any particular neuron, 25 of the tests will have “No” as the answer, and it is too easy for the neuron to just answer “No” to everything and get a 96+% correct answer.

Instead, you need to start with just one test, keep trying until that’s right, then add another test, keep going until they’re both right, then add another test, etc.
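That staged idea can be sketched in a few lines of JS. This is not the actual network code – a hypothetical single perceptron with invented 4-pixel “letters” stands in for the real 26-letter net – but the curriculum loop is the point: a new test is only admitted once all the current ones pass.

```javascript
// Invented toy data: three 4-pixel "letters" (stand-ins for the real tests).
const tests = [
  { input: [1, 0, 0, 1], target: 0 },
  { input: [0, 1, 1, 0], target: 1 },
  { input: [1, 1, 0, 0], target: 0 },
];

function makeNeuron(n) {
  return { weights: new Array(n).fill(0), bias: 0 };
}

function output(neuron, input) {
  let sum = neuron.bias;
  for (let i = 0; i < input.length; i++) sum += neuron.weights[i] * input[i];
  return sum > 0 ? 1 : 0;
}

function train(neuron, input, target, rate) {
  const err = target - output(neuron, input);
  for (let i = 0; i < input.length; i++) neuron.weights[i] += rate * err * input[i];
  neuron.bias += rate * err;
}

// Staged curriculum: start with one test, and only add the next test
// once every test currently in play passes.
function curriculumTrain(tests, maxCycles) {
  const neuron = makeNeuron(tests[0].input.length);
  let active = 1; // number of tests currently in play
  for (let cycle = 0; cycle < maxCycles; cycle++) {
    let allRight = true;
    for (let i = 0; i < active; i++) {
      const t = tests[i];
      if (output(neuron, t.input) !== t.target) {
        train(neuron, t.input, t.target, 0.1);
        allRight = false;
      }
    }
    if (allRight) {
      if (active === tests.length) return { neuron, cycles: cycle + 1 };
      active++; // graduate: admit the next test
    }
  }
  return { neuron, cycles: maxCycles };
}
```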

Even that was not enough, though – it turns out that the capital letters “B” and “E” are very similar to each other, as are “H” and “M” (at least, in my sequence, they are).

I managed to improve the learning of the neurons by following rules such as these:

• If a test’s answer is “No” but the neuron says “Yes” (i.e. returns a value above 0), then adjust the neuron’s weights (correct/punish it).
• If a test’s answer is “Yes”, but the neuron is less than 75% certain of “Yes”, then adjust/reward the neuron.

In all other cases (neuron is certain of Yes and is right, or neuron is vaguely sure of No and is right) leave the neuron’s weights alone.
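The rules above boil down to a single decision function – sketched here with my own invented names and an assumed 0-to-1 certainty scale, not the post’s actual code:

```javascript
// Decide whether a neuron's weights should be adjusted for a given test.
// rawOutput: the neuron's raw summed output (above 0 means "Yes").
// certainty: how sure the neuron is of "Yes", normalised to 0..1 (assumed scale).
function shouldAdjust(targetIsYes, rawOutput, certainty) {
  // answer is "No" but the neuron says "Yes": correct/punish it
  if (!targetIsYes && rawOutput > 0) return true;
  // answer is "Yes" but the neuron is less than 75% certain: adjust/reward it
  if (targetIsYes && certainty < 0.75) return true;
  // otherwise (confidently right, or vaguely right): leave the weights alone
  return false;
}
```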

This has helped to avoid the problem where neurons get extremely confident of Yes or No and are hard to correct when a similar test to a previously learned one comes along (O and Q for example).

It’s still not perfect, but perfection takes time…

## letter recognition network

Last week, I wrote a neural network that could balance a stick. That was a simple problem which really only takes a single neuron to figure out.

This week, I promised to write a net which could learn to recognise letters.

demo

For this, I enhanced the network a bit. I added a more sensible weight-correction algorithm, and separated the code out into its own file (ANN code).

I was considering whether hidden units were required at all for this task. I didn’t think so – in my opinion, recognising the letter ‘I’, for example, should depend on information such as “does it look like a J, and not like an M?” – in other words, recognising a letter depends on how confident you are that the other candidates are right or wrong.

The network I chose to implement is, I think, called a “simple recurrent network” with stochastic elements. This means that every neuron reads from every other neuron and not itself, and corrections are not exact – there is a small element of randomness or “noise” in there.
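The wiring can be sketched like this (illustrative structure only, with my own names): each output neuron carries a weight for every pixel input plus every *other* output neuron, with its self-connection left out, and each weight correction gets a dash of random noise.

```javascript
// Build a net where every output neuron reads the pixel inputs
// plus every other output neuron (but not itself).
function buildRecurrentNet(numInputs, numOutputs) {
  const net = [];
  for (let i = 0; i < numOutputs; i++) {
    const neuron = {
      inputWeights: new Array(numInputs).fill(0),
      peerWeights: new Array(numOutputs).fill(0), // one per other output neuron
      bias: 0,
    };
    neuron.peerWeights[i] = null; // no self-connection
    net.push(neuron);
  }
  return net;
}

// Corrections are not exact: add a small random element ("noise").
function noisyCorrection(delta, noise) {
  return delta + (Math.random() * 2 - 1) * noise;
}
```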

The popular choice for this kind of test is a feed-forward network, which is trained through back-propagation. That network requires hidden units, and each output (is it N, is it Q) is totally ignorant of the other outputs, which I think is a detriment.

My network variant has just finished a training run after 44 training cycles. That is proof that the simple recurrent network can learn to recognise simple letters without relying on hidden units.

Another interesting thing about the method I used is how the training works. Instead of throwing a huge list of tests at the network, I have 26 tests, but only a set number of them are run in each cycle depending on how many were gotten right up until then. For example, a training cycle with 13 tests will only be allowed if the network previously successfully passed 12 tests.

There are still a few small details I’d want to be sure about before pronouncing this test an absolute success, but I’m very happy with it.

Next week, I hope to have this demo re-written in Java, and a new demo recognising flowers in full-colour pictures (stretching myself, maybe…).

As always, this has the end goal of being inserted in a tiny robot which will then do my gardening for me. Not a mad idea, I think you’re beginning to see – just a lot of work.

update As I thought, there were some points which were not quite perfect. There was a part of the algorithm which would artificially boost the success of the net. With those deficiencies corrected, it takes over 500 cycles to get to 6 correct letters. I think this can be improved… (moments later – it now takes only 150+ cycles to reach 6 letters)

## firefox not properly parsing XML/XHTML ?

Look at these examples – semantically, they are exactly the same:

```
<ul><li />one<li>two</li><li />three</ul>
```

```
<ul><li></li>one<li>two</li><li></li>three</ul>
```

In both cases, they are invalid, but Firefox renders one of them correctly and not the other…

example one

• one
• two
• three

example two

• one

• two
• three

Interesting! Noticed while explaining why ‘ /’ is used at the end of self-closing elements in XHTML.

## neural net balancing thing

Last century, when I worked for Orbism, a co-worker, Olivier Ansaldi (now working for Google), showed me a Java applet he was working on which learned how to balance a stick using a neural net.

I decided to try it myself, and yesterday I wrote a neural net that does it.

demo (Firefox only)

There is a total of 3 neurons in the net – 1 bias, 1 input (stick angle) and 1 output.

It usually takes about 20 iterations to train the net. Sometimes, it gets trained in such a way that the platform waggles back and forward like a drunk, and sometimes it gets trained so perfectly that it’s damn boring to watch (basically, it’s a platform with a stationary stick on it).
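The whole thing is small enough to sketch in a few lines. This uses my own invented names and a plain delta-rule update, not necessarily the demo’s exact code: the output is a weighted sum of the constant bias and the stick angle, and training nudges the weights until the platform moves under the lean.

```javascript
// 3 "neurons": a bias (always 1), one input (stick angle), one output.
function makeBalancer() {
  return { biasWeight: 0, angleWeight: 0 };
}

// Output neuron: weighted sum of the bias and the angle input.
function platformMove(net, angle) {
  return net.biasWeight * 1 + net.angleWeight * angle;
}

// Delta-rule training step: the "right" move is assumed to be
// proportional to the lean, so nudge the weights toward that.
function trainStep(net, angle, rate) {
  const want = angle;                    // hypothetical target: move toward the lean
  const got = platformMove(net, angle);
  const err = want - got;
  net.angleWeight += rate * err * angle;
  net.biasWeight += rate * err;
}
```

After a few hundred training passes over sample angles, `platformMove(net, angle)` tracks the lean closely.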

Anyway… for my next trick, I’ll try building a net which can recognise letters and numbers.

Partly, this was an itch I wanted to scratch. But also, I want to write it in C++, and since I’m better at JavaScript, I wrote it in JS first to test the idea before attempting a port.

I’m getting interested in my robot gardener idea again, so am building up a net that I can use for it.

Some points about how this differs from “proper” ANNs.

• Training is done against a single neuron at a time (not important in this case, as there are only three neurons anyway).
• This net will attach all “normal” neurons to all other neurons. I don’t like the “hidden layer” model of ANNs – I think they’re limited.
• No back-prop algorithms are used – I don’t trust “perfect” nets and prefer a bit of organic learning in my nets.
• The net code itself is object-oriented and self-sufficient. It would be possible to take the code and use it in another JS project.

## Front Line Assembly – Fallout

Wow! It’s been a long time since I was excited by FLA. As a teenager, I was blown away by the proto-industrial efforts in Corrosion and Disorder, and especially Tactical Neural Implant. Millennium came out with its reinterpretation of Pantera’s guitar riffs and was a hit with me as well, even if it was very very different from their previous electronic work. After Hard Wired, though, I kind of floated away to other bands – rediscovering Skinny Puppy and starting to figure out mainstream bands such as Muse, Arcade Fire, etc.

Fallout is touted as a remix album, with some mixes by the band itself, some by others, and some new songs by FLA. I think, though, that it can stand as an original album itself. There’s none of the usual rubbish that you would get with “pop” remixes (garbled and stuttered words, and poor rebuilds) – every song on this album appears to have been polished and given an individual feel. If you want a more reliable indicator of how cool this album is – even my 1-year-old daughter was hyper while I was playing it. Rocking and waving her tiny little hands like an EBM poster child.

If you hanker after your industrial youth, thrashing away all night to the likes of Wumpscut, Covenant, Apoptygma Berzerk, etc, then this album is for you. After listening to this album, I wish I was in Dublin ten years younger, downing a Pernod and Red Bull on the bus in for a night in Dominion followed by an early morning on the roof of Garvan’s flat downing home-made peach schnapps.

Enough from me – I’m off to work on my KFM project.

## coin flip trick

This is a “rounds” trick for deciding who buys the round when you’re drinking with two friends. You won’t win all the time, but your drinks bill will be cut by 25%.

You’ll need an accomplice and a victim (the person you want to pay for the drinks). The only instruction to the accomplice is: when your left palm is up, bet ‘heads’; otherwise, bet ‘tails’.

• Flip a coin with your right hand.
• Your left hand should be turned randomly palm up or palm down.
• As the coin comes down, slap it onto your left arm with your right hand (in the usual manner) such that no-one knows what it is.
• If the palm is up, you and your accomplice say ‘heads’; otherwise say ‘tails’.
• If all three calls are the same, do the toss again.
• If the victim gets the call wrong, he pays for 3 drinks; otherwise, you and your accomplice buy 1.5 drinks each.

On average, you and your accomplice will each pay for 0.75 drinks per round, and your victim will pay for 1.5 drinks per round.
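A quick Monte-Carlo check of that arithmetic (a JavaScript sketch I wrote to verify it, not part of the original trick): when the victim’s call happens to match yours, the toss is redone; otherwise the fair coin makes the victim wrong half the time.

```javascript
// Simulate n paid rounds of the coin trick and tally what each side pays.
function simulateRounds(n) {
  let victimPaid = 0;
  let youPaid = 0;
  let rounds = 0;
  while (rounds < n) {
    const coin = Math.random() < 0.5;       // true = heads
    const ourCall = Math.random() < 0.5;    // you + accomplice, signalled by the palm
    const victimCall = Math.random() < 0.5; // victim guesses freely
    if (victimCall === ourCall) continue;   // all three calls the same: re-toss
    if (victimCall !== coin) victimPaid += 3; // victim wrong: pays for 3 drinks
    else youPaid += 1.5;                      // you and accomplice wrong: 1.5 each
    rounds++;
  }
  return { victimPerRound: victimPaid / n, youPerRound: youPaid / n };
}
```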

You can also do this with cards: shuffle a deck, remove one card face down, and bet on whether it is odd or even. As long as you and your accomplice always bet exactly the same, the drinks payment ratio will be the same even though you only ‘win’ 50% of the time.

On reflection, 25% is not an amazing saving, but those are the thoughts that I go to sleep with…

## Elena Rosa Posse Oliver

A few months ago, my sister and her husband Juan went to Spain to visit his family.

One day, Tarynn bought a pregnancy test. After using it, she was puzzling over the results with Juan. Their Spanish was not good enough to tell anything for certain, so Juan brought the thing in to his aunt. She took a look at it, and immediately grabbed Tarynn in a huge hug.

After a surprisingly long wait, my sister has given birth to her second child, Elena.

There are a few things surprising about this. Her first, James, was born a few months early with a hole in his heart. He has since grown into the most amazing lad, with a huge memory and unstoppable creativity.

At the time, Tarynn was told she would never have another child. I think we can safely say that the doctors were wrong.

There was always the worry that Elena would be born early as well, so it is a great relief that all went so smoothly!

Juan, Tarynn – I hope to see you around Christmas. The pints are on me.

## variables in css

There is an article over at the Ruby On Rails site about Dynamic CSS. I read through it, and it was pretty cool. It occurred to me that it should be fairly simple to do some of those tricks on-the-fly with ordinary CSS files and a little PHP.

Look at this:

```
/*
!TEXTCOLOUR    #369
!BORDER        1px solid #369
*/

h1 { color: !TEXTCOLOUR; font-size: 1.1em }
p { color: !TEXTCOLOUR; font-style: italic }
div { color: !TEXTCOLOUR; border: !BORDER }
```

This, of course, is not valid CSS, but it would be cool if it worked.

Now it does!

Save this as `/css_parser.php` on your site:

```
<?php

// Pre-processes a CSS file, replacing !NAME variables (defined in a
// comment block at the top of the file) with their values.

if (!isset($_GET['css'])) exit('/* please supply a "css" parameter */');
$filename = $_GET['css'];

// Note: the css parameter is used directly as a path under the document
// root – fine for a quick hack, but worth sanitising on a real site.
$filename = $_SERVER['DOCUMENT_ROOT'].'/'.$filename;
if (!file_exists($filename)) exit('/* referred css file does not exist */');

// Grab every line that starts with a ! – the variable definitions.
$matches = array();
$file = file_get_contents($filename);
preg_match_all('/^(!.*)$/m', $file, $matches);

// Split each definition into a name and a value. The list is reversed so
// that later definitions are substituted first, which lets a variable
// reference variables defined above it.
$names = array();
$values = array();
foreach (array_reverse($matches[0]) as $match) {
    $match = preg_replace('/\s+/', ' ', trim($match));
    $names[] = preg_replace('/\s.*/', '', $match);
    $values[] = preg_replace('/^[^\s]*\s/', '', $match);
}

header('Expires: Fri, 1 Jan 2500 01:01:01 GMT');

echo str_replace($names, $values, $file);
```

Then put this in your root `.htaccess` file:

```
RewriteRule ^(.*)\.css$ /css_parser.php?css=$1.css [L]
```

Isn’t that just so cool!? Now, every time you request a file that ends in .css, it will be pre-processed by the css parser.
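For example, the rules in the first stylesheet above come back to the browser expanded like this (the definition comment also survives, with the names substituted inside it – harmless, since it’s still a comment):

```css
h1 { color: #369; font-size: 1.1em }
p { color: #369; font-style: italic }
div { color: #369; border: 1px solid #369 }
```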

The trick doesn’t have to be strictly about the value part of the CSS either – you can use it for full commands:

```
/*
!BlueText    color:#00f;
!Underlined  text-decoration:underline;
*/

a { !BlueText !Underlined }
```

And if that’s not enough – you can also use the variables to define other variables:

```
/*
!SelectedColour     #00f
!Text               color:!SelectedColour;
!Border             border:1px solid !SelectedColour;
*/
```