This is version 0.77, a complete internal upgrade from version 0.42.
A new feature is a randomness factor in the network, which can
optionally be disabled. The restriction on 0s has been removed, so
you can run any network you like. Also included are an improved
learn() function, a much more accurate internal fixed-point system
for learning, and automated learning of input sets. See learn_set()
and learn_rand_set().
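
Here is a minimal sketch of how these calls fit together. It is based
on the module's synopsis; the constructor arguments (layers, nodes
per layer, outputs), the random() method for the randomness factor,
and the alternating input/output layout of the learn_set() data are
assumed from that documentation.

    use AI::NeuralNet::BackProp;

    # One hidden layer, 5 nodes per layer, 5 outputs (assumed
    # constructor signature, as in the synopsis).
    my $net = new AI::NeuralNet::BackProp(1, 5, 5);

    # Set the new randomness factor; pass 0 to disable it.
    $net->random(0.001);

    # Teach a single input/output pair.
    my @input  = (0, 0, 1, 1, 1);
    my @output = (1, 0, 1, 0, 1);
    $net->learn(\@input, \@output);

    # Teach a whole set at once; maps alternate input, output.
    my @set = (
        [ 1, 1, 1, 1, 1 ],  [ 0, 0, 0, 0, 0 ],
        [ 1, 1, 1, 0, 0 ],  [ 0, 0, 0, 1, 1 ],
    );
    $net->learn_set(\@set);

    # Run the trained network on a map.
    my $result = $net->run(\@input);
    print join(',', @$result), "\n";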

Be sure to look for the two brand-new examples, finance.pl and
images.pl. finance.pl demonstrates simple DOW prediction based on
six months of data from 1989, and images.pl demonstrates simple
bitmap classification. Many other examples were updated and modified.

AI::NeuralNet::BackProp is a simple back-propagation,
feed-forward neural network designed to learn using
a generalization of the delta rule and a bit of Hopfield
theory. It is still in beta.
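
For the curious, here is a stand-alone sketch of the textbook delta
rule applied to a single neuron. It illustrates the general idea
only; the module's internal (fixed-point) implementation differs.

    use strict;
    use warnings;

    # Delta rule: nudge each weight in proportion to the error
    # between the target and the actual output.
    my @weights = (0.5, -0.2, 0.1);
    my @input   = (1, 0, 1);
    my $target  = 1;
    my $rate    = 0.3;    # learning rate

    for my $epoch (1 .. 20) {
        # Weighted sum of the inputs (linear output, no squashing).
        my $output = 0;
        $output += $weights[$_] * $input[$_] for 0 .. $#input;

        # w_i += rate * (target - output) * input_i
        my $error = $target - $output;
        $weights[$_] += $rate * $error * $input[$_] for 0 .. $#input;
    }

    print "weights: @weights\n";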

Use it, and let me know what you all think. This is just a
ground-up write of a neural network; no code was stolen or
anything like that. It uses the -IDEA- of back-propagation
for error correction, with the -IDEA- of the delta rule and
Hopfield theory, as I understand them. So don't expect a
classicist view of neural networking here. I simply wrote
from operating theory, not math theory. Any die-hard neural
networking gurus out there? Let me know how far off I am with
this code! :-)
	
Thank you all for your help.

~ Josiah
	
jdb@wcoil.com
http://www.josiah.countystart.com/modules/AI/cgi-bin/rec.pl - download latest dist