To-do list for the Neural Network software of OpenAI
Seriously, this is a desperate need. Once this thing has a look
to it, people will really get involved, at least in running it
and giving us feedback. I've created a preliminary specification for
the different components of the GUI...feel free to have a look and
give plenty of feedback.
UPDATE: We've had a generous volunteer offer to do a GUI for us...yippee!
UPDATE: Peter Hanson
has been making great strides to put together a GUI for us...if
you'd like to help email him and ask how you can contribute.
Currently the network is not capable of being "trained" in the full
sense: it can learn, but there is no facility to show how well it
has learned the problem by exhibiting its knowledge on previously
unseen data. This must be finished ASAP.
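As a rough sketch of the missing piece, here is what a holdout test
boils down to; the feedForward call below is a hypothetical stand-in
for the trained network's forward pass, not our actual API.

public class HoldoutSketch {
    // Hypothetical stand-in for the trained network's forward pass.
    static double feedForward(double x) {
        return x * x; // pretend the net has learned to square its input
    }

    public static void main(String[] args) {
        // Held-out pairs the network was NOT trained on.
        double[] testIn  = { 0.1, 0.5, 0.9 };
        double[] testOut = { 0.01, 0.25, 0.81 };

        // Mean squared error over the unseen data is the number to report.
        double mse = 0.0;
        for (int i = 0; i < testIn.length; i++) {
            double diff = feedForward(testIn[i]) - testOut[i];
            mse += diff * diff;
        }
        mse /= testIn.length;
        System.out.println("MSE on unseen data = " + mse);
    }
}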
There is currently no way of saving out the state of the network.
UPDATE: We've had a volunteer for this...his name is Andre
Sobotovych, he hails from Canada and has been evaluating JAXB.
Currently, the favorite is Castor, and Andre has been coding a few
examples of how to save things out as XML using their code. If
you'd like to help Andre with this task, email him at
firstname.lastname@example.org and ask how you might be able to
contribute.
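To give a feel for the Castor approach, here is a minimal sketch;
a plain Object stands in for the real network class, and the static
marshal/unmarshal convenience calls reflect my understanding of
Castor's introspection mode, so treat the details as assumptions.

import java.io.FileReader;
import java.io.FileWriter;
import java.io.Writer;

import org.exolab.castor.xml.Marshaller;
import org.exolab.castor.xml.Unmarshaller;

// Minimal sketch of saving/restoring an object graph as XML with
// Castor; the real network object would be passed in as "network".
public class XmlPersistence {
    public static void save(Object network, String file) throws Exception {
        Writer out = new FileWriter(file);
        try {
            Marshaller.marshal(network, out); // introspects bean properties
        } finally {
            out.close();
        }
    }

    public static Object load(String file, Class type) throws Exception {
        return Unmarshaller.unmarshal(type, new FileReader(file));
    }
}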
Eventually I'd like to expand the library of learning rules and
architectures that the network can use. Currently we have only one
of each (Back Propagation and Multilayer Feed Forward). If anyone
would like to implement some new algorithms, let us know. This is
one place where those who have not had much experience will be able
to contribute: there isn't a whole lot of design or creative
thinking to be done here, just follow our interface and implement
the classic algorithms for other learning rules.
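To give a flavor of the work, here is a sketch of the classic
perceptron delta rule; the LearningRule interface shown is only a
guess at the shape of our real interface.

// Hypothetical shape of the learning-rule interface; the project's
// actual interface may differ. Shown with the classic delta rule.
public class PerceptronRuleSketch {

    interface LearningRule {
        /** Returns the weight change for one connection. */
        double weightDelta(double learningRate, double error, double input);
    }

    /** Classic delta rule: delta_w = eta * error * input. */
    static class DeltaRule implements LearningRule {
        public double weightDelta(double learningRate, double error, double input) {
            return learningRate * error * input;
        }
    }

    public static void main(String[] args) {
        LearningRule rule = new DeltaRule();
        // One update for a unit with target 1.0, output 0.2, input 0.5:
        double delta = rule.weightDelta(0.1, 1.0 - 0.2, 0.5);
        System.out.println("weight change = " + delta); // 0.04
    }
}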
If there's anyone out there with a commercial profiling tool, or
a javap guru who can do an analysis of the NN code, please let us
know.
Change all Vectors to arrays before iterating; this should get us
a slight speed-up.
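For example, something along these lines (the weight list here is a
made-up stand-in for whatever Vector is actually being iterated):

import java.util.Vector;

public class VectorToArrayDemo {
    public static void main(String[] args) {
        Vector<Double> weights = new Vector<Double>();
        weights.add(Double.valueOf(0.5));
        weights.add(Double.valueOf(-0.25));
        weights.add(Double.valueOf(1.0));

        // Copy once into a plain array: every Vector.get() is
        // synchronized, so iterating the array skips the per-element
        // lock overhead.
        Double[] w = weights.toArray(new Double[weights.size()]);
        double sum = 0.0;
        for (int i = 0; i < w.length; i++) {
            sum += w[i].doubleValue();
        }
        System.out.println("sum = " + sum);
    }
}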
Make sure everything that should be final (and nothing more) is
designated that way. We need to keep in mind how, and by whom, this
software might be extended.
This goes hand-in-hand with the optimization stuff...we need a way
to keep track of how the performance of the NN is progressing.
I envision a tool that would exercise different aspects of the
neural net (numerous hidden layers, numerous neurons, tough
problems (TSP, maybe)) and generate a report of the number of
iterations required to converge to a specified error criterion and
the time elapsed for that benchmark.
Each subsequent run of the benchmarking tool must use the same
initial conditions for the neural network and the same input and
output specifications; otherwise the results will vary wildly and
the benchmark will have no meaning.
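Here is a minimal harness sketch along those lines; the
TrainableNetwork interface is hypothetical, and a fixed random seed
stands in for "same initial conditions".

import java.util.Random;

public class BenchmarkHarness {

    // Hypothetical view of the network that the harness needs; the
    // real network would be adapted to it.
    interface TrainableNetwork {
        /** Re-initialize all weights from the given (seeded) source. */
        void initializeWeights(Random seededRandom);
        /** Run one pass over the data; return the mean error. */
        double trainOneEpoch(double[][] inputs, double[][] targets);
    }

    static final long SEED = 12345L;      // fixed seed = same initial conditions
    static final double ERROR_CRITERION = 0.01;
    static final int MAX_EPOCHS = 100000; // safety cap

    public static void run(TrainableNetwork net,
                           double[][] inputs, double[][] targets) {
        net.initializeWeights(new Random(SEED));
        long start = System.currentTimeMillis();
        int epochs = 0;
        double error = Double.MAX_VALUE;
        while (error > ERROR_CRITERION && epochs < MAX_EPOCHS) {
            error = net.trainOneEpoch(inputs, targets);
            epochs++;
        }
        long elapsed = System.currentTimeMillis() - start;
        System.out.println("epochs=" + epochs + " finalError=" + error
                + " elapsedMs=" + elapsed);
    }
}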
It would be nice if this benchmark tool were automatic and
integrated into the website, using perl to generate a graph of
its daily performance based on the currently checked-in code,
much like the Mozilla benchmarks.
Also, it would be nice if the benchmark tool could gather the
architecture, JVM version and the benchmark results and send them
to email@example.com so that we can keep track of how this is
running on other platforms, which platforms people are using and
how many people are using it.
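Collecting the platform details is straightforward with standard
system properties; only the mail transport (omitted here) would need
deciding. A quick sketch:

public class PlatformReport {
    public static void main(String[] args) {
        // Standard system properties, available on any JVM.
        StringBuffer report = new StringBuffer();
        report.append("os.name=").append(System.getProperty("os.name")).append('\n');
        report.append("os.arch=").append(System.getProperty("os.arch")).append('\n');
        report.append("java.version=").append(System.getProperty("java.version")).append('\n');
        report.append("java.vm.version=").append(System.getProperty("java.vm.version")).append('\n');
        // Benchmark numbers (epochs, final error, elapsed time) would
        // be appended here before mailing the report.
        System.out.println(report);
    }
}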
Maybe we could even incorporate the benchmark into the main
application (with the GUI) so that one of the options is to
perform a benchmark at the beginning and send those results in.