Next: A general interface for AI predictors, Previous: Implemented strategies, Up: Machine learning with tarot [Contents][Index]
Static learning models are great for the AI, because they are learned at compile time. However, it can be a useful addition to have a learning algorithm that adapts to an individual player.
A model that can be trained while the program runs (for instance, at the end of each finished game).
Allocate a new perceptron with the given hyperparameters: the number of hidden layers, their corresponding sizes, and the learning rate.
Allocate and return the best perceptron that the maintainer has been able to train. This perceptron is static: no learning will be remembered the next time you call this function.
Starting at base, make a prediction with perceptron for each of the n candidates. Discard the predictions for the first start candidates, then store the next max predictions in scores.
Learn that playing event in base leads to the given final score.
Return an allocated copy of perceptron.
Delete a perceptron allocated by ‘tarot_perceptron_alloc’, ‘tarot_perceptron_dup’ or ‘tarot_perceptron_static_default’.
Change the learning rate in perceptron.
Load the nweights internal parameters into perceptron. If perceptron has n parameters, the first start of them are skipped.
Get the first max internal parameters after skipping the first start, and return the total number of parameters. The ‘_alloc’ function does all the allocation for you: it sets *n to the total number of parameters and copies them into the newly allocated *data.