Questions – Learning Experiment

November 22nd, 2012



    #217

    mfranz
    Member

    Hi all,

    First, let me thank you for this excellent piece of open-source software! Second, I am struggling a bit, as the learning curve for CF is quite steep. I am trying to modify the “LearningExperimentExample” into a (binary) classifier problem. I was able to use a binary confusion matrix as the performance evaluator on a Perceptron as follows:

    // Evaluate each fold with a binary confusion matrix.
    DefaultBinaryConfusionMatrix.PerformanceEvaluator<Vector> dbcmp =
        new DefaultBinaryConfusionMatrix.PerformanceEvaluator<Vector>();

    // Summarize the per-fold confusion matrices into a 95% confidence interval.
    final double confidence = 0.95;
    Summarizer<DefaultBinaryConfusionMatrix, DefaultBinaryConfusionMatrixConfidenceInterval> interval =
        new DefaultBinaryConfusionMatrixConfidenceInterval.Summary(confidence);

    // 10-fold cross-validation over the labeled data.
    final int numFolds = 10;
    CrossFoldCreator<InputOutputPair<Vector, Boolean>> foldCreator =
        new CrossFoldCreator<InputOutputPair<Vector, Boolean>>(numFolds, randomNumberGenerator);

    SupervisedLearnerValidationExperiment<Vector, Boolean, DefaultBinaryConfusionMatrix, DefaultBinaryConfusionMatrixConfidenceInterval> experiment =
        new SupervisedLearnerValidationExperiment<Vector, Boolean, DefaultBinaryConfusionMatrix, DefaultBinaryConfusionMatrixConfidenceInterval>(foldCreator, dbcmp, interval);

    // Run the experiment with a Perceptron learner.
    Perceptron p = new Perceptron(1000, 1, 1);

    DefaultBinaryConfusionMatrixConfidenceInterval perceptronResult =
        experiment.evaluatePerformance(p, labeledDataset);
    System.out.println(ObjectUtil.getShortClassName(p) + " Result: \n" + perceptronResult);

    LinearBinaryCategorizer result = p.getResult();

    This works fine (although I do not understand why I keep getting different results even though I am using the same random seed…), but I would love to see the results of a ThreeLayerFeedforwardNeuralNetwork. I tried something like this:

    int dimensionality = FeatureUtils.N_FEATURES;
    ThreeLayerFeedforwardNeuralNetwork ann =
        new ThreeLayerFeedforwardNeuralNetwork(dimensionality, 2 * dimensionality, 1);

    // Train the network parameters by minimizing the mean squared error
    // with the Liu-Storey conjugate gradient minimizer.
    ParameterDifferentiableCostMinimizer conjugateGradient =
        new ParameterDifferentiableCostMinimizer(new FunctionMinimizerLiuStorey());
    conjugateGradient.setObjectToOptimize(ann);
    conjugateGradient.setCostFunction(new MeanSquaredErrorCostFunction());

    // Adapt the network's vector output to a scalar output.
    VectorFunctionToScalarFunction.Learner<Vector> adapterLearner =
        new VectorFunctionToScalarFunction.Learner<Vector>(conjugateGradient);

    Evaluator<? super Vector, Double> evaluator = adapterLearner.learn(data);

    ScalarFunctionToBinaryCategorizerAdapter<Vector> adapter2 =
        new ScalarFunctionToBinaryCategorizerAdapter<Vector>(evaluator);

    DefaultBinaryConfusionMatrixConfidenceInterval annResult =
        experiment.evaluatePerformance(adapterLearner, labeledDataset);

    System.out.println(ObjectUtil.getShortClassName(ann) + " Result: \n" + annResult);

    But I was unable to find the right adapter structure to achieve my goal. Any help would be much appreciated.

     

    #218

    Baz
    Member

    Hello,

    I’m glad you are finding the Foundry useful. It seems like you have most of the pieces you need to adapt the neural network for binary categorization. It’s actually funny that you bring this up now, since I just committed some general-purpose adapter learners that can help with these kinds of problems in the future; they seem to come up pretty often. You can look at the code here if you are interested, but it depends on a new reversible evaluator interface for handling the output type and won’t be generally available until the next release.

    For your example, the basic issue is that you need a wrapper learner that unpacks the boolean label value and creates a double (and then a 1-d vector) that you can use to train the neural network. After learning the neural network, you need to compose its output with a function that goes in the reverse direction: mapping the 1-d vector value to a double and then to a boolean. The ScalarFunctionToBinaryCategorizerAdapter provides that last step and the composition; however, what you are missing is the learner that wraps the neural network, transforms the labels, calls the network learner, and then packs up the resulting composite.
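    In plain terms, the two directions look roughly like the following pair of helpers (illustration only, not Foundry code; the 0.0 threshold here is just for the sketch, the adapter applies its own threshold):

    // Illustration only: the label encoding used for training, and the kind
    // of reverse mapping the adapter performs on the learned scalar output.
    static double encodeLabel(final boolean label)
    {
        // Boolean label -> scalar training target.
        return label ? +1.0 : -1.0;
    }

    static boolean decodeOutput(final double value)
    {
        // Scalar output -> boolean category. The 0.0 threshold is only for
        // illustration; the real adapter manages its own threshold.
        return value > 0.0;
    }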

    Thus, you want the code to look something like this:

    VectorFunctionToScalarFunction.Learner<Vector> adapterLearner =
    new VectorFunctionToScalarFunction.Learner<Vector>(conjugateGradient);

    AdapterLearner<Vector> adapter2Learner = new AdapterLearner<Vector>(adapterLearner);

    DefaultBinaryConfusionMatrixConfidenceInterval annResult =
    experiment.evaluatePerformance(adapter2Learner, labeledDataset);

    With an adapter learner like this:

    public static class AdapterLearner<InputType>
        extends AbstractBatchLearnerContainer<BatchLearner<Collection<? extends InputOutputPair<? extends InputType, Double>>, ? extends Evaluator<? super InputType, Double>>>
        implements SupervisedBatchLearner<InputType, Boolean, ScalarFunctionToBinaryCategorizerAdapter<InputType>>
    {
        public AdapterLearner(
            final BatchLearner<Collection<? extends InputOutputPair<? extends InputType, Double>>,
                ? extends Evaluator<? super InputType, Double>> learner)
        {
            super(learner);
        }

        @Override
        public ScalarFunctionToBinaryCategorizerAdapter<InputType> learn(
            final Collection<? extends InputOutputPair<? extends InputType, Boolean>> data)
        {
            // Create a copy of the data where the output is a double, not a
            // boolean.
            final int count = data.size();
            final ArrayList<WeightedInputOutputPair<InputType, Double>> transformedData =
                new ArrayList<WeightedInputOutputPair<InputType, Double>>(count);

            // Convert all the data, encoding true as +1.0 and false as -1.0.
            for (InputOutputPair<? extends InputType, Boolean> example : data)
            {
                final InputType input = example.getInput();
                final boolean output = example.getOutput();
                final double outputDouble = output ? +1.0 : -1.0;
                final double weight = DatasetUtil.getWeight(example);
                transformedData.add(DefaultWeightedInputOutputPair.create(
                    input, outputDouble, weight));
            }

            // Learn the scalar function on the transformed data.
            final Evaluator<? super InputType, Double> learned =
                this.learner.learn(transformedData);

            // Return the wrapper that maps the scalar output back to a boolean.
            return new ScalarFunctionToBinaryCategorizerAdapter<InputType>(
                learned);
        }
    }
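
    Putting it together with the network setup from your post, reusing your experiment, labeledDataset, and FeatureUtils.N_FEATURES, the whole pipeline would look roughly like this:

    // Rough end-to-end sketch assembled from the snippets above; it reuses
    // the experiment, labeledDataset, and FeatureUtils.N_FEATURES from the
    // original post.
    int dimensionality = FeatureUtils.N_FEATURES;
    ThreeLayerFeedforwardNeuralNetwork ann =
        new ThreeLayerFeedforwardNeuralNetwork(dimensionality, 2 * dimensionality, 1);

    ParameterDifferentiableCostMinimizer conjugateGradient =
        new ParameterDifferentiableCostMinimizer(new FunctionMinimizerLiuStorey());
    conjugateGradient.setObjectToOptimize(ann);
    conjugateGradient.setCostFunction(new MeanSquaredErrorCostFunction());

    // Vector output -> scalar output.
    VectorFunctionToScalarFunction.Learner<Vector> adapterLearner =
        new VectorFunctionToScalarFunction.Learner<Vector>(conjugateGradient);

    // Scalar output -> boolean category, with the label transformation
    // handled by the AdapterLearner above.
    AdapterLearner<Vector> adapter2Learner = new AdapterLearner<Vector>(adapterLearner);

    DefaultBinaryConfusionMatrixConfidenceInterval annResult =
        experiment.evaluatePerformance(adapter2Learner, labeledDataset);
    System.out.println(ObjectUtil.getShortClassName(ann) + " Result: \n" + annResult);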

    Let me know how this works for you or if you have any other questions.

    Thanks. : )

    #219

    mfranz
    Member

    Thank you very much! It works like a charm. I had a quick look at the InputOutputTransformedLearner; that looks like the perfect solution for this kind of problem in the future. Nonetheless, I am very happy now, and I am impressed by how quickly I got an answer.
