This version upgrades to Java 1.8 and contains several bug fixes. You can download it directly here or from Maven Central through dependency management tools like Ivy and Maven.
* Updated to Java 1.8.
* Added callback-based transform methods to Vector that pass index and value.
* Fixed issue in RandomSubVectorThresholdLearner where feature selection was ignored if no sampling was done.
* Fixed mean and variance computation in StudentTConfidence.
* Added hyperbolic tangent function.
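The hyperbolic tangent entry refers to a differentiable scalar function of the kind used as a neural network activation. A minimal standalone sketch of what such a function computes (class and method names here are illustrative, not the Foundry's actual API):

```java
// Minimal sketch of a hyperbolic tangent activation function.
// Class and method names are illustrative, not the Foundry's API.
public class TanhSketch {

    // tanh(x) = (e^x - e^-x) / (e^x + e^-x); Math.tanh handles the
    // overflow-prone extremes internally.
    public static double evaluate(double x) {
        return Math.tanh(x);
    }

    // The derivative 1 - tanh(x)^2 is what gradient-based learners use.
    public static double differentiate(double x) {
        double t = Math.tanh(x);
        return 1.0 - t * t;
    }
}
```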
This version contains several new components, performance enhancements, and an upgraded version of MTJ. You can download it directly here or from Maven Central through dependency management tools like Ivy and Maven.
* Upgraded MTJ to 1.0.3.
* Added package for hash function computation, including Eva, FNV-1a, MD5, Murmur2, Prime, SHA1, and SHA2.
* Added callback-based forEach implementations to Vector and InfiniteVector, which can be faster for iterating through some vector types.
* Optimized DenseVector by removing a layer of indirection.
* Added method to compute set of percentiles in UnivariateStatisticsUtil and fixed issue with percentile interpolation.
* Added utility class for enumerating combinations.
* Adjusted ScalarMap implementation hierarchy.
* Added method for copying a map to VectorFactory and moved createVectorCapacity up from SparseVectorFactory.
* Added method for creating square identity matrix to MatrixFactory.
* Added Random implementation that uses a cached set of values.
* Implemented feature hashing.
* Added factory for random forests.
* Implemented uniform distribution over integer values.
* Added Chi-squared similarity.
* Added KL divergence.
* Added general conditional probability distribution.
* Added interfaces for Regression, UnivariateRegression, and MultivariateRegression.
* Fixed null pointer exception that can happen in K-means with an empty cluster.
* Fixed name of maxClusters property on AgglomerativeClusterer (was called maxMinDistance).
* Improvements to LDA Gibbs sampler.
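The feature hashing entry above refers to the "hashing trick": features are mapped into a fixed-size vector by hashing their names, avoiding an explicit dictionary. A minimal sketch under that interpretation (names are illustrative, not the Foundry's actual API):

```java
import java.util.Map;

// Minimal sketch of feature hashing: each named feature is hashed into
// one of a fixed number of vector slots. Names are illustrative, not
// the Foundry's actual API.
public class FeatureHashingSketch {

    // Hash each (name, value) pair into one of 'dimensions' slots.
    // A second use of the hash picks a sign, which keeps collisions
    // unbiased in expectation.
    public static double[] hashFeatures(Map<String, Double> features, int dimensions) {
        double[] vector = new double[dimensions];
        for (Map.Entry<String, Double> e : features.entrySet()) {
            int h = e.getKey().hashCode();
            int index = Math.floorMod(h, dimensions);
            double sign = ((h >>> 31) == 0) ? 1.0 : -1.0;
            vector[index] += sign * e.getValue();
        }
        return vector;
    }
}
```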
This version contains mainly upgraded versions of dependencies. It also has some bug fixes, performance improvements, and adds an ALS implementation of Factorization Machines. You can download it directly here or from Maven Central through dependency management tools like Ivy and Maven.
* Updated MTJ to version 1.0.2 and netlib-java to 1.1.2.
* Updated XStream to version 1.4.8.
* Fixed issue in VectorUnionIterator.
* Added Alternating Least Squares (ALS) Factorization Machine training.
* Fixed performance issue in Factorization Machine where linear component
was not making use of sparsity.
* Added utility function to sigmoid unit.
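The sparsity fix above concerns how a Factorization Machine evaluates its terms. A minimal sketch of a prediction that touches only the non-zero entries of a sparse input, using the standard pairwise-interaction identity (names are illustrative, not the Foundry's actual API):

```java
// Minimal sketch of a Factorization Machine prediction that exploits
// sparsity: both the linear term and the pairwise term iterate only
// over the non-zero entries of the input. Names are illustrative,
// not the Foundry's actual API.
public class FactorizationMachineSketch {

    // indices/values encode a sparse input; w holds the linear weights,
    // v[i][k] the factor matrix, and bias the global bias term.
    public static double predict(int[] indices, double[] values,
            double bias, double[] w, double[][] v, int factors) {
        double result = bias;
        // Linear component: only touch non-zeros (the fix noted above).
        for (int j = 0; j < indices.length; j++) {
            result += w[indices[j]] * values[j];
        }
        // Pairwise component in O(factors * nonZeros) via the identity
        // sum_{i<j} <v_i, v_j> x_i x_j
        //   = 1/2 * sum_k [ (sum_i v_ik x_i)^2 - sum_i v_ik^2 x_i^2 ].
        for (int k = 0; k < factors; k++) {
            double sum = 0.0;
            double sumSq = 0.0;
            for (int j = 0; j < indices.length; j++) {
                double term = v[indices[j]][k] * values[j];
                sum += term;
                sumSq += term * term;
            }
            result += 0.5 * (sum * sum - sumSq);
        }
        return result;
    }
}
```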
This version contains an implementation of factorization machines, improvements to statistical distributions, improvements to tree learners, convenience methods, performance improvements, and bug fixes. It also upgrades to Java 1.7. You can download it directly here or from Maven Central through dependency management tools like Ivy and Maven.
* Now requires Java 1.7 or higher.
* Improved compatibility with Java 1.8 functions by removing the
CloneableSerializable requirement from many function-style interfaces.
* Common Core:
* Improved iteration speed over sparse MTJ vectors.
* Added utility methods for more stable log(1+x), exp(x) - 1, log(1 - exp(x)),
and log(1 + exp(x)) to LogMath.
* Added method for creating partial permutations to Permutation.
* Added methods for computing standard deviation to UnivariateStatisticsUtil.
* Added increment, decrement, and list view methods to Vector and Matrix.
* Added shorter get and set aliases for Vector and Matrix getElement and setElement.
* Added aliases of dot for dotProduct in VectorSpace.
* Added utility methods for divideByNorm2 to VectorUtil.
* Added a learner for a Factorization Machine using SGD.
* Added an iterative reporter for validation set performance.
* Added new methods to statistical distribution classes to allow for faster
sampling without boxing, in batches, or without creating extra memory.
* Made generics for performance evaluators more permissive.
* ParameterGradientEvaluator changed to not require input, output, and
gradient types to be the same. This allows more sane gradient definitions
for scalar functions.
* Added parameter to enforce a minimum size in a leaf node for decision
tree learning. It is configured through the splitting function.
* Added ability to filter which dimensions to use in the random subspace
and variance tree node splitter.
* Added ReLU, leaky ReLU, and soft plus activation functions for neural networks.
* Added IntegerDistribution interface for distributions over natural numbers.
* Added a method to get the mean of a numeric distribution without boxing.
* Fixed an issue in DefaultDataDistribution that caused the total to be off
when a value was set to less than or equal to 0.
* Added property for rate to GammaDistribution.
* Added method to get standard deviation from a UnivariateGaussian.
* Added clone operations for decision tree classes.
* Fixed issue in TukeyKramerConfidence interval computation.
* Fixed serialization issue with SMO output.
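The LogMath entry in this release mentions numerically stable versions of log(1 + exp(x)) and related functions. A minimal sketch of why such helpers matter, using only the standard library (method names are illustrative, not the Foundry's actual API):

```java
// Minimal sketch of numerically stable log-domain helpers of the kind
// the LogMath entry describes. Method names are illustrative, not the
// Foundry's actual API.
public class LogMathSketch {

    // log(1 + exp(x)) overflows for large x if computed naively;
    // for x > 0 rewrite it as x + log1p(exp(-x)), which stays finite.
    public static double log1PlusExp(double x) {
        if (x > 0.0) {
            return x + Math.log1p(Math.exp(-x));
        }
        return Math.log1p(Math.exp(x));
    }

    // log(1 - exp(x)) for x < 0, switching between log1p and expm1
    // to keep precision both near x -> 0 and x -> -infinity.
    public static double log1MinusExp(double x) {
        if (x > -Math.log(2.0)) {
            return Math.log(-Math.expm1(x));
        }
        return Math.log1p(-Math.exp(x));
    }
}
```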
Version 3.3.3 of the Cognitive Foundry is now available for download.
Here are the release notes:
Release 3.3.3 (2013-05-20):
* Made code able to compile under both Java 1.6 and 1.7. This required
removing some potentially unsafe methods that used varargs with generics.
* Upgraded XStream dependency to 1.4.4.
* Improved support for regression algorithms in learning.
* Added general-purpose adapters to make it easier to compose learning
algorithms and adapt their input or output.
* Common Core:
* Added isSparse, toArray, dotDivide, and dotDivideEquals methods for
Vector and Matrix.
* Added scaledPlus, scaledPlusEquals, scaledMinus, and scaledMinusEquals to
Ring (and thus Vector and Matrix) for potentially faster such operations.
* Fixed issue where matrix and dense vector equals was not checking for matching dimensions.
* Added transform, transformEquals, transformNonZeros, and
transformNonZerosEquals to Vector.
* Made LogNumber into a signed version of a log number and moved the prior
unsigned implementation into UnsignedLogNumber.
* Added EuclideanRing interface that provides methods for times,
timesEquals, divide, and divideEquals. Also added Field interface that
provides methods for inverse and inverseEquals. These interfaces are now
implemented by the appropriate number classes such as ComplexNumber,
MutableInteger, MutableLong, MutableDouble, LogNumber, and UnsignedLogNumber.
* Added interface for Indexer and DefaultIndexer implementation for
creating a zero-based indexing of values.
* Added interfaces for MatrixFactoryContainer and DivergenceFunctionContainer.
* Added ReversibleEvaluator, which various identity functions implement as
well as a new utility class ForwardReverseEvaluatorPair to create a
reversible evaluator from a pair of other evaluators.
* Added method to create an ArrayList from a pair of values in CollectionUtil.
* ArgumentChecker now properly throws assertion errors for NaN values.
Also added checks for long types.
* Fixed handling of Infinity in subtraction for LogMath.
* Fixed issue with angle method that would cause a NaN if cosine had a rounding error.
* Added new createMatrix methods to MatrixFactory that initialize the
Matrix with the given value.
* Added copy, reverse, and isEmpty methods for several array types to ArrayUtil.
* Added utility methods for creating a HashMap, LinkedHashMap, HashSet, or
LinkedHashSet with an expected size to CollectionUtil.
* Added getFirst and getLast methods for List types to CollectionUtil.
* Removed some calls to System.out and Exception.printStackTrace.
* Common Data:
* Added create method for IdentityDataConverter.
* ReversibleDataConverter now is an extension of ReversibleEvaluator.
* Learning Core:
* Added general learner transformation capability to make it easier to adapt
and compose algorithms. InputOutputTransformedBatchLearner provides this
capability for supervised learning algorithms by composing together a
triplet. CompositeBatchLearnerPair does it for a pair of algorithms.
* Added constant and identity learners.
* Added Chebyshev, Identity, and Minkowski distance metrics.
* Added methods to DatasetUtil to get the output values for a dataset and
to compute the sum of weights.
* Made generics more permissive for supervised cost functions.
* Added ClusterDistanceEvaluator for taking a clustering that encodes the
distance from an input value to all clusters and returns the result as a vector.
* Fixed potential round-off issue in decision tree splitter.
* Added random subspace technique, implemented in RandomSubspace.
* Separated functionality from LinearFunction into IdentityScalarFunction.
LinearFunction by default is the same, but has parameters that can change
the slope and offset of the function.
* Default squashing function for GeneralizedLinearModel and
DifferentiableGeneralizedLinearModel is now a linear function instead of
an atan function.
* Added a weighted estimator for the Poisson distribution.
* Added Regressor interface for evaluators that are the output of
(single-output) regression learning algorithms. Existing such evaluators
have been updated to implement this interface.
* Added support for regression ensembles including additive and averaging
ensembles with and without weights. Added a learner for regression bagging.
* Added a simple univariate regression class in UnivariateLinearRegression.
* MultivariateDecorrelator now is a VectorInputEvaluator and VectorOutputEvaluator.
* Added bias term to PrimalEstimatedSubGradient.
* Text Core:
* Fixed issue with the start position for tokens from LetterNumberTokenizer
being off by one except for the first one.
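The LogNumber entry in this release describes a signed log-domain number: a value stored as a sign plus the log of its magnitude, so products of many small quantities do not underflow. A minimal sketch of the representation (names are illustrative, not the Foundry's actual API):

```java
// Minimal sketch of a signed log-domain number: a value is stored as
// a sign and the log of its magnitude. Names are illustrative, not
// the Foundry's actual API.
public class SignedLogSketch {
    final boolean negative;
    final double logMagnitude;

    SignedLogSketch(boolean negative, double logMagnitude) {
        this.negative = negative;
        this.logMagnitude = logMagnitude;
    }

    static SignedLogSketch fromDouble(double value) {
        return new SignedLogSketch(value < 0.0, Math.log(Math.abs(value)));
    }

    // Multiplication adds log magnitudes and combines the signs;
    // no underflow even for products of many tiny factors.
    SignedLogSketch times(SignedLogSketch other) {
        return new SignedLogSketch(this.negative ^ other.negative,
            this.logMagnitude + other.logMagnitude);
    }

    double toDouble() {
        double magnitude = Math.exp(this.logMagnitude);
        return this.negative ? -magnitude : magnitude;
    }
}
```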
Version 3.3.2 of the Cognitive Foundry is now available for download.
Release 3.3.2 (2011-11-07):
* Common Core:
* Added checkedAdd and checkedMultiply functions to MathUtil, providing a
means for conducting integer addition and multiplication with explicit
checking for overflow and underflow, throwing an ArithmeticException if
they occur. Java otherwise fails silently on integer overflow and underflow.
* Added explicit integer overflow checks to DenseMatrix. The underlying MTJ
library stores a dense matrix as a single one-dimensional array indexed by
a 32-bit Java int. When creating a matrix with numRows rows and numColumns
columns, if numRows * numColumns is more than 2^31 - 1, a silent integer
overflow would occur, resulting in later ArrayIndexOutOfBoundsExceptions
when attempting to access matrix elements that didn't get allocated.
* Added new methods to DiagonalMatrix interface for multiplying diagonal
matrices together and for inverting a DiagonalMatrix.
* Optimized operations on diagonal matrices in DiagonalMatrixMTJ.
* Added checks to norm method in AbstractVectorSpace and DefaultInfiniteVector
for power set to NaN, throwing an ArithmeticException if encountered.
* Learning Core:
* Optimized matrix multiplies in LogisticRegression to avoid creating dense
matrices unnecessarily and to reduce computation time.
* Added regularization and explicit bias estimation to LogisticRegression.
* Added ConvexReceiverOperatingCharacteristic, which computes the convex
hull of the ROC.
* Fixed rare corner-case bug in ReceiverOperatingCharacteristic and added
optional trapezoidal AUC computation.
* Cleaned up constant in MultivariateCumulativeDistributionFunction and
added publication references.
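The checkedAdd/checkedMultiply entry above describes widening the arithmetic to long and verifying the result fits back into an int. A minimal sketch of that idea (modern Java offers Math.addExact/multiplyExact with the same behavior; names here are illustrative, not the Foundry's actual API):

```java
// Minimal sketch of overflow-checked integer arithmetic as described
// in the checkedAdd/checkedMultiply entry. Names are illustrative,
// not the Foundry's actual API.
public class CheckedMathSketch {

    // Compute in 64-bit, then verify the result round-trips to 32-bit.
    public static int checkedAdd(int a, int b) {
        long result = (long) a + (long) b;
        if (result != (int) result) {
            throw new ArithmeticException("integer overflow in add");
        }
        return (int) result;
    }

    public static int checkedMultiply(int a, int b) {
        long result = (long) a * (long) b;
        if (result != (int) result) {
            throw new ArithmeticException("integer overflow in multiply");
        }
        return (int) result;
    }
}
```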
Version 3.3.1 of the Cognitive Foundry is released. Go and download it now. Here is the list of changes:
Release 3.3.1 (2011-10-06):
* Common Core:
* Added NumericMap interface, which provides a mapping of keys to numeric values.
* Added ScalarMap interface, which extends NumericMap to provide a mapping
of objects to scalar values represented as doubles.
* Added AbstractScalarMap and AbstractMutableDoubleMap to provide abstract,
partial implementations of the ScalarMap interface.
* Added VectorSpace interface, where a VectorSpace is a type of Ring that
you can perform Vector-like operations on such as norm, distances, etc.
* Added AbstractVectorSpace, which provides an abstract, partial
implementation of the VectorSpace interface.
* Updated Vector, AbstractVector, VectorEntry to build on new VectorSpace
interface and AbstractVectorSpace class.
* Added InfiniteVector interface, which has a potentially infinite number
of indices, but contains only a countable number in any given instance.
* Added DefaultInfiniteVector, an implementation of the InfiniteVector
interface backed by a LinkedHashMap.
* Rewrote FiniteCapacityBuffer from the ground up, now with backing from a
fixed-size array to minimize memory allocation.
* Renamed IntegerCollection to IntegerSpan.
* Learning Core:
* Updated ReceiverOperatingCharacteristic to improve calculation performance.
* Added PriorWeightedNodeLearner interface, which provides for configuring the
prior weights on the learning algorithm that searches for a decision
function inside a decision tree.
* Updated AbstractDecisionTreeNode to fix an off-by-one error in computing a node's depth.
* Updated CategorizationTreeLearner to add ability to specify class priors
for decision tree algorithm.
* Updated VectorThresholdInformationGainLearner to add class priors to
information gain calculation.
* Updated SequentialMinimalOptimization to improve speed.
* Added LinearBasisRegression, which uses a basis function to generate
vectors before performing a LinearRegression.
* Added MultivariateLinearRegression, which performs multivariate regression;
does not explicitly estimate a bias term or perform regularization.
* Added LinearDiscriminantWithBias, which provides a LinearDiscriminant with
an additional bias term that gets added to the output of the dot product.
* Updated LinearRegression and LogisticRegression to provide for bias term
estimation and use of L2 regularization.
* Renamed SquashedMatrixMultiplyVectorFunction to GeneralizedLinearModel.
* Renamed DifferentiableSquashedMatrixMultiplyVectorFunction to
DifferentiableGeneralizedLinearModel.
* Renamed MatrixMultiplyVectorFunction to MultivariateDiscriminant.
* Added MultivariateDiscriminantWithBias, which provides a multivariate
discriminant with a bias term.
* Renamed DataHistogram to DataDistribution.
* Renamed AbstractDataHistogram to AbstractDataDistribution.
* Added DefaultDataDistribution, a default implementation of the
DataDistribution interface that uses a backing map.
* Added LogisticDistribution, an implementation of the scalar logistic distribution.
* Updated MultivariateGaussian to provide for incremental estimation of
covariance-matrix inverse without a single matrix inversion.
* Removed DecoupledVectorFunction.
* Removed DecoupledVectorLinearRegression.
* Removed PointMassDistribution.
* Removed MapBasedDataHistogram.
* Removed MapBasedPointDistribution.
* Removed MapBasedSortedDataHistogram.
* Removed AbstractBayesianRegression.
* Additional general reworking and cleanup of distribution code, impacting
classes in gov.sandia.cognition.statistics.distribution.
* Text Core:
* Renamed LatentDirichetAllocationVectorGibbsSampler to
LatentDirichletAllocationVectorGibbsSampler to fix misspelling.
* Added ParallelLatentDirichletAllocationVectorGibbsSampler, a parallelized
version of Latent Dirichlet Allocation.
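The FiniteCapacityBuffer rewrite in this release replaces per-add allocation with a fixed-size backing array. A minimal sketch of that ring-buffer pattern, where a full buffer overwrites its oldest element (names are illustrative, not the Foundry's actual API):

```java
// Minimal sketch of a fixed-capacity buffer backed by an array, in the
// spirit of the rewritten FiniteCapacityBuffer: once full, adding a new
// element overwrites the oldest one, with no per-add allocation.
// Names are illustrative, not the Foundry's actual API.
public class RingBufferSketch {
    private final Object[] data;
    private int start = 0;
    private int size = 0;

    public RingBufferSketch(int capacity) {
        this.data = new Object[capacity];
    }

    public void add(Object element) {
        if (size < data.length) {
            data[(start + size) % data.length] = element;
            size++;
        } else {
            // Full: overwrite the oldest element and advance the start.
            data[start] = element;
            start = (start + 1) % data.length;
        }
    }

    // Index 0 is the oldest retained element.
    public Object get(int i) {
        return data[(start + i) % data.length];
    }

    public int size() {
        return size;
    }
}
```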
We’ll try to get it up in Maven Central soon.