
BioinformaticsTheMachineLearningApproach Chap 4.

Introduction

Once a parameterized model M(w) for the data has been constructed, the next steps are:

  1. The estimation of the complete distribution P(w,D) and the posterior P(w|D)

  2. The estimation of the optimal set of parameters w by maximizing P(w|D)

  3. The estimation of marginals and expectations with respect to the posterior
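As a minimal sketch of steps 1-3, consider a conjugate Beta-Bernoulli model, where the posterior P(w|D) is available in closed form; the function name, prior values, and data below are illustrative assumptions, not from the book:

```python
def beta_bernoulli_posterior(data, alpha=1.0, beta=1.0):
    """Posterior over a Bernoulli parameter w given 0/1 data and a Beta(alpha, beta) prior.

    Conjugacy gives the full posterior in closed form:
    P(w|D) = Beta(alpha + k, beta + n - k), with k successes in n trials.
    """
    n, k = len(data), sum(data)
    a_post, b_post = alpha + k, beta + n - k
    # Step 2: the MAP estimate is the mode of the Beta posterior (valid for a_post, b_post > 1).
    w_map = (a_post - 1) / (a_post + b_post - 2)
    # Step 3: an expectation with respect to the posterior, here the posterior mean.
    w_mean = a_post / (a_post + b_post)
    return w_map, w_mean

# With a uniform Beta(1,1) prior, the MAP estimate equals the ML estimate k/n.
w_map, w_mean = beta_bernoulli_posterior([1, 1, 0, 1, 0, 1, 1, 1])
```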

Dynamic Programming

Gradient Descent

Rather than finding the GlobalOptima, this method often gets stuck in a LocalOptima. Many GradientDescent variants exist to mitigate this; ConjugateGradientDescent is one of them.
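A minimal GradientDescent sketch on a one-dimensional quadratic; this toy objective has a single optimum, so it hides exactly the LocalOptima problem noted above (the function name and learning rate are illustrative assumptions):

```python
def gradient_descent(grad, w0, lr=0.1, steps=1000):
    """Plain gradient descent: repeatedly step against the gradient.

    Converges to a stationary point; nothing here guarantees it is global.
    """
    w = w0
    for _ in range(steps):
        w = w - lr * grad(w)
    return w

# Minimize f(w) = (w - 3)^2; its gradient is 2(w - 3), and the optimum is w = 3.
w_star = gradient_descent(lambda w: 2 * (w - 3), w0=0.0)
```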

Random-Direction Descent

EM/GEM Algorithm

The ExpectationMaximization (EM) algorithm and the generalized ExpectationMaximization (GEM) algorithm.
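A sketch of EM in its simplest setting, a two-component 1-D Gaussian mixture with fixed unit variances; the function, data, and starting values below are illustrative assumptions:

```python
import math

def em_gmm_1d(data, mu1, mu2, iters=50, sigma=1.0):
    """EM for a two-component 1-D Gaussian mixture with fixed, equal variance.

    E-step: compute each point's posterior responsibility for component 1.
    M-step: re-estimate the means and the mixing weight from those responsibilities.
    """
    pi = 0.5  # mixing weight of component 1
    def pdf(x, mu):
        # Unnormalized Gaussian density; the constant cancels in the responsibilities.
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2)
    for _ in range(iters):
        # E-step
        r = []
        for x in data:
            p1 = pi * pdf(x, mu1)
            p2 = (1 - pi) * pdf(x, mu2)
            r.append(p1 / (p1 + p2))
        # M-step
        n1 = sum(r)
        mu1 = sum(ri * x for ri, x in zip(r, data)) / n1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / (len(data) - n1)
        pi = n1 / len(data)
    return mu1, mu2, pi

# Two well-separated clusters around -2 and +2.
data = [-2.1, -1.9, -2.0, 1.9, 2.0, 2.2, 2.1]
mu1, mu2, pi = em_gmm_1d(data, mu1=-1.0, mu2=1.0)
```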

Markov-Chain Monte-Carlo Methods

The goal of the general Bayesian framework is to compute expectations with respect to a high-dimensional probability distribution P(x1,...,xn). MarkovChainMonteCarlo (MCMC) methods approach this by sampling from a suitably constructed MarkovChain:

A MarkovChain is entirely defined by the initial distribution P(S0) and the TransitionProbability Pt = P(S(t+1)|S(t)).

In order to achieve our goal of sampling from P(x1,...,xn), we now turn to the two main MCMC algorithms: GibbsSampling and the MetropolisAlgorithm.
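A minimal sketch of the MetropolisAlgorithm with a symmetric Gaussian proposal, targeting a standard normal; the function name and tuning constants are illustrative assumptions:

```python
import math
import random

def metropolis(log_p, x0, n_samples, step=1.0, burn_in=1000, seed=0):
    """Metropolis sampler with a symmetric Gaussian random-walk proposal.

    A move to x' is accepted with probability min(1, P(x')/P(x)); the chain's
    stationary distribution is then the target P.
    """
    rng = random.Random(seed)
    x, samples = x0, []
    for i in range(burn_in + n_samples):
        x_new = x + rng.gauss(0.0, step)
        # Compare in log space: accept if log u < log P(x') - log P(x).
        if math.log(rng.random()) < log_p(x_new) - log_p(x):
            x = x_new
        if i >= burn_in:
            samples.append(x)
    return samples

# Sample from a standard normal: log P(x) = -x^2/2 up to an additive constant.
samples = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_samples=20000)
mean = sum(samples) / len(samples)
```

Note that log_p only needs to be known up to a constant, since the normalizer Z cancels in the acceptance ratio.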

Simulated Annealing

SimulatedAnnealing: a general-purpose optimization algorithm inspired by StatisticalMechanics, used in combination with MCMC ideas such as the MetropolisAlgorithm.

The probability of being in state s at temperature T is given by the BoltzmannGibbsDistribution:

                       e^(-f(s)/kT)
P(s) = P(x1,...,xn) = --------------
                            Z
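A minimal SimulatedAnnealing sketch built on the Metropolis acceptance rule, with a geometric cooling schedule; the double-well objective and all schedule constants are illustrative assumptions:

```python
import math
import random

def simulated_annealing(f, x0, t0=2.0, t_min=1e-3, cooling=0.999, step=0.5, seed=0):
    """Minimize f by Metropolis moves at a slowly decreasing temperature T.

    A move that worsens f by delta is still accepted with probability
    exp(-delta / T), which lets the search escape local minima while T is high.
    """
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    while t > t_min:
        x_new = x + rng.gauss(0.0, step)
        f_new = f(x_new)
        if f_new < fx or rng.random() < math.exp(-(f_new - fx) / t):
            x, fx = x_new, f_new
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # geometric cooling schedule
    return best, fbest

# A tilted double well: local minimum near x = 1 (f ~ 0.1),
# global minimum near x = -1 (f ~ -0.1), barrier near x = 0.
f = lambda x: (x * x - 1) ** 2 + 0.1 * x
x_best, f_best = simulated_annealing(f, x0=1.0)
```

Starting in the shallower well at x = 1, the high-temperature phase lets the chain cross the barrier and settle near the global minimum at x = -1.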

Evolutionary and Genetic Algorithm

A broad class of optimization algorithms that simulate evolution.
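A minimal GeneticAlgorithm sketch on the toy OneMax problem (maximize the number of 1 bits); the function names and parameters below are illustrative assumptions:

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=30, generations=60,
                      p_mut=0.05, seed=0):
    """A minimal genetic algorithm on fixed-length bit strings.

    Each generation applies tournament selection, one-point crossover, and
    bit-flip mutation; the fittest individual of the final population is returned.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def select():
        # Tournament of size 2: the fitter of two random individuals wins.
        a, b = rng.choice(pop), rng.choice(pop)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, n_bits)  # one-point crossover
            child = p1[:cut] + p2[cut:]
            # Bit-flip mutation, applied independently to every gene.
            child = [1 - g if rng.random() < p_mut else g for g in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# OneMax: fitness is the number of 1 bits; the optimum is the all-ones string.
best = genetic_algorithm(fitness=sum)
```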

Learning Algorithm: Miscellaneous Aspects

Control of Model Complexity

Online/Batch Learning

Training/Test/Validation

Early Stopping

Ensembles

Balancing and Weighting Schemes

BioinfoMla/MachineLearningAlgorithm (last edited 2011-08-03 11:00:50 by localhost)