[H-GEN] [UQCSA] Seminar on Support Vector Machines (hottest alternative to NNs) (fwd)

Raymond Smith raymonds at uq.net.au
Mon Jun 22 02:20:05 EDT 1998


FYI

---
raymond at humbug.org.au


---------- Forwarded message ----------
Date: Mon, 22 Jun 1998 15:30:03 +1000 (EST)
From: Janet Wiles <janetw at csee.uq.edu.au>
To: UQCSA <uqcsa at humbug.org.au>
Cc: Rafael Brander <brander at csee.uq.edu.au>,
    Bradley Tonkes <btonkes at csee.uq.edu.au>,
    Hanna Majewski <hanna at csee.uq.edu.au>,
    Michael Norris <michaeln at csee.uq.edu.au>,
    Scott Bolland <scottb at csee.uq.edu.au>
Subject: [UQCSA] Seminar on Support Vector Machines (hottest alternative to NNs)

This seminar is a MUST-SEE for neural network researchers, particularly
postgrads in CSEE, and may also interest mathematically inclined
undergrads in cognitive science and related areas.

cheers
Janet
-----------------------------------

SEMINAR

An Introduction to Support Vector Machines
by Tom Downs, CS&EE Dept, University of Queensland.

4.00pm Thursday June 25
Room 306B Axon Building


Traditional techniques for training neural networks and other learning
machines have largely been based on empirical risk minimization, i.e.
the minimization of training error. Unfortunately, empirical risk by
itself is not a good guide to generalization performance (i.e.
performance on new data after training), and more recently the idea of
structural risk minimization has emerged as a potentially more
reliable way of assuring good generalization after training.
In structural risk minimization, one seeks to minimize a bound on the
generalization error.
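
For concreteness, the bound in question is typically of the Vapnik
form (this is the standard statement from the learning-theory
literature, not a formula quoted from the seminar): with probability
at least 1 - \eta, a classifier f drawn from a function class of VC
dimension h and trained on N examples satisfies

  R(f) \le R_{emp}(f) + \sqrt{ \frac{h(\ln(2N/h) + 1) - \ln(\eta/4)}{N} }

where R(f) is the true (generalization) error and R_{emp}(f) the
training error. Minimizing the right-hand side trades off fit against
the capacity h of the function class, rather than fit alone.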

The study of structural risk minimization has been responsible for the
emergence of the support vector machine as a new type of learning
machine that is applicable to both pattern classification and
(nonlinear) regression problems. The process of training a support
vector machine (SVM) is essentially an implementation of structural
risk minimization.

In contrast to neural networks, SVMs have the significant advantage
that training amounts to solving a convex optimization problem with a
unique global minimum, so there are no local minima to get trapped in.
In addition, as will be explained in the seminar, training is carried
out by quadratic programming (for which several efficient and reliable
algorithms exist), and explicit feature selection is not necessary:
an SVM essentially selects the best features from the data
automatically. Published results have indicated outstanding
performance on practical problems.
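
To make the quadratic-programming connection concrete, here is a
minimal sketch of training a hard-margin linear SVM by solving its
dual QP. Python and the off-the-shelf cvxopt solver are illustrative
choices for this note, not anything attributed to the speaker, and the
sketch assumes the data are linearly separable.

import numpy as np
from cvxopt import matrix, solvers

def train_linear_svm(X, y):
    """X: (n, d) array of training points; y: (n,) array of +/-1 labels.
    Solves the SVM dual: min 0.5 a'Pa - sum(a) s.t. a >= 0, sum(a*y) = 0."""
    n = X.shape[0]
    K = X @ X.T                        # linear kernel (Gram matrix)
    P = matrix(np.outer(y, y) * K)     # P_ij = y_i y_j <x_i, x_j>
    q = matrix(-np.ones(n))
    G = matrix(-np.eye(n))             # encodes -a_i <= 0, i.e. a_i >= 0
    h = matrix(np.zeros(n))
    A = matrix(y.astype(float).reshape(1, -1))  # equality: sum(a_i y_i) = 0
    b = matrix(0.0)
    sol = solvers.qp(P, q, G, h, A, b) # convex QP: one global optimum
    alpha = np.ravel(sol['x'])
    sv = alpha > 1e-6                  # support vectors: non-zero multipliers
    w = ((alpha[sv] * y[sv])[:, None] * X[sv]).sum(axis=0)
    b0 = float(np.mean(y[sv] - X[sv] @ w))
    return w, b0                       # decision rule: sign(x @ w + b0)

Only the points whose multipliers come out non-zero (the support
vectors) enter the final decision function; everything else can be
discarded after training.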

The seminar will explain the basic theory underlying support vector
machines, provide examples of performance on character recognition and
face identification problems, and briefly discuss some of the
computational issues.
