usage: svm_learn [options] example_file model_file

Arguments:
         example_file -> file with training data
         model_file   -> file to store learned decision rule in
General options:
         -?          -> this help
         -v [0..3]   -> verbosity level (default 1)
Learning options:
         -z {c,r,p}  -> select between classification (c), regression (r),
                        and preference ranking (p) (default classification)
         -c float    -> C: trade-off between training error and margin
                        (default [avg. x*x]^-1)
         -w [0..]    -> epsilon width of tube for regression (default 0.1)
         -j float    -> Cost: cost-factor by which training errors on
                        positive examples outweigh errors on negative
                        examples (default 1) (see [4])
         -b [0,1]    -> use biased hyperplane (i.e. x*w+b>0) instead of
                        unbiased hyperplane (i.e. x*w>0) (default 1)
         -i [0,1]    -> remove inconsistent training examples and retrain
                        (default 0)
Performance estimation options:
         -x [0,1]    -> compute leave-one-out estimates (default 0)
                        (see [5])
         -o ]0..2]   -> value of rho for XiAlpha-estimator and for pruning
                        leave-one-out computation (default 1.0) (see [2])
         -k [0..100] -> search depth for extended XiAlpha-estimator
                        (default 0)
Transduction options (see [3]):
         -p [0..1]   -> fraction of unlabeled examples to be classified
                        into the positive class (default is the ratio of
                        positive and negative examples in the training
                        data)
Kernel options:
         -t int      -> type of kernel function (sketched in C after this
                        option list):
                         0: linear (default)
                         1: polynomial (s a*b+c)^d
                         2: radial basis function exp(-gamma ||a-b||^2)
                         3: sigmoid tanh(s a*b + c)
                         4: user-defined kernel from kernel.h
         -d int      -> parameter d in polynomial kernel
         -g float    -> parameter gamma in rbf kernel
         -s float    -> parameter s in sigmoid/poly kernel
         -r float    -> parameter c in sigmoid/poly kernel
         -u string   -> parameter of user-defined kernel
Optimization options (see [1]):
         -q [2..]    -> maximum size of QP-subproblems (default 10)
         -n [2..q]   -> number of new variables entering the working set
                        in each iteration (default n = q); set n < q to
                        prevent zig-zagging
         -m [5..]    -> size of cache for kernel evaluations in MB
                        (default 40); the larger the faster
         -e float    -> eps: allow that error for the termination
                        criterion [y [w*x+b] - 1] >= eps (default 0.001)
         -y [0,1]    -> restart the optimization from the alpha values in
                        the file specified by the -a option (default 0)
         -h [5..]    -> number of iterations a variable needs to be
                        optimal before being considered for shrinking
                        (default 100)
         -f [0,1]    -> do a final optimality check for variables removed
                        by shrinking; although this test is usually
                        positive, there is no guarantee that the optimum
                        was found if the test is omitted (default 1)
         -y string   -> if this option is given, reads alphas from the
                        file with the given name and uses them as the
                        starting point (default 'disabled')
         -# int      -> terminate optimization if there is no progress
                        after this number of iterations (default 100000)
Output options:
         -l string   -> file to write the predicted labels of unlabeled
                        examples into after transductive learning
         -a string   -> write all alphas to this file after learning
                        (in the same order as in the training set)
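The four kernel types listed under -t above are given by closed-form
formulas, so they are easy to sketch. The following is a minimal C sketch
assuming dense vectors of length n; SVM-light itself operates on sparse
vectors, and the names dot() and kernel() are illustrative, not the
identifiers used in kernel.h. The parameters s, c, d, and gamma correspond
to the -s, -r, -d, and -g options.

    #include <math.h>
    #include <stddef.h>

    /* Illustrative dot product over dense vectors. */
    static double dot(const double *a, const double *b, size_t n)
    {
        double sum = 0.0;
        for (size_t i = 0; i < n; i++)
            sum += a[i] * b[i];
        return sum;
    }

    /* Hypothetical sketch of the four built-in kernels from the -t
     * option; not SVM-light's actual kernel code. */
    static double kernel(int type, const double *a, const double *b,
                         size_t n, double s, double c, double d,
                         double gamma)
    {
        switch (type) {
        case 0: /* linear */
            return dot(a, b, n);
        case 1: /* polynomial: (s a*b + c)^d */
            return pow(s * dot(a, b, n) + c, d);
        case 2: { /* radial basis function: exp(-gamma ||a-b||^2) */
            double sq = 0.0;
            for (size_t i = 0; i < n; i++) {
                double diff = a[i] - b[i];
                sq += diff * diff;
            }
            return exp(-gamma * sq);
        }
        case 3: /* sigmoid: tanh(s a*b + c) */
            return tanh(s * dot(a, b, n) + c);
        default: /* -t 4 dispatches to the user-defined kernel
                    in kernel.h, not sketched here */
            return 0.0;
        }
    }

Note that the default value of -c, [avg. x*x]^-1, is the inverse of the
average of dot(x, x) over the training examples.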
References:

[1] T. Joachims, Making Large-Scale SVM Learning Practical. Advances in
    Kernel Methods - Support Vector Learning, B. Schölkopf, C. Burges, and
    A. Smola (eds.), MIT Press, 1999.
[2] T. Joachims, Estimating the Generalization Performance of an SVM
    Efficiently. International Conference on Machine Learning (ICML), 2000.
[3] T. Joachims, Transductive Inference for Text Classification using
    Support Vector Machines. International Conference on Machine Learning
    (ICML), 1999.
[4] K. Morik, P. Brockhausen, and T. Joachims, Combining statistical
    learning with a knowledge-based approach - A case study in intensive
    care monitoring. International Conference on Machine Learning (ICML),
    1999.
[5] T. Joachims, Learning to Classify Text Using Support Vector Machines:
    Methods, Theory, and Algorithms. Dissertation, Kluwer, 2002.

Error messages from the input-parameter checks:

    Not enough input parameters!
    Unknown type '%s': Valid types are 'c' (classification),
        'r' (regression), and 'p' (preference ranking).
    It does not make sense to skip the final optimality check for
        linear kernels.
    It is necessary to do the final optimality check when removing
        inconsistent examples.
    Maximum size of QP-subproblems not in valid range: %ld [2..]
    Maximum size of QP-subproblems [%ld] must be larger than the number
        of new variables [%ld] entering the working set in each iteration.
    Maximum number of iterations for shrinking not in valid range:
        %ld [1,..]
    The C parameter must be greater than zero!
    The fraction of unlabeled examples to classify as positives must
        be in [0..1]!
    The COSTRATIO parameter must be greater than zero!
    The epsilon parameter must be greater than zero!
    The parameter rho for xi/alpha-estimates and leave-one-out pruning
        must be greater than zero (typically 1.0 or 2.0, see T. Joachims,
        Estimating the Generalization Performance of an SVM Efficiently,
        ICML, 2000)!
    The parameter depth for ext. xi/alpha-estimates must be in [0..100]
        (zero for switching to the conventional xa-estimates described in
        T. Joachims, Estimating the Generalization Performance of an SVM
        Efficiently, ICML, 2000).
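Each of these messages corresponds to a simple range check on a
command-line parameter. The following is a hypothetical C sketch of a few
of those checks; the function and variable names are assumptions for
illustration, not identifiers from SVM-light's read_input_parameters().

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical validation sketch mirroring the error messages above;
     * names are illustrative, not SVM-light's actual code. */
    static void check_parameters(double svm_c, double costratio,
                                 double eps, long qp_size, long new_vars,
                                 long shrink_iters)
    {
        if (svm_c <= 0.0) {
            fprintf(stderr,
                    "The C parameter must be greater than zero!\n");
            exit(1);
        }
        if (costratio <= 0.0) {
            fprintf(stderr,
                    "The COSTRATIO parameter must be greater than zero!\n");
            exit(1);
        }
        if (eps <= 0.0) {
            fprintf(stderr,
                    "The epsilon parameter must be greater than zero!\n");
            exit(1);
        }
        if (qp_size < 2) {
            fprintf(stderr, "Maximum size of QP-subproblems not in valid "
                            "range: %ld [2..]\n", qp_size);
            exit(1);
        }
        if (new_vars > qp_size) {
            fprintf(stderr, "Maximum size of QP-subproblems [%ld] must be "
                            "larger than the number of new variables [%ld] "
                            "entering the working set in each iteration.\n",
                            qp_size, new_vars);
            exit(1);
        }
        if (shrink_iters < 1) {
            fprintf(stderr, "Maximum number of iterations for shrinking "
                            "not in valid range: %ld [1,..]\n",
                            shrink_iters);
            exit(1);
        }
    }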