Wednesday, March 25, 2009

mkoctfile Standalone Programs of C/C++

Original article is from
http://www.gnu.org/software/octave/doc/interpreter/Standalone-Programs.html#Standalone-Programs

If you just copy the code from that webpage, you will get a compiler error.
============================================
$ mkoctfile --link-stand-alone main.cc -o main
main.cc: In function 'int main()':
main.cc:13: error: 'row' was not declared in this scope
main.cc:13: error: 'column' was not declared in this scope
============================================

Modify row and column to i and j:
============================================
 #include <iostream>
     #include <octave/oct.h>
     int
     main (void)
     {
       std::cout << "Hello Octave world!\n";
       int n = 2;
       Matrix a_matrix = Matrix (n, n);
       for (octave_idx_type i = 0; i < n; i++)
         {
           for (octave_idx_type j = 0; j < n; j++)
             {
               a_matrix(i,j) = (i+1)*10 + (j+1);
             }
         }
       std::cout << a_matrix;
       return 0;
     }
============================================

Then you should get this result:
============================================
$ mkoctfile --link-stand-alone hello.cc -o hello
     $ ./hello
     Hello Octave world!
       11 12
       21 22
     $
============================================
If you have problems using mkoctfile on Cygwin, you can refer to HERE.

The date ranges of the twelve zodiac signs





Sign     Dates (Gregorian)    English name 
魔羯座 (12/22 - 01/19) Capricorn 
水瓶座 (01/20 - 02/18) Aquarius 
雙鱼座 (02/19 - 03/20) Pisces 
牡羊座 (03/21 - 04/20) Aries 
金牛座 (04/21 - 05/20) Taurus 
雙子座 (05/21 - 06/21) Gemini 
巨蟹座 (06/22 - 07/22) Cancer 
獅子座 (07/23 - 08/22) Leo 
處女座 (08/23 - 09/22) Virgo 
天秤座 (09/23 - 10/22) Libra 
天蝎座 (10/23 - 11/21) Scorpio 
射手座 (11/22 - 12/21) Sagittarius
How to remember this? Take each sign's end date, and use 20 as the base; the list can then be rewritten as
-1, -2, 0, 0, 0, 1, 2, 2, 2, 2, 1, 1
where -1 means 20-1 and -2 means 20-2, paired in order with months 1, 2, 3, 4, 5, ..., 12.
So you know the first sign ends on 1/19, which means the second sign starts on 1/20, and so on.

How to remember the order of the signs?
Take the first character of each Chinese name: 魔水魚, 羊牛子, 蟹獅女, 秤蝎射 (Capricorn-Aquarius-Pisces, Aries-Taurus-Gemini, Cancer-Leo-Virgo, Libra-Scorpio-Sagittarius).
I hope this finally solves my problem of never being able to remember them!

Referenced from

Tuesday, March 24, 2009

HTK in Ubuntu

Reference from http://www.voxforge.org/home/dev/acousticmodels/linux/create/htkjulius/tutorial/download/comments/additional-instructions-to-compile-htk-on-ubuntu-8-hardy

Make sure you have gcc and make:
$ gcc -v
$ make -v

If you don't have make:
$ sudo apt-get install make
If you don't have gcc:
$ sudo apt-get install build-essential
If `make all` fails, you may also need:
$ sudo apt-get install libx11-dev

You should then get the right result.

Thursday, March 19, 2009

mkoctfile problem on Cygwin

I tried to use mkoctfile to compile hello.cc, the example from the Octave and C/C++ post.

First, I got the error messages as follows,
==========================================================
$ mkoctfile hello.cc
/usr/lib/gcc/i686-pc-cygwin/4.3.2/../../../../i686-pc-cygwin/bin/ld: cannot find -loctave
collect2: ld returned 1 exit status
==========================================================
I found the solution:
Rename libctave.dll.a to liboctave.dll.a.

Then,
=========================================================
$ mkoctfile hello.cc
/usr/lib/gcc/i686-pc-cygwin/4.3.2/../../../../i686-pc-cygwin/bin/ld: cannot find -lreadline
collect2: ld returned 1 exit status
=========================================================
It seems I didn't have the readline library.
I tried to install it from the Cygwin setup program:
libs/libreadline5 4.3-5 GNU readline and history libraries
libs/libreadline6 5.2.13-11 GNU readline and history libraries
But this didn't solve my problem, so I installed the libraries manually by downloading the packages.

And then, I got error messages as follows,
=========================================================
$ mkoctfile hello.cc
/usr/lib/gcc/i686-pc-cygwin/4.3.2/../../../../i686-pc-cygwin/bin/ld: cannot find -lgfortranbegin
collect2: ld returned 1 exit status
=========================================================
Searching for fortranbegin on the official Cygwin site showed that I had to install the package devel/gcc4-fortran (the Fortran subpackage).

Finally, I solved my problem ^^

mkoctfile in Ubuntu

We need octave3.0-header first.
$ sudo apt-get install octave3.0-header

After the download and installation complete,
you can use mkoctfile.

Follow the steps there.
You can then compile hello.cc successfully and run it in Octave, e.g. as helloworld (1,2,3).

Install Octave on Ubuntu

Update your apt-get first. 
  1. $ sudo apt-get update
  2. $ sudo apt-get install octave
If you miss the first step, you may not find the octave package when you type the step 2 command.
After updating, enter the command of step 2.
The system will then ask you to agree to the download and upgrade process.
Press "y" and wait.

If you meet a problem, like 
ldconfig deferred processing taking place
dpkg: status database area is locked by another process
Just run step 2 again.

At last, I installed it successfully.

Thursday, March 12, 2009

Friday, March 6, 2009

Octave and C/C++

References for using Octave and C/C++; refer to these sites.

First, confirm whether the Dynamic Linking feature is on or off.
Type:
$ octave_config_info ("ENABLE_DYNAMIC_LINKING")
If it prints
ans = true
that means it is ON.

Because Octave needs the gcc compiler, I had to switch development environments; at least I don't know how to do this with the Visual C++ compiler.

So I switched to Cygwin and Ubuntu. On Cygwin I was missing some packages, so it took some effort; please refer to HERE.
On Ubuntu, everything is OK once Octave and octave-header are installed; please refer to HERE1 and HERE2.

The program needs to include the Octave headers, and the compiler needs the correct Octave library paths; usually make is used to carry out the compilation. Download the sample Makefile.
Its content is roughly the same as the examples in the links above.
makefile:
all: test
clean:
	-rm test.o test
test: test.o
	mkoctfile --link-stand-alone -o test hello.o
test.o: hello.cpp
	g++ -c -I /usr/include/octave-3.0.3/ -o hello.o hello.cpp
Type make all, and the actions specified in the Makefile will be carried out.

Wednesday, March 4, 2009

The method of solving least-squares problems

Linear least squares problems
  • Newton's Method
Non-Linear least squares problems
  • Gauss-Newton Method
  • Quasi-Newton Method
  • Gauss-Raphson Method
  • Levenberg-Marquardt Method
  • Powell's Dog Leg Method
  • Hybrid Method: L-M and Quasi-Newton
Good references:
  • 1999, Poul E. Frandsen, http://www2.imm.dtu.dk/documents/ftp/publlec/lec2_99.pdf
Descent method
Conjugate Gradient method
Newton-based method, included Quasi-Newton
  • 1999, Hans B. Nielsen: http://www2.imm.dtu.dk/documents/ftp/tr99/tr05_99.pdf
Descent method
Gauss-Newton method
Levenberg-Marquardt method
Powell's Dog Leg method
Hybrid: LMA+QN
Newton's Method, NM
Also called the Newton-Raphson method. It uses the first few terms of the Taylor series (Taylor expansion) of a function in the vicinity of a suspected root.
  • http://mathworld.wolfram.com/NewtonsMethod.html
  • http://en.wikipedia.org/wiki/Newton's_method

Relationship between Newton's method, Halley method, and Householder's method.
  • Newton's method is 1st in the class of Householder's method.
  • Halley's method is 2nd in the class of Householder's method.
  • http://en.wikipedia.org/wiki/Householder's_method
  • http://en.wikipedia.org/wiki/Halley's_method

  • Levenberg-Marquardt Algorithm, LMA
  • http://en.wikipedia.org/wiki/Levenberg-Marquardt_algorithm
  • http://mathworld.wolfram.com/Levenberg-MarquardtMethod.html
  • http://homepages.inf.ed.ac.uk/cgi/rbf/CVONLINE/entries.pl?TAG49
  • Nov. 1996, Sam Roweis: http://www.cs.toronto.edu/~roweis/notes/lm.pdf
  • A good introduction discussing how LM grew out of Newton's method, was modified by Levenberg, and then modified by Marquardt to become the LM algorithm.
1999, Hans B. Nielsen: http://www2.imm.dtu.dk/documents/ftp/tr99/tr05_99.pdf
Apr. 2004, Hans B. Nielsen: http://www.imm.dtu.dk/pubdb/views/edoc_download.php/3215/pdf/imm3215.pdf
  • The above two articles are referred to by LEVMAR.
June 2004, Ananth R.: http://www.cc.gatech.edu/~ananth/docs/lmtut.pdf
  • Another good article tracing LM from Newton, Levenberg, and Marquardt to the LMA.
Feb. 2005, Manolis L.: http://www.ics.forth.gr/~lourakis/levmar/levmar.pdf
  • The article describing the source code LEVMAR; pseudocode is provided.
Nov. 2006, Pradit M.: http://cobweb.ecn.purdue.edu/~kak/courses-i-teach/ECE661.08/homework/HW5_LM_handout.pdf
  • Simple example and Matlab code provided.
http://www.cse.ucsd.edu/classes/fa04/cse252c/vrabaud1.pdf
  • Matlab code; thanks to Pradit Mittrapiyanuruk.

Gauss-Newton Method, GNM
  • http://en.wikipedia.org/wiki/Gauss-Newton_algorithm
  • http://reference.wolfram.com/mathematica/tutorial/UnconstrainedOptimizationGaussNewtonMethods.html
  • 1999, Hans B. Nielsen: http://www2.imm.dtu.dk/documents/ftp/tr99/tr05_99.pdf
Gauss-Raphson Method, GRM
Continue..
Quasi-Newton Method, QNM
  • http://reference.wolfram.com/mathematica/tutorial/UnconstrainedOptimizationQuasiNewtonMethods.html
  • 1999, Poul E. Frandsen, http://www2.imm.dtu.dk/documents/ftp/publlec/lec2_99.pdf
Powell's Dog Leg Method, PDLM
  • 1999, Hans B. Nielsen: http://www2.imm.dtu.dk/documents/ftp/tr99/tr05_99.pdf
Hybrid Method: L-M and Quasi-Newton, H:LM+QN
  • 1999, Hans B. Nielsen: http://www2.imm.dtu.dk/documents/ftp/tr99/tr05_99.pdf
LMA for linear convergence
QN for superlinear convergence
Papers:
  • Yao Jianchao, Chia Tien Chern, COMPARISON OF NEWTON-GAUSS WITH LEVENBERG-MARQUARDT ALGORITHM FOR SPACE RESECTION, Proc. ACRS 2001 - 22nd Asian Conference on Remote Sensing, 5-9 November 2001, Singapore. Vol. 1, pp. 256-261.
  • C. Kanzow, N. Yamashita and M. Fukushima, Levenberg–Marquardt methods with strong local convergence properties for solving nonlinear equations with convex constraints, J. Comput. Appl. Math.172 (2004), pp. 375–397.
  • Kanzow, C., Yamashita, N., and Fukushima, M. 2004. Levenberg-Marquardt methods with strong local convergence properties for solving nonlinear equations with convex constraints. J. Comput. Appl. Math. 172, 2 (Dec. 2004), 375-397. DOI= http://dx.doi.org/10.1016/j.cam.2004.02.013
  • Lourakis, M. I. and Argyros, A. A. 2005. Is Levenberg-Marquardt the Most Efficient Optimization Algorithm for Implementing Bundle Adjustment?. In Proceedings of the Tenth IEEE international Conference on Computer Vision - Volume 2 (October 17 - 20, 2005). ICCV. IEEE Computer Society, Washington, DC, 1526-1531. DOI= http://dx.doi.org/10.1109/ICCV.2005.128

Source codes:
levmar: Levenberg-Marquardt nonlinear least squares algorithms in C/C++
lmfit — a C/C++ routine for Levenberg-Marquardt minimization with wrapper for least-squares curve fitting
lmfit is easy to install because it has no dependency problems.
Levenberg-Marquardt for Visual C++ 2005
Levenberg-Marquardt algorithm for multivariate optimization in C#/C++/Delphi/VB6/Zonnon

Continue....

Tuesday, March 3, 2009

Pointer in C

A simple example of using pointers in C.

We declare a pointer in the main function and want to pass it to a subroutine that allocates the memory.
How to use it?
Solution 1: return the pointer.
=======================
#include <stdio.h>
#include <stdlib.h>

int array_size = 10;

float *fill_array(float *ptr)
{
    int i;
    printf("Address %p\n", (void *)&ptr);
    ptr = (float *) malloc(array_size * sizeof(float));

    for (i = 0; i < array_size; i++)
    {
        ptr[i] = (float)i;
    }

    for (i = 0; i < array_size; i++)
    {
        printf("In function\n");
        printf("%p - %f ", (void *)&ptr[i], ptr[i]);
    }
    printf("\n");
    printf("ptr = %p\n", (void *)&ptr);
    return ptr;
}

int main(void)
{
    float *ptr = NULL;
    int i;
    printf("Address %p\n", (void *)&ptr);
    ptr = fill_array(ptr);

    printf("Out function\n");
    printf("Address %p\n", (void *)&ptr);
    for (i = 0; i < array_size; i++)
    {
        printf("Out function\n");
        printf("%p - %f ", (void *)&ptr[i], ptr[i]);
    }
    return 0;
}
=======================
The result will be similar to the following (exact addresses will differ):
-------------------------------------
Address 1245052 //ptr in main( )
Address 1244968 //ptr in fill_array( )
In function
4397824 - 0.000000 In function
4397828 - 1.000000 In function
4397832 - 2.000000 In function
4397836 - 3.000000 In function
4397840 - 4.000000 In function
4397844 - 5.000000 In function
4397848 - 6.000000 In function
4397852 - 7.000000 In function
4397856 - 8.000000 In function
4397860 - 9.000000
ptr = 1244968  //ptr in fill_array( )
Out function
Address 1245052
Out function
4397824 - 0.000000 Out function
4397828 - 1.000000 Out function
4397832 - 2.000000 Out function
4397836 - 3.000000 Out function
4397840 - 4.000000 Out function
4397844 - 5.000000 Out function
4397848 - 6.000000 Out function
4397852 - 7.000000 Out function
4397856 - 8.000000 Out function
4397860 - 9.000000 Press any key to continue
-------------------------------------

Solution 2: call by address (pass the pointer's address).
=======================
#include <stdio.h>
#include <stdlib.h>

void function(float **ptrf)
{
    int i;
    printf("address of *ptrf = %p\n", (void *)ptrf);
    *ptrf = (float *) malloc(5 * sizeof(float));
    printf("address of *ptrf = %p\n", (void *)ptrf);
    for (i = 0; i < 5; i++)
    {
        (*ptrf)[i] = i;
    }
}

int main(void)
{
    float *ptr = NULL;
    int i;
    printf("address of *ptr = %p\n", (void *)&ptr);
    function(&ptr);

    for (i = 0; i < 5; i++)
    {
        printf("%f ", ptr[i]);
    }
    printf("\nAddress pointed to by ptr outside the function\nptr=%p, &ptr=%p\n", (void *)ptr, (void *)&ptr);
    return 0;
}
=======================
The result will be similar to the following (exact addresses will differ):
---------------------------------
address of *ptr = 1245052
address of *ptrf = 1245052
address of *ptrf = 1245052
0.000000 1.000000 2.000000 3.000000 4.000000
Address pointed to by ptr outside the function
ptr=4397856, &ptr=1245052
Press any key to continue
---------------------------------

Simple Matlab demo of LM algorithm

Special thanks to Pradit Mittrapiyanuruk.
His open article gives a very good, simple demo of the Levenberg-Marquardt Algorithm.

The code is almost the same as the code provided at http://www.cse.ucsd.edu/classes/fa04/cse252c/vrabaud1.pdf
The red statements are simply for plotting; they show the result at iteration = 1, 2, 3, 5, 10, and 100.
=================================================================
%definition, giving a function f and create jacobian function for function
%f
syms a b x y real;
f=( a * cos(b*x) + b * sin(a*x))
d=y-f;
Jsym=jacobian(d,[a b]);
%Generate the synthetic data from the curve function with some additional
%noise
a=100;
b=102;
x=[0:0.1:2*pi]';
y = a * cos(b*x) + b * sin(a*x);
% add random noise
y_input = y + 5*rand(length(x),1);

%start the LMA
% initial guess for the parameters
a0=100.5; b0=102.5;
y_init = a0 * cos(b0*x) + b0 * sin(a0*x);
Ndata=length(y_input);
Nparams=2; % a and b are the parameters to be estimated
n_iters=100; % set # of iterations for the LM
lamda=0.01; % set an initial value of the damping factor for the LM
updateJ=1;
a_est=a0;
b_est=b0;
for it=1:n_iters
if updateJ==1
% Evaluate the Jacobian matrix at the current parameters (a_est, b_est)
J=zeros(Ndata,Nparams);
for i=1:length(x)
J(i,:)=[-cos(b_est*x(i))-(b_est*cos(a_est*x(i))*x(i)) (a_est*sin(b_est*x(i))*x(i))-sin(a_est*x(i))];
end
% Evaluate the distance error at the current parameters
y_est = a_est * cos(b_est*x) + b_est * sin(a_est*x);
d=y_input-y_est;
% compute the approximated Hessian matrix, J’ is the transpose of J
H=J'*J;
if it==1 % the first iteration : compute the total error
e=dot(d,d);
end
end

% Apply the damping factor to the Hessian matrix
H_lm=H+(lamda*eye(Nparams,Nparams));
% Compute the updated parameters
dp=-inv(H_lm)*(J'*d(:));
a_lm=a_est+dp(1);
b_lm=b_est+dp(2);
% Evaluate the total distance error at the updated parameters
y_est_lm = a_lm * cos(b_lm*x) + b_lm * sin(a_lm*x);
% plot
if it == 1
h1=plot(y_input, 'b');
hold on
h1=plot(y_est_lm, 'r');
legend(h1,'iteration=1');
hold off
pause
end
if it == 2
h1=plot(y_input, 'b');
hold on
h1=plot(y_est_lm, 'r');
legend(h1,'iteration=2');
hold off
pause
end
if it == 3
h1=plot(y_input, 'b');
hold on
h1=plot(y_est_lm, 'r');
legend(h1,'iteration=3');
hold off
pause
end
if it == 5
h1=plot(y_input, 'b');
hold on
h1=plot(y_est_lm, 'r');
legend(h1,'iteration=5');
hold off
pause
end
if it == 10
h2=plot(y_input, 'b');
hold on
h2=plot(y_est_lm, 'r');
legend(h2,'iteration=10');
hold off
pause
end
if it == 100
h3=plot(y_input, 'b');
hold on
h3=plot(y_est_lm, 'r');
legend(h3,'iteration=100');
hold off
pause
end
d_lm=y_input-y_est_lm;
e_lm=dot(d_lm,d_lm);
% If the total distance error of the updated parameters is less than the previous one
% then makes the updated parameters to be the current parameters
% and decreases the value of the damping factor
if e_lm < e
lamda=lamda/10;
a_est=a_lm;
b_est=b_lm;
e=e_lm;
disp(e);
updateJ=1;
else % otherwise increases the value of the damping factor
updateJ=0;
lamda=lamda*10;
end
end
=================================================================
Code download: lmademo.m
You will see the red line fit closer to the blue line after a few iterations.




