ELE 888 Lab 1, Part 1
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% ELE 888 / EE 8209: LAB 1: Bayesian Decision Theory
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

function [posteriors_x, g_x] = lab1(x, Training_Data, x2)
% x             = individual sample to be tested (to identify its probable class label)
% Training_Data = matrix containing the training samples and numeric class labels
% posteriors_x  = posterior probabilities
% g_x           = value of the discriminant function

D = Training_Data;        % D is MxN (M samples, N columns = N-1 features + 1 label)
[M, N] = size(D);
feature = 2;              % feature 2 for x2 (Sepal Width), feature 1 for x1 (Sepal Length)
f  = D(:, feature);       % feature samples
la = D(:, N);             % class labels

%% Prior probabilities
% Hint: use the commands "find" and "length"
class1 = la == 1;         % Iris Setosa
class2 = la == 2;         % Iris Versicolour

disp('Prior probabilities:');
Pr1 = sum(class1) / length(la)
Pr2 = sum(class2) / length(la)

%% Class-conditional probabilities
disp('Mean & Std for class 1 & 2');
m11   = mean(f(class1))   % mean of the class-conditional density p(x|w1)
std11 = std(f(class1))    % standard deviation of p(x|w1)
m12   = mean(f(class2))   % mean of the class-conditional density p(x|w2)
std12 = std(f(class2))    % standard deviation of p(x|w2)

disp(['Conditional probabilities for x = ' num2str(x)]);
cp11 = 1/(sqrt(2*pi)*std11) * exp(-0.5*((x - m11)/std11)^2)   % p(x|w1)
cp12 = 1/(sqrt(2*pi)*std12) * exp(-0.5*((x - m12)/std12)^2)   % p(x|w2)

%% Posterior probabilities
disp('Posterior prob. for the test feature');
Px    = cp11*Pr1 + cp12*Pr2;    % evidence p(x)
pos11 = cp11*Pr1 / Px           % p(w1|x) for the given test feature value
pos12 = cp12*Pr2 / Px           % p(w2|x) for the given test feature value
posteriors_x = [pos11, pos12];

% plot(f*pos11, f*Px);
% figure,
% plot(f*pos12, f*Px);

%% Discriminant function for the minimum-error-rate classifier
disp('Discriminant function for the test feature');
[C, I] = max(posteriors_x);     % min-error rule: choose the class with the larger posterior
g_x = I

%% Finding the threshold
thres = sym('thres');
eq = 1/(sqrt(2*pi)*std11)*exp(-0.5*((thres - m11)/std11)^2) * Pr1 == ...
     1/(sqrt(2*pi)*std12)*exp(-0.5*((thres - m12)/std12)^2) * Pr2;
thres = double(solve(eq, thres));
Th1 = thres(thres > 0)
figure, plot(thres)

%% Unbalanced loss
% Theta = (0.8/0.9)*(Pr2/Pr1);
% thres = sym('thres');
% eq = 1/(sqrt(2*pi)*std11)*exp(-0.5*((thres - m11)/std11)^2) * Pr1 == ...
%      1/(sqrt(2*pi)*std12)*exp(-0.5*((thres - m12)/std12)^2) * Pr2;
% thres = double(solve(eq, thres));
% Th1 = thres(thres > Theta)

%% Bonus: bivariate Gaussian
% f  = D(:, 1:2);           % two feature columns
% la = D(:, N);             % class labels
% [M, N] = size(la);
% sigma = cov(f);           % covariance matrix
%
% class1 = la == 1;
% class2 = la == 2;
%
% disp('Mean & Std for class 1 & 2');
% mu1 = mean(f(class1, :))  % mean vector of p(x|w1)
% mu2 = mean(f(class2, :))  % mean vector of p(x|w2)
%
% lsigmal = det(sigma)
% isigma  = inv(sigma);
%
% % feature  = y2(1);
% % feature2 = y2(2);
%
% disp(['Conditional probabilities for x = ' num2str(x)]);
% cp11 = 1/((2*pi)^(length(sigma)/2) * lsigmal^0.5) * exp(-0.5 * transpose(feature - mu1)  * isigma * (feature - mu1))    % p(x|w1)
% cp21 = 1/((2*pi)^(length(sigma)/2) * lsigmal^0.5) * exp(-0.5 * transpose(feature2 - mu2) * isigma * (feature2 - mu2))
% cp12 = 1/((2*pi)^(length(sigma)/2) * lsigmal^0.5) * exp(-0.5 * transpose(feature - mu2)  * isigma * (feature - mu2))    % p(x|w2)
% cp22 = 1/((2*pi)^(length(sigma)/2) * lsigmal^0.5) * exp(-0.5 * transpose(feature2 - mu1) * isigma * (feature2 - mu1))
%
% disp('Posterior prob. for the test feature');
% Px  = cp11*Pr1 + cp12*Pr2;
% Px2 = cp21*Pr1 + cp22*Pr2;
% pos11 = cp11*Pr1/Px       % p(w1|x) for the given test feature value
% pos21 = cp21*Pr1/Px2
% pos12 = cp12*Pr2/Px       % p(w2|x) for the given test feature value
% pos22 = cp22*Pr2/Px2
% posteriors_x = [pos11, pos12, pos21, pos22];
%
% plot3(pos11, pos12, pos21); hold on
% figure;
% plot3(pos21, pos22);
%
% disp('Discriminant function for the test feature');
% [C, I] = min(posteriors_x); g_x = I
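As a cross-check on the pipeline above (Gaussian class-conditional densities, posteriors via Bayes' rule, and the decision threshold where the prior-weighted densities cross), the same computation can be sketched in NumPy without the Symbolic Math Toolbox: taking logs of the equality p(x|w1)P(w1) = p(x|w2)P(w2) turns it into a quadratic in x. The means, standard deviations, and priors used here are illustrative placeholders, not the Iris statistics the lab computes.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    # Univariate normal density N(x; mu, sigma^2)
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)

def posteriors(x, mu1, s1, mu2, s2, pr1, pr2):
    # Bayes' rule: p(w_i | x) = p(x | w_i) P(w_i) / p(x)
    cp1 = gaussian_pdf(x, mu1, s1)
    cp2 = gaussian_pdf(x, mu2, s2)
    px = cp1 * pr1 + cp2 * pr2          # evidence p(x)
    return cp1 * pr1 / px, cp2 * pr2 / px

def threshold(mu1, s1, mu2, s2, pr1, pr2):
    # Solve p(x|w1)P(w1) = p(x|w2)P(w2); taking logs gives
    # a*x^2 + b*x + c = 0 with the coefficients below.
    a = 1.0 / (2 * s2 ** 2) - 1.0 / (2 * s1 ** 2)
    b = mu1 / s1 ** 2 - mu2 / s2 ** 2
    c = (mu2 ** 2 / (2 * s2 ** 2) - mu1 ** 2 / (2 * s1 ** 2)
         + np.log((pr1 * s2) / (pr2 * s1)))
    return np.roots([a, b, c])          # real roots are the decision boundaries

# Placeholder numbers (not the lab's Iris values):
p1, p2 = posteriors(3.0, mu1=3.4, s1=0.4, mu2=2.8, s2=0.3, pr1=0.5, pr2=0.5)
assert abs(p1 + p2 - 1.0) < 1e-12       # posteriors must sum to 1
```

Unequal standard deviations give two crossing points, which is why the MATLAB script filters the `solve` output with `thres(thres > 0)`; with equal variances the quadratic degenerates to a single linear boundary.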