Machine learning Octave code gradient descent question

I'm taking the Coursera Machine Learning course, so anyone who has taken it should be able to help with this problem.
This is the Octave code to find the delta for gradient descent:
theta = theta - alpha / m * ((X * theta - y)' * X)';   % this is the answer key provided
First question: The way I understand gradient descent, theta(0) and theta(1) should each be updated with their own expression, along the lines of
theta(0) = theta(0) - alpha / m * ((X * theta(0) - y)')';   % my answer key
theta(1) = theta(1) - alpha / m * ((X * theta(1) - y)')';   % my answer key
but I'm not sure why the answer key shows only the single equation
theta = theta - alpha / m * ((X * theta - y)' * X)';
Second question: What does the ' do in the Octave code?
theta = theta - alpha / m * ((X * theta - y)' * X)';
That is, in '* X)', what does the ' do here?
machine-learning octave
asked May 25 '16 at 10:34 by james Miler
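(For reference, assuming the standard linear-regression cost $J(\theta) = \frac{1}{2m}\sum_{i=1}^{m}\big(h_\theta(x^{(i)}) - y^{(i)}\big)^2$ used in the course, the single vectorized line corresponds to the simultaneous update
$$\theta := \theta - \frac{\alpha}{m} X^{T}(X\theta - y),$$
which carries out the per-component updates $\theta_j := \theta_j - \frac{\alpha}{m}\sum_{i=1}^{m}\big(h_\theta(x^{(i)}) - y^{(i)}\big)\,x_j^{(i)}$ for all $j$ at once.)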
In Octave/MATLAB, $x = u+v*w$ can mean: $x, u, w$ are vectors and $v$ is a matrix, with $v*w$ the product of a matrix and a vector. The main idea of MATLAB is that the basic data types are arrays/matrices of numbers rather than individual integers and floating-point numbers.
– reuns
May 25 '16 at 10:40
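For example, a small sketch with made-up values, showing that plain Octave variables hold matrices and that * is matrix multiplication:
v = [1 2; 3 4];    % a 2x2 matrix
w = [5; 6];        % a 2x1 column vector
u = [1; 1];        % another 2x1 column vector
x = u + v * w;     % matrix-vector product plus a vector: x = [18; 40]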
In Octave, $X'$ corresponds to the transpose of the matrix (or the vector) $X$.
– zuggg
May 25 '16 at 11:36
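For example (made-up values):
X = [1 2 3; 4 5 6];   % a 2x3 matrix
X'                    % its transpose, the 3x2 matrix [1 4; 2 5; 3 6]
% Note: for complex matrices, ' is the conjugate transpose; .' is the plain transpose.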
Oh, OK, so X' means the transpose of X. Is there someone here who knows gradient descent? I do not understand why they used the transpose to find theta here.
– james Miler
May 26 '16 at 0:28
1 Answer
The transpose here is used to match the columns of X with the rows of theta.
For example, suppose the sizes are
X = 97x2, y = 97x1, theta = 2x1.
The first calculation is X * theta; the resulting matrix has size 97x1. Then y is subtracted, a difference of two matrices of the same size. Next, we have to multiply this result by X, but the sizes do not match:
(97x1) * (97x2).
Transposing the first matrix makes the multiplication possible: (1x97) * (97x2) gives a 1x2 row vector. But theta is a 2x1 column vector, hence the final transpose.
answered Mar 3 '17 at 20:28
– Bharani K Dharan
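As a sanity check, here is a small sketch with made-up data of the sizes used above (97x2, 97x1, 2x1), showing that the vectorized line produces the same result as updating each component of theta separately:
m = 97; n = 2;
X = [ones(m, 1), randn(m, 1)];          % 97x2 design matrix (first column of ones)
y = randn(m, 1);                        % 97x1 target vector
theta = zeros(n, 1);                    % 2x1 parameter vector
alpha = 0.01;

% Vectorized update (the answer key):
theta_vec = theta - alpha / m * ((X * theta - y)' * X)';

% Equivalent per-component update:
grad = zeros(n, 1);
for j = 1:n
  grad(j) = sum((X * theta - y) .* X(:, j)) / m;
end
theta_loop = theta - alpha * grad;

disp(max(abs(theta_vec - theta_loop)))  % prints (numerically) zero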