How to pass input to placeholders?
import tensorflow as tf
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import pandas as pd
import pylab as pl
import numpy as np

plt.rcParams['figure.figsize'] = (20, 6)

df1 = pd.read_csv("TrainData.csv")
df2 = pd.read_csv("TestData.csv")
train_data_X = np.asanyarray(df1['ENGINE SIZE'])
train_data_Y = np.asanyarray(df1['CO2 EMISSIONS'])
test_data_X = np.asanyarray(df2['ENGINE SIZE'])
test_data_Y = np.asanyarray(df2['CO2 EMISSIONS'])

W = tf.Variable(20.0, name='Weight')
b = tf.Variable(30.0, name='Bias')
X = tf.placeholder(tf.float32, name='Input')
Y = tf.placeholder(tf.float32, name='Output')
Y = W*X + b
loss = tf.reduce_mean(tf.square(Y - train_data_Y))
optimizer = tf.train.GradientDescentOptimizer(0.05)
train = optimizer.minimize(loss)

init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)

loss_values = []
train_data = []
for step in range(100):
    _, loss_val, a_val, b_val = sess.run([train, loss, W, b], feed_dict={X: train_data_X, Y: train_data_Y})
    loss_values.append(loss_val)
    if step % 5 == 0:
        print(step, loss_val, a_val, b_val)
        train_data.append([a_val, b_val])

plt.plot(loss_values, 'ro')
plt.show()
I am trying to build a linear regression model that predicts CO2 emissions from engine size, using the above code in TensorFlow.
1) When I run this code, the Weight and Bias remain unchanged. What is the problem in the code?
2) Also, if I want both engine size and mileage as inputs, what code changes should be made?
Thanks in advance
python tensorflow machine-learning linear-regression
asked Nov 24 '18 at 17:33 – Skadna Naglapur Ramamurthy
"Weight and Bias remain the same": so the loss is also constant?
– Matthieu Brucher
Nov 24 '18 at 18:14
1 Answer
There were a few mistakes in the code, mentioned below:

- You were overwriting the placeholder with Y = W*X + b, while the same Y placeholder was later used to feed data (feed_dict={X:train_data_X, Y:train_data_Y}). The prediction should be a separate tensor (not the placeholder you feed data into), and the loss should be computed from it. See prediction = W*X + b in the code below, and the minimal sketch right after this list.
- You were passing the complete data to feed_dict at once (feed_dict={X:train_data_X, Y:train_data_Y}). Here you need to pass a single data value at a time (feed_dict={X:x, Y:y}).
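To isolate the core fix, here is a minimal sketch of the corrected pattern (the values fed at the end are made up purely for illustration):

import tensorflow as tf

# Placeholders receive data; the prediction is a separate tensor built from the variables
X = tf.placeholder(tf.float32, name='Input')
Y = tf.placeholder(tf.float32, name='Output')
W = tf.Variable(20.0, name='Weight')
b = tf.Variable(30.0, name='Bias')
prediction = W*X + b
loss = tf.reduce_mean(tf.square(prediction - Y))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # plain Python/NumPy scalars are fed into the placeholders
    print(sess.run(loss, feed_dict={X: 2.0, Y: 250.0}))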
The full code with the needed corrections is below and should work.
import tensorflow as tf
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import pandas as pd
import pylab as pl
import numpy as np

plt.rcParams['figure.figsize'] = (20, 6)

# Load the training and test data
df1 = pd.read_csv("TrainData.csv")
df2 = pd.read_csv("TestData.csv")
train_data_X = np.asanyarray(df1['ENGINE SIZE'])
train_data_Y = np.asanyarray(df1['CO2 EMISSIONS'])
test_data_X = np.asanyarray(df2['ENGINE SIZE'])
test_data_Y = np.asanyarray(df2['CO2 EMISSIONS'])

# Model parameters and input placeholders
W = tf.Variable(20.0, name='Weight')
b = tf.Variable(30.0, name='Bias')
X = tf.placeholder(tf.float32, name='Input')
Y = tf.placeholder(tf.float32, name='Output')

# The prediction is its own tensor; Y stays a placeholder for the targets
prediction = W*X + b
loss = tf.reduce_mean(tf.square(prediction - Y))
optimizer = tf.train.GradientDescentOptimizer(0.05)
train = optimizer.minimize(loss)

loss_values = []
train_data = []
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    for step in range(100):
        # Feed one (x, y) pair at a time
        for (x, y) in zip(train_data_X, train_data_Y):
            _, loss_val, a_val, b_val = sess.run([train, loss, W, b], feed_dict={X: x, Y: y})
        loss_values.append(loss_val)
        if step % 5 == 0:
            print(step, loss_val, a_val, b_val)
            train_data.append([a_val, b_val])

plt.plot(loss_values, 'ro')
plt.show()
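Once training finishes, you can also check the fit on the test set you already load; a quick sketch using the final a_val and b_val (plain NumPy, so it runs after the session has closed):

test_predictions = a_val * test_data_X + b_val              # y = W*x + b with the learned values
test_mse = np.mean((test_predictions - test_data_Y) ** 2)   # mean squared error on the test data
print("Test MSE:", test_mse)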
Note: because of the incorrect loss formulation, your loss kept increasing with every step.
I have suggested a loss function below which might work better for your data. I am not sure what your data looks like, but you can give it a try and let me know if it worked.
n_samples = train_data_X.shape[0]
loss = tf.reduce_sum(tf.pow(prediction - Y, 2)) / (2 * n_samples)
Response to your second query: assuming your data has a column named MILEAGE, you can change train_data_X and test_data_X as below. The rest of the code remains the same as above.
train_data_X = np.asanyarray(df1[['ENGINE SIZE','MILEAGE']])
train_data_Y = np.asanyarray(df1['CO2 EMISSIONS'])
test_data_X = np.asanyarray(df2[['ENGINE SIZE','MILEAGE']])
test_data_Y = np.asanyarray(df2['CO2 EMISSIONS'])
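One caveat from my side (an assumption, not part of the original answer): with two input columns a single scalar weight no longer matches the input shape, so the model line also needs a small change. A sketch of a two-feature variant, assuming the same column names:

X = tf.placeholder(tf.float32, shape=[2], name='Input')   # one row: [engine size, mileage]
Y = tf.placeholder(tf.float32, name='Output')
W = tf.Variable(tf.zeros([2]), name='Weight')              # one weight per feature
b = tf.Variable(30.0, name='Bias')
prediction = tf.reduce_sum(W * X) + b                      # dot product of weights and features, plus bias

# inside the training loop, feed one row of train_data_X (shape (2,)) at a time:
# sess.run(train, feed_dict={X: x_row, Y: y})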
answered Nov 24 '18 at 18:48, edited Nov 25 '18 at 0:30 – Vineet Tripathi
Thank you. It worked for me. But how is the loss function you suggested different from the one I had already used?
– Skadna Naglapur Ramamurthy, Nov 26 '18 at 9:31
I experimented with the loss function on my own data (which in my case was the IRIS flower data). It is possible that my loss function doesn't work for you; it totally depends on the data.
– Vineet Tripathi, Nov 26 '18 at 19:14