Here is my code:

```python
# Model parameters: W and b
# tf.reset_default_graph()
W = tf.get_variable("weight", shape=[784, 10], dtype=tf.float32)
b = tf.get_variable("b", shape=(784, 10), dtype=tf.float32)

input_X = tf.placeholder('float32', shape=(None, 10))
input_y = tf.placeholder('float32', [784, 10])

logits = W * input_X + b
probas = tf.nn.softmax(logits)
classes = tf.argmax(probas)

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=input_y, logits=logits))
```

[…]
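For reference, the shape bookkeeping this kind of linear model has to satisfy can be checked with plain NumPy. This is a sketch with assumed MNIST-style dimensions (784 flattened pixels, 10 classes), not the assignment's actual code: a weight matrix mapping 784 features to 10 classes pairs with a length-10 bias vector, and the combination is a matrix multiply rather than an elementwise `*`.

```python
import numpy as np

# Hypothetical MNIST-style dimensions: 784 input pixels, 10 classes.
batch = 32
X = np.zeros((batch, 784), dtype=np.float32)  # a batch of flattened images
W = np.zeros((784, 10), dtype=np.float32)     # weights: one column per class
b = np.zeros((10,), dtype=np.float32)         # one bias per class, not (784, 10)

# Matrix product maps (batch, 784) x (784, 10) -> (batch, 10);
# the (10,) bias then broadcasts across the batch dimension.
logits = X @ W + b
print(logits.shape)  # (32, 10): one score per class per sample
```

The same reasoning fixes the placeholder declarations: the input placeholder should have one row per sample and 784 columns, and the label placeholder one row per sample and 10 columns.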


## TensorFlow with batch size and wrong dimension

- Post author: Full Stack
- Post date: November 19, 2020
- No comments on TensorFlow with batch size and wrong dimension

The rest of the training code:

```python
step = tf.train.AdamOptimizer(loss)
s.run(tf.global_variables_initializer())

BATCH_SIZE = 512
EPOCHS = 40

# for logging
simpleTrainingCurves = ...  # plotting helper; its constructor (..., "accuracy") was lost in the scrape

for epoch in range(EPOCHS):  # we finish an epoch when we've looked at all training samples
    batch_losses = []
    for batch_start in range(0, X_train_flat.shape[0], BATCH_SIZE):  # data is already shuffled
        _, batch_loss = s.run([step, loss],
                              {input_X: X_train_flat[batch_start:batch_start + BATCH_SIZE],
                               input_y: y_train_oh[batch_start:batch_start + BATCH_SIZE]})
        # collect batch losses, this is almost free as we need a forward pass for backprop anyway
        batch_losses.append(batch_loss)
    train_loss = np.mean(batch_losses)
    val_loss = s.run(loss, {input_X: X_val_flat, input_y: y_val_oh})  # this part is usually small
    train_accuracy = accuracy_score(y_train, s.run(classes, {input_X: X_train_flat}))  # this is slow and usually skipped
    valid_accuracy = accuracy_score(y_val, s.run(classes, {input_X: X_val_flat}))
    simpleTrainingCurves.add(train_loss, val_loss, train_accuracy, valid_accuracy)
```

The error is:

```
/opt/conda/lib/python3.6/site-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
    973                 'Cannot feed value of shape %r for Tensor %r, '
    974                 'which has shape %r'
--> 975                 % (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
    976       if not self.graph.is_feedable(subfeed_t):
    977         raise ValueError('Tensor %s may not be fed.' % subfeed_t)

ValueError: Cannot feed value of shape (…, 784) for Tensor 'Placeholder_2:0', which has shape '(?, …)'
```

I am new at TensorFlow, and Coursera is what I am learning. Please help me.