I want to apply a genetic algorithm to this program: I need the same output, but produced by a genetic algorithm. Can someone help me?

package ticketsOrders;

import java.util.ArrayList;
import java.util.List;

public class randompermutationsofthegenes {
    public static void main(String[] args) {
        List<Integer> all_genes = new ArrayList<Integer>();
        all_genes.add(1);
        all_genes.add(2);
        all_genes.add(3);
        all_genes.add(4);
        all_genes.add(5);
        all_genes.add(6);
        all_genes.add(7); // initial input of all tickets
        List<Integer> temp_gene = new ArrayList<Integer>(all_genes);
        System.out.println("initial input of tickets=" + temp_gene);
        for (int i = 0; i < 3; i++) { // here we use…
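The program above only shuffles the ticket list; a genetic algorithm instead evolves a population of permutations toward a fitness goal. A minimal sketch of that structure (selection, order crossover, swap mutation) is below. The fitness function here, matching a hypothetical target ordering, is purely an assumption for illustration; the real objective depends on what the ticket order is supposed to optimize, and the same loop translates directly to Java.

```python
import random

def fitness(perm, target):
    # Count tickets already in their target position (higher is better).
    # The target ordering is an assumption standing in for the real objective.
    return sum(1 for a, b in zip(perm, target) if a == b)

def order_crossover(p1, p2):
    # Order crossover (OX): copy a random slice from parent 1,
    # then fill the remaining positions in parent 2's order.
    n = len(p1)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j] = p1[i:j]
    rest = [g for g in p2 if g not in child]
    k = 0
    for idx in range(n):
        if child[idx] is None:
            child[idx] = rest[k]
            k += 1
    return child

def swap_mutation(perm, rate=0.2):
    # With probability `rate`, swap two random positions.
    perm = perm[:]
    if random.random() < rate:
        a, b = random.sample(range(len(perm)), 2)
        perm[a], perm[b] = perm[b], perm[a]
    return perm

def evolve(genes, target, pop_size=30, generations=200):
    pop = [random.sample(genes, len(genes)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, target), reverse=True)
        if fitness(pop[0], target) == len(genes):
            break  # perfect ordering found
        # Keep the best half, refill by crossover + mutation of random survivors.
        survivors = pop[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            children.append(swap_mutation(order_crossover(p1, p2)))
        pop = survivors + children
    return max(pop, key=lambda p: fitness(p, target))

random.seed(1)
tickets = [1, 2, 3, 4, 5, 6, 7]
target = [3, 1, 4, 7, 2, 6, 5]  # hypothetical desired output order
best = evolve(tickets, target)
print("best permutation:", best)
```

Every individual stays a valid permutation because OX and swap mutation both preserve the gene set, which is the key constraint when chromosomes encode an ordering rather than independent values.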

Are CNN, LSTM, GRU and transformer AGI or computational intelligence tools?

Would CNN, LSTM, GRU, and transformer models be better classified as Computational Intelligence (CI) tools or Artificial General Intelligence (AGI) tools? The term CI arose back when techniques such as neural networks, GA, and PSO were seen as doing almost magical things. These days CI tools no longer appear very magical, and researchers want their systems to exhibit AGI. Do the…

What are some approaches to handle uncertainty in transition and observation probability in POMDP?

What are some common approaches to estimating a (transition or observation) probability when the probabilities are not exactly known? When building a POMDP model, the state model needs additional information in the form of transition and observation probabilities. Often these probabilities are unknown, and a uniform distribution cannot be assumed either. How can we proceed?
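One simple approach, when logged experience is available, is a maximum-likelihood estimate from transition counts with additive (Laplace) smoothing, so that unobserved transitions still receive nonzero probability; the same formula is the posterior mean under a Dirichlet prior, which gives it a Bayesian reading. A minimal sketch (the state names and the logged transitions are made up for illustration):

```python
from collections import Counter, defaultdict

def estimate_transitions(transitions, states, alpha=1.0):
    """Estimate P(s' | s) from observed (s, s') pairs with additive
    (Laplace) smoothing: each count gets +alpha, so unseen transitions
    receive small nonzero probability instead of zero. Equivalently,
    the mean of a Dirichlet(alpha) posterior over each row."""
    counts = defaultdict(Counter)
    for s, s_next in transitions:
        counts[s][s_next] += 1
    probs = {}
    for s in states:
        total = sum(counts[s].values()) + alpha * len(states)
        probs[s] = {s2: (counts[s][s2] + alpha) / total for s2 in states}
    return probs

# Hypothetical logged experience for a two-state model.
log = [("A", "A"), ("A", "B"), ("A", "B"), ("B", "A")]
P = estimate_transitions(log, states=["A", "B"])
print(P["A"])  # {'A': 0.4, 'B': 0.6}
```

Smaller `alpha` trusts the data more; larger `alpha` pulls each row toward uniform, which is one principled way to encode "we do not know these probabilities" without assuming an exactly equal distribution.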

Can dropout layers have no influence on LSTM training at all?

I am working on a project that requires time-series prediction (regression), and I use an LSTM network with a first 1D convolutional layer in Keras/TF-gpu, as follows:

model = Sequential()
model.add(Conv1D(filters=60, activation='relu', input_shape=(x_train.shape[1], len(features_used)), kernel_size=5, padding='causal', strides=1))
model.add(CuDNNLSTM(units=128, return_sequences=True))
model.add(CuDNNLSTM(units=128))
model.add(Dense(units=1))

As a result, my model is clearly overfitting. So I decided to add dropout layers, first…
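One thing worth checking in a setup like the one above: CuDNNLSTM, unlike the plain LSTM layer, does not accept the `dropout`/`recurrent_dropout` arguments, so regularization has to come from separate Dropout layers inserted between the recurrent layers, and those are only active during training (they are the identity at inference). A minimal pure-Python sketch of what such a (inverted) dropout layer actually does to activations; the values and rate are arbitrary for illustration:

```python
import random

def inverted_dropout(activations, rate, training):
    """Inverted dropout: during training, zero each unit with probability
    `rate` and scale survivors by 1/(1-rate) so the expected activation
    is unchanged; at inference time it is the identity."""
    if not training or rate == 0.0:
        return activations[:]
    keep = 1.0 - rate
    return [a / keep if random.random() < keep else 0.0 for a in activations]

random.seed(0)
acts = [1.0] * 10
dropped = inverted_dropout(acts, rate=0.5, training=True)
print(dropped)  # roughly half the units zeroed, survivors scaled to 2.0

# At inference, dropout must not change the activations at all.
assert inverted_dropout(acts, rate=0.5, training=False) == acts
```

If a dropout layer genuinely changes nothing during training, the usual suspects are a rate of 0, the layer being placed where it has no effect, or evaluating metrics in inference mode where dropout is disabled by design.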
