### Convert input dataset given in hex addresses to int

I have created an LSTM neural network which takes as input the following format in a .csv file: sinewave 0.841470985 0.873736397 0.90255357 0.927808777 0.949402346 0.967249058 0.98127848 0.991435244 How can I write some code so it can take hex addresses as input and convert them to int? E.g. the following .xlsx file containing 400,000 samples…
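A minimal sketch of the conversion step, assuming the addresses arrive as hex strings in the first column of a CSV (the file and column layout here are hypothetical; for an .xlsx file the same conversion would apply after reading it with, e.g., `pandas.read_excel`):

```python
# Sketch: convert a column of hex address strings to ints before feeding
# them to the network. File layout and column position are assumptions.
import csv

def hex_to_int(value: str) -> int:
    # int() with base 16 accepts both "0x1A2B" and "1A2B"
    return int(value, 16)

def load_hex_column(path: str) -> list:
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        return [hex_to_int(row[0]) for row in reader]

print(hex_to_int("0x1A2B"))  # 6699
```

Once converted, the ints can be normalized (e.g. min-max scaled) like any other numeric input before being fed to the LSTM.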

### How to represent and work with the feature matrix for graph convolutional network (GCN) if the number of features for each node is different?

I have a question regarding feature representation for graph convolutional neural networks. In my case, all nodes have a different number of features, and for now I don’t really understand how I should work with this constraint. I cannot just reduce the number of features or add meaningless features in order to make the…
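One common workaround, sketched below with made-up feature values, is to zero-pad every node's feature vector to the longest one, so the GCN sees a fixed-width feature matrix `X` of shape `(num_nodes, max_feats)`:

```python
# Sketch: zero-pad variable-length node feature vectors into one fixed-width
# feature matrix. The feature values are placeholders for illustration.
import numpy as np

node_features = [
    [0.2, 1.3],          # node 0: 2 features
    [0.5],               # node 1: 1 feature
    [0.1, 0.7, 2.4],     # node 2: 3 features
]

max_feats = max(len(f) for f in node_features)
X = np.zeros((len(node_features), max_feats))
for i, feats in enumerate(node_features):
    X[i, :len(feats)] = feats

print(X.shape)  # (3, 3)
```

An alternative, if zeros would be misleading for your features, is a small per-node-type encoder (e.g. a linear layer per feature set) that projects every node into a common embedding dimension first.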

### Are there ‘anticipative’ AI algorithms?

Can AI algorithms be anticipative (or maybe extrapolate results)? For example, as kids we are never expressly told not to jump from a great height, yet we don’t do so, probably from seeing how jumping from lower heights (a couch or a bed) can hurt the body. So far, as far…

### Tensorflow throwing out of bounds error with keras tokenizer

I am new to ML and TensorFlow and am trying to train and use a standard text-generation model. When I train the model I get this error:

```
Train for 155 steps
Epoch 1/5
  2/155 [..............................] - ETA: 4:49 - loss: 2.5786
---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
<ipython-input-133-d70c02ff4270> in <module>()
----> 1…
```
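One frequent cause of an out-of-bounds `InvalidArgumentError` in this setup (assuming the model has an `Embedding` layer sized from the tokenizer's vocabulary, which the truncated traceback doesn't confirm) is an off-by-one: the Keras `Tokenizer` assigns word indices starting at 1, so the largest index equals `len(word_index)`, and `Embedding(input_dim=len(word_index))` is one row short. A plain-Python illustration:

```python
# Sketch of the off-by-one: Keras' Tokenizer numbers words from 1, not 0,
# so an Embedding needs len(word_index) + 1 rows to cover every index.
word_index = {"the": 1, "cat": 2, "sat": 3}   # shape of tokenizer.word_index

vocab_size = len(word_index)          # 3 -> rows 0..2, but index 3 occurs
safe_input_dim = len(word_index) + 1  # 4 -> rows 0..3, covers every index

max_index = max(word_index.values())
assert max_index >= vocab_size        # why input_dim=len(word_index) fails
assert max_index < safe_input_dim    # why the +1 fixes it
```

So if the model was built with `input_dim=len(tokenizer.word_index)`, changing it to `len(tokenizer.word_index) + 1` is worth trying first.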

### Padding all non-square input image matrices dynamically in a training set

I have input images of dimensions (16, 8), which obviously aren’t square matrices. I am training a CNN on this dataset. I want to pad with zeros using tf.pad() so that all image matrices in my dataset become (16, 16), since it would be easier to keep track of the dimensions as these input matrices propagate forward through the convolution…
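A sketch of the padding itself, shown with `numpy.pad` for self-containment; `tf.pad` takes the same per-axis `(before, after)` paddings, so `paddings = [[0, 0], [4, 4]]` should carry over directly (symmetric padding assumed here, centering the image is a choice, not a requirement):

```python
# Sketch: pad 4 zero-columns on each side of axis 1: (16, 8) -> (16, 16).
# tf.pad(image, [[0, 0], [4, 4]]) uses the same (before, after) semantics.
import numpy as np

img = np.random.rand(16, 8)

padded = np.pad(img, pad_width=((0, 0), (4, 4)), mode="constant")

print(padded.shape)  # (16, 16)
```

For dynamic shapes inside a `tf.data` pipeline, the paddings can be computed from the tensor's runtime shape instead of hard-coding the 4s.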

### Improve prediction with LSTMs when data have no particular trend (complex)

I have a deep learning problem. I am working with the CMAPSS dataset, which contains data simulating the degradation of several aircraft engines. The aim is to predict, from data collected on a machine in full operation, the remaining useful life of that machine. My problem is the following: when the features (sensor data) have…
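For context, a common way to frame RUL prediction with an LSTM is to slice each engine's multivariate sensor series into fixed-length sliding windows with the RUL at the window's last cycle as the target. A sketch with synthetic placeholder data (the real CMAPSS labels and sensor counts will differ):

```python
# Sketch: build (samples, window, features) LSTM inputs from one engine's
# sensor series. Sensor data and linear RUL labels are synthetic here.
import numpy as np

def make_windows(series: np.ndarray, rul: np.ndarray, window: int):
    X, y = [], []
    for end in range(window, len(series) + 1):
        X.append(series[end - window:end])  # last `window` timesteps
        y.append(rul[end - 1])              # RUL at the window's last cycle
    return np.array(X), np.array(y)

n_cycles, n_sensors = 200, 14
series = np.random.rand(n_cycles, n_sensors)
rul = np.arange(n_cycles - 1, -1, -1)       # linearly decreasing RUL

X, y = make_windows(series, rul, window=30)
print(X.shape, y.shape)  # (171, 30, 14) (171,)
```

Each row of `X` then feeds an LSTM with input shape `(30, 14)` regressing onto `y`.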


### What are the current tools and techniques for image segmentation in order of pragmatism?

To explain what I mean, I’ll depict the two extremes and something in the middle.

1) Most pragmatic: if you need to segment just a few images for a design project, forget AI. Go into Adobe Photoshop and hand-select the outline of the object you need to extract.

2) Middle ground: if you need…

### Why is exp used in encoder of VAE instead of using the value of standard deviation alone?

There’s one VAE example here: https://towardsdatascience.com/teaching-a-variational-autoencoder-vae-to-draw-mnist-characters-978675c95776 And the source code of the encoder: https://gist.github.com/FelixMohr/29e1d5b1f3fd1b6374dfd3b68c2cdbac#file-vae-py The author uses exp (the natural exponential) when computing the values of the embedding vector, $z = \text{Mean} + \text{Random} \times e^{\text{StandardDeviation}}$, i.e. `z = mn + tf.multiply(epsilon, tf.exp(sd))`. It’s not about the code itself (practical programming), but why use the natural exponential instead of:…
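The usual reasoning is that the encoder's raw output `sd` is interpreted as $\log \sigma$, and `exp` maps any real number to a strictly positive value, so the standard deviation is always valid no matter what the unconstrained network outputs. A numpy sketch of the reparameterization step:

```python
# Sketch: interpreting the raw encoder output as log(sigma) and applying
# exp() guarantees sigma > 0 for any real-valued network output, which a
# raw linear output (possibly negative) would not.
import numpy as np

rng = np.random.default_rng(0)

mn = np.array([0.0, 1.0])          # predicted mean
sd = np.array([-2.0, 0.5])         # raw encoder output; any sign allowed
epsilon = rng.standard_normal(2)   # reparameterization noise ~ N(0, 1)

sigma = np.exp(sd)                 # always > 0, even for sd = -2.0
z = mn + epsilon * sigma           # mirrors z = mn + tf.multiply(epsilon, tf.exp(sd))

assert np.all(sigma > 0)
```

Parameterizing in log-space also makes the KL-divergence term of the VAE loss numerically convenient, since it involves $\log \sigma^2$ directly.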