What evolutionary algorithms other than NEAT evolve both the topology and weights of ANNs (TWEANN)?

I wonder if there are approaches other than NEAT to evolving the architectures and weights of artificial neural networks. To be more specific: I am looking for projects/frameworks/libraries that use evolutionary/genetic algorithms to simultaneously evolve the topology and train the weights of ANNs, other than the NEAT approach. By ‘other’ I mean similar to NEAT but not based…
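To make the question more concrete, here is a rough, hypothetical sketch of one mutation step in a direct-encoding TWEANN (the genome layout, function name, and probabilities are invented for illustration and do not correspond to NEAT or to any particular library); the point is that weight perturbation and structural changes happen in the same step:

```python
import random

def mutate(genome, next_node_id):
    # genome: list of [src, dst, weight] connections (direct encoding).
    # Weight and topology mutations happen in the same step, which is what
    # "simultaneously evolving topology and weights" means here.
    genome = [list(c) for c in genome]

    # 1. Perturb existing weights.
    for conn in genome:
        if random.random() < 0.8:
            conn[2] += random.gauss(0.0, 0.1)

    # 2. Occasionally add a new connection between two existing nodes.
    nodes = sorted({c[0] for c in genome} | {c[1] for c in genome})
    if len(nodes) >= 2 and random.random() < 0.1:
        src, dst = random.sample(nodes, 2)
        genome.append([src, dst, random.uniform(-1.0, 1.0)])

    # 3. Occasionally split a connection by inserting a new hidden node.
    if genome and random.random() < 0.05:
        src, dst, w = random.choice(genome)
        genome += [[src, next_node_id, 1.0], [next_node_id, dst, w]]
        next_node_id += 1

    return genome, next_node_id

# hypothetical usage: nodes 0 and 1 are inputs, node 2 is the output
genome = [[0, 2, 0.5], [1, 2, -0.3]]
genome, next_id = mutate(genome, next_node_id=3)
```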

How can I implement the derivative of the Softmax function for matrices in Python?

I have trouble understanding how to implement the derivative of the Softmax function. Here is what I tried:

    import numpy as np

    def Softmax(x):
        e_x = np.exp(x - np.max(x))
        return e_x / e_x.sum()

    def d_Softmax(X):
        x = Softmax(X)
        s = x.reshape(-1, 1)
        return np.diagflat(s) - np.dot(s, s.T)

I am not sure if it works as it should. The normal softmax function, when taking an [m x n] matrix…
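For the matrix case, one common convention is to treat each row of the input as an independent sample and return a per-row Jacobian. Here is a minimal sketch under that assumption (the names softmax and softmax_jacobian are mine, not part of the question's code):

```python
import numpy as np

def softmax(X, axis=-1):
    # Shift for numerical stability, then normalise along the chosen axis.
    e_x = np.exp(X - np.max(X, axis=axis, keepdims=True))
    return e_x / e_x.sum(axis=axis, keepdims=True)

def softmax_jacobian(X):
    # For X of shape (m, n), return an array of shape (m, n, n) where
    # J[i] = diag(s_i) - s_i s_i^T and s_i is the softmax of row i.
    S = softmax(X, axis=1)                                  # (m, n)
    eye = np.eye(X.shape[1])                                # (n, n)
    return S[:, :, None] * eye - S[:, :, None] * S[:, None, :]

# For a single row this reduces to the usual diag(s) - s s^T formula.
X = np.array([[1.0, 2.0, 3.0],
              [0.1, 0.2, 0.3]])
J = softmax_jacobian(X)   # shape (2, 3, 3)
```

In practice the full Jacobian is rarely materialised: when softmax is combined with a cross-entropy loss, the gradient with respect to the logits simplifies to the predicted probabilities minus the one-hot targets.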

The role of probability in supervised learning

In the literature and textbooks, one often sees supervised learning expressed as a conditional probability, e.g., $\rho(\vec{y}|\vec{x},\vec{\theta})$, where $\vec{\theta}$ denotes a learned set of network parameters, $\vec{x}$ is an arbitrary input, and $\vec{y}$ is an arbitrary output. If we assume we have already learned $\vec{\theta}$, then, in words, $\rho(\vec{y}|\vec{x},\vec{\theta})$ is…
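As one concrete illustration (mine, not from the question): for a $K$-class classifier whose network output $f_{\vec{\theta}}(\vec{x})$ is passed through a softmax, that conditional probability is simply the predicted categorical distribution, and training with cross-entropy amounts to maximizing the log-likelihood of the observed labels:

$$\rho(\vec{y}=k\,|\,\vec{x},\vec{\theta}) = \frac{\exp f_{\vec{\theta}}(\vec{x})_k}{\sum_{j=1}^{K} \exp f_{\vec{\theta}}(\vec{x})_j}, \qquad \hat{\vec{\theta}} = \arg\max_{\vec{\theta}} \sum_i \log \rho(\vec{y}_i\,|\,\vec{x}_i,\vec{\theta})$$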

Is the temperature equal to epsilon in Reinforcement Learning?

This is a piece of code from my homework.

    # action policy: implements epsilon greedy and softmax
    def select_action(self, state, epsilon):
        qval = self.qtable[state]
        prob = []
        if (self.softmax):
            # use Softmax distribution
            prob = sp.softmax(qval / epsilon)
            #print(prob)
        else:
            # assign equal value to all actions
            prob = np.ones(self.actions) * epsilon / (self.actions -1)…
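For what it is worth, in the softmax branch the divisor acts as a temperature: it controls how sharply the policy prefers high Q-values, whereas epsilon in epsilon-greedy is the probability of taking a random action. A small standalone sketch (my own helper functions, not the homework's) contrasting the two:

```python
import numpy as np

def softmax_policy(qvals, temperature):
    # Boltzmann/softmax policy: low temperature -> nearly greedy,
    # high temperature -> nearly uniform.
    z = (qvals - np.max(qvals)) / temperature
    e = np.exp(z)
    return e / e.sum()

def epsilon_greedy_policy(qvals, epsilon):
    # Spread probability epsilon uniformly, put the rest on the best action.
    n = len(qvals)
    probs = np.full(n, epsilon / n)
    probs[np.argmax(qvals)] += 1.0 - epsilon
    return probs

qvals = np.array([1.0, 2.0, 0.5])
print(softmax_policy(qvals, temperature=0.1))   # close to greedy
print(softmax_policy(qvals, temperature=10.0))  # close to uniform
print(epsilon_greedy_policy(qvals, epsilon=0.1))
```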