Neuroevolution, or neuro-evolution, is a form of artificial intelligence that uses evolutionary algorithms to generate artificial neural networks (ANNs), including their parameters, topology and rules.[1] It is most commonly applied in artificial life, general game playing[2] and evolutionary robotics. The main benefit is that neuroevolution can be applied more widely than supervised learning algorithms, which require a syllabus of correct input-output pairs. In contrast, neuroevolution requires only a measure of a network's performance at a task. For example, the outcome of a game (i.e. whether one player won or lost) can be easily measured without providing labeled examples of desired strategies. Neuroevolution is commonly used as part of the reinforcement learning paradigm, and it can be contrasted with conventional deep learning techniques that use gradient descent on a neural network with a fixed topology.
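The search loop can be sketched with a toy evolutionary strategy: a single-neuron network is evolved to compute logical AND using only a task-level fitness score (number of correct outputs), with no labeled gradient signal. All parameter values here are illustrative, not from any particular neuroevolution system.

```python
import random

def predict(w, x):
    """One-neuron network with a hard threshold activation."""
    return 1 if w[0] * x[0] + w[1] * x[1] + w[2] > 0 else 0

def fitness(w, cases):
    """Task-level score only (number of correct outputs): the only
    feedback neuroevolution needs; no labeled gradients."""
    return sum(predict(w, x) == y for x, y in cases)

cases = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND
rng = random.Random(0)
best = [0.0, 0.0, 0.0]
for _ in range(100):                               # generations
    pop = [[g + rng.gauss(0, 1.0) for g in best] for _ in range(30)]
    cand = max(pop, key=lambda w: fitness(w, cases))
    if fitness(cand, cases) > fitness(best, cases):
        best = cand                                # keep strict improvements

assert fitness(best, cases) == 4                   # evolved a correct network
```

Here only the scalar fitness drives the search; the same loop applies unchanged to larger networks or to evolving topology as well as weights.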
Neural correlates of consciousness
The neural correlates of consciousness (NCC) constitute the minimal set of neuronal events and mechanisms sufficient for a specific conscious percept. Neuroscientists use empirical approaches to discover neural correlates of subjective phenomena; that is, neural changes which necessarily and regularly correlate with a specific experience. The set should be minimal because, under the assumption that the brain is sufficient to give rise to any given conscious experience, the question is which of its components is necessary to produce it.
Models of neural computation are attempts to elucidate, in an abstract and mathematical fashion, the core principles that underlie information processing in biological nervous systems, or functional components thereof. This article aims to provide an overview of the most definitive models of neuro-biological computation as well as the tools commonly used to construct and analyze them.
Hebbian theory is a neuroscientific theory claiming that an increase in synaptic efficacy arises from a presynaptic cell's repeated and persistent stimulation of a postsynaptic cell. It is an attempt to explain synaptic plasticity, the adaptation of brain neurons during the learning process. It was introduced by Donald Hebb in his 1949 book The Organization of Behavior.[1] The theory is also called Hebb's rule, Hebb's postulate, and cell assembly theory. Hebb states it as follows: "When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased."
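In its simplest mathematical form, the rule strengthens a weight in proportion to the product of pre- and postsynaptic activity. A minimal sketch (the learning rate and activity values are illustrative):

```python
def hebbian_update(w, pre, post, eta=0.1):
    """Plain Hebbian rule: each weight grows in proportion to the
    product of its presynaptic input and the postsynaptic activity
    ("cells that fire together, wire together")."""
    return [wi + eta * x * post for wi, x in zip(w, pre)]

# Two inputs; only the first is repeatedly co-active with the output.
w = [0.0, 0.0]
for _ in range(10):
    pre = [1.0, 0.0]                                    # first input fires
    post = sum(wi * x for wi, x in zip(w, pre)) + 1.0   # driven above zero
    w = hebbian_update(w, pre, post)

# The active pathway is strengthened; the silent one is unchanged.
assert w[0] > 1.0 and w[1] == 0.0
```

The unbounded growth visible here is a known property of the plain rule; stabilised variants (e.g. Oja's rule) add a normalising term.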
Connectionism is an approach in the fields of cognitive science that hopes to explain mental phenomena using artificial neural networks (ANN). Connectionism presents a cognitive theory based on simultaneously occurring, distributed signal activity via connections that can be represented numerically, where learning occurs by modifying connection strengths based on experience.
In mathematics and computer algebra, automatic differentiation (AD), also called algorithmic differentiation, computational differentiation, auto-differentiation, or simply autodiff, is a set of techniques to evaluate the derivative of a function specified by a computer program. AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations and elementary functions. By applying the chain rule repeatedly to these operations, derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program.
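A minimal forward-mode sketch using dual numbers illustrates the idea: each value carries its derivative along with it, and every elementary operation applies the chain rule. Only addition, multiplication, and sine are implemented here; a full system covers all elementary operations the same way.

```python
import math

class Dual:
    """Forward-mode AD value: carries (value, derivative) through
    each elementary operation via the chain rule."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.dot * o.val + self.val * o.dot)  # product rule
    __rmul__ = __mul__

def sin(x):
    """Elementary function with its known derivative rule."""
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

def f(x):
    return x * x + sin(x)       # f(x) = x^2 + sin(x)

x = Dual(1.0, 1.0)              # seed the derivative dx/dx = 1
y = f(x)
assert abs(y.dot - (2.0 + math.cos(1.0))) < 1e-12   # f'(1) = 2*1 + cos(1)
```

The derivative is exact to working precision, unlike finite differences, and the cost is a small constant factor over evaluating f itself.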
In information theory, the entropy of a random variable is the average level of information, surprise, or uncertainty inherent in the variable's possible outcomes. The concept of information entropy was introduced by Claude Shannon in his 1948 paper A Mathematical Theory of Communication, and is sometimes called Shannon entropy in his honour. As an example, consider a biased coin with probability p of landing on heads and probability 1 − p of landing on tails. The maximum surprise is for p = 1/2, when there is no reason to expect one outcome over another, and in this case a coin flip has an entropy of one bit. The minimum surprise is when p = 0 or p = 1, when the event is known and the entropy is zero bits. Other values of p give different entropies between zero and one bits.
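The biased-coin example can be computed directly from Shannon's formula H(p) = -p log2(p) - (1 - p) log2(1 - p):

```python
import math

def binary_entropy(p):
    """Shannon entropy, in bits, of a coin with P(heads) = p."""
    if p in (0.0, 1.0):
        return 0.0               # outcome certain: zero surprise
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

assert binary_entropy(0.5) == 1.0        # fair coin: exactly one bit
assert binary_entropy(0.0) == 0.0        # certain outcome: zero bits
assert 0 < binary_entropy(0.9) < 1       # biased coin: in between
```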
Memory transfer was a biological process proposed by James V. McConnell and others in the 1960s. Memory transfer proposes a chemical basis for memory, termed memory RNA, which could be passed between organisms through flesh rather than through an intact nervous system. Since RNA encodes information[1] and living cells produce and modify RNA in reaction to external events, it might also be used in neurons to record stimuli.[2][3][4] This explained the results of McConnell's experiments in which planarians retained memory of acquired information after regeneration. Memory transfer through memory RNA is not currently a well-accepted explanation, and McConnell's experiments proved to be largely irreproducible.[5]
Statistical learning theory is a framework for machine learning drawing from the fields of statistics and functional analysis. Statistical learning theory deals with the problem of finding a predictive function based on data. Statistical learning theory has led to successful applications in fields such as computer vision, speech recognition, and bioinformatics.
Unipolar brush cells (UBCs) are a class of excitatory glutamatergic interneuron found in the granular layer of the cerebellar cortex and also in the granule cell domain of the cochlear nucleus.
Synaptic plasticity refers to a chemical synapse's ability to undergo changes in strength. Synaptic plasticity is typically input-specific, meaning that the activity in a particular neuron alters the efficacy of a synaptic connection between that neuron and its target. However, in the case of heterosynaptic plasticity, the activity of a particular neuron leads to input unspecific changes in the strength of synaptic connections from other unactivated neurons. A number of distinct forms of heterosynaptic plasticity have been found in a variety of brain regions and organisms. These different forms of heterosynaptic plasticity contribute to a variety of neural processes including associative learning, the development of neural circuits, and homeostasis of synaptic input.
Neuronal noise or neural noise refers to the random intrinsic electrical fluctuations within neuronal networks. These fluctuations are not associated with encoding a response to internal or external stimuli, and their amplitude can span one to two orders of magnitude. Most noise commonly occurs below the voltage threshold needed for an action potential to occur, but sometimes it can be present in the form of an action potential; for example, stochastic oscillations in pacemaker neurons in the suprachiasmatic nucleus are partially responsible for the organization of circadian rhythms.
Self-organization, also called spontaneous order, is a process where some form of overall order arises from local interactions between parts of an initially disordered system. The process can be spontaneous when sufficient energy is available, not needing control by any external agent. It is often triggered by seemingly random fluctuations, amplified by positive feedback. The resulting organization is wholly decentralized, distributed over all the components of the system. As such, the organization is typically robust and able to survive or self-repair substantial perturbation. Chaos theory discusses self-organization in terms of islands of predictability in a sea of chaotic unpredictability.
Agent architecture in computer science is a blueprint for software agents and intelligent control systems, depicting the arrangement of components. The architectures implemented by intelligent agents are referred to as cognitive architectures.[1] The term agent is a conceptual idea, not defined precisely. An architecture typically consists of facts, a set of goals, and sometimes a plan library.[2]
Microbial intelligence (also known as bacterial intelligence) is the intelligence shown by microorganisms. The concept encompasses complex adaptive behavior shown by single cells, and altruistic or cooperative behavior in populations of like or unlike cells mediated by chemical signalling that induces physiological or behavioral changes in cells and influences colony structures.[1]
Swarm intelligence (SI) is the collective behavior of decentralized, self-organized systems, natural or artificial.[1] The concept is employed in work on artificial intelligence. The expression was introduced by Gerardo Beni and Jing Wang in 1989, in the context of cellular robotic systems.[2]
An emergent algorithm is an algorithm that exhibits emergent behavior. In essence an emergent algorithm implements a set of simple building block behaviors that when combined exhibit more complex behaviors. One example of this is the implementation of fuzzy motion controllers used to adapt robot movement in response to environmental obstacles.[1]
Universal approximation theorem
In the mathematical theory of artificial neural networks, universal approximation theorems are results[1] that establish the density of an algorithmically generated class of functions within a given function space of interest. Typically, these results concern the approximation capabilities of the feedforward architecture on the space of continuous functions between two Euclidean spaces, and the approximation is with respect to the compact convergence topology. However, there are also a variety of results between non-Euclidean spaces[2] and other commonly used architectures and, more generally, algorithmically generated sets of functions, such as the convolutional neural network (CNN) architecture,[3][4] radial basis functions,[5] or neural networks with specific properties.[6] Most universal approximation theorems can be parsed into two classes. The first quantifies the approximation capabilities of neural networks with an arbitrary number of artificial neurons (arbitrary width case) and the second focuses on the case with an arbitrary number of hidden layers, each containing a limited number of artificial neurons (arbitrary depth case).
Baddeley's model of working memory
Baddeley's model of working memory is a model of human memory proposed by Alan Baddeley and Graham Hitch in 1974, in an attempt to present a more accurate model of primary memory (often referred to as short-term memory). Working memory splits primary memory into multiple components, rather than considering it to be a single, unified construct.[1]
The free energy principle is a formal statement that explains how living and non-living systems remain in non-equilibrium steady-states by restricting themselves to a limited number of states. It establishes that systems minimise a free energy function of their internal states (not to be confused with thermodynamic free energy), which entail beliefs about hidden states in their environment. The implicit minimisation of free energy is formally related to variational Bayesian methods and was originally introduced by Karl Friston as an explanation for embodied perception in neuroscience,[1] where it is also known as active inference.
Knowledge representation and reasoning
Knowledge representation and reasoning (KR², KR&R) is the field of artificial intelligence (AI) dedicated to representing information about the world in a form that a computer system can utilize to solve complex tasks such as diagnosing a medical condition or having a dialog in a natural language. Knowledge representation incorporates findings from psychology[1] about how humans solve problems and represent knowledge in order to design formalisms that will make complex systems easier to design and build. Knowledge representation and reasoning also incorporates findings from logic to automate various kinds of reasoning, such as the application of rules or the relations of sets and subsets.
A neural circuit is a population of neurons interconnected by synapses to carry out a specific function when activated. Neural circuits interconnect to one another to form large scale brain networks. Biological neural networks have inspired the design of artificial neural networks, but artificial neural networks are usually not strict copies of their biological counterparts.
In computer science and mathematical optimization, a metaheuristic is a higher-level procedure or heuristic designed to find, generate, or select a heuristic (partial search algorithm) that may provide a sufficiently good solution to an optimization problem, especially with incomplete or imperfect information or limited computation capacity.[1][2] Metaheuristics sample a subset of solutions which is otherwise too large to be completely enumerated or otherwise explored. Metaheuristics may make relatively few assumptions about the optimization problem being solved and so may be usable for a variety of problems.[3]
Warren Sturgis McCulloch (November 16, 1898 – September 24, 1969) was an American neurophysiologist and cybernetician, known for his work on the foundation for certain brain theories and his contribution to the cybernetics movement.[1] Along with Walter Pitts, McCulloch created computational models based on mathematical algorithms called threshold logic which split the inquiry into two distinct approaches, one approach focused on biological processes in the brain and the other focused on the application of neural networks to artificial intelligence.[2]
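The McCulloch-Pitts threshold logic unit can be sketched in a few lines: a neuron fires when the weighted sum of its binary inputs reaches a threshold, which suffices to implement Boolean gates such as AND and OR. The weights and thresholds below are one possible choice, not the only one.

```python
def tlu(inputs, weights, threshold):
    """McCulloch-Pitts threshold logic unit: fire (1) if and only if
    the weighted sum of binary inputs reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# AND requires both inputs to be active; OR fires when any one is.
AND = lambda a, b: tlu([a, b], [1, 1], 2)
OR  = lambda a, b: tlu([a, b], [1, 1], 1)

assert [AND(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 0, 0, 1]
assert [OR(a, b)  for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 1]
```

Networks of such units can compute any Boolean function, which is what made threshold logic a bridge between brain theory and artificial intelligence.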
Summation, which includes both spatial and temporal summation, is the process that determines whether or not an action potential will be generated by the combined effects of excitatory and inhibitory signals, both from multiple simultaneous inputs, and from repeated inputs. Depending on the sum total of many individual inputs, summation may or may not reach the threshold voltage to trigger an action potential.
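A schematic sketch of spatial summation follows; the resting potential, threshold, and postsynaptic-potential amplitudes are illustrative values, not measurements.

```python
def summates(psps, rest=-70.0, threshold=-55.0):
    """Spatial summation sketch: simultaneous EPSPs (positive mV) and
    IPSPs (negative mV) add at the axon hillock; an action potential
    fires only if the combined depolarisation crosses threshold."""
    v = rest + sum(psps)
    return v >= threshold

assert not summates([5.0])                    # one EPSP: subthreshold
assert summates([5.0, 6.0, 7.0])              # several EPSPs sum to a spike
assert not summates([5.0, 6.0, 7.0, -10.0])   # an IPSP cancels the effect
```

Temporal summation works the same way, except the inputs being added are repeated arrivals at one synapse before the earlier potentials have decayed.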
In computer science, a software agent is a computer program that acts for a user or other program in a relationship of agency, which derives from the Latin agere (to do): an agreement to act on one's behalf. Such "action on behalf of" implies the authority to decide which, if any, action is appropriate.[1][2] Agents are colloquially known as bots, from robot. They may be embodied, as when execution is paired with a robot body, or as software such as a chatbot executing on a phone (e.g. Siri) or other computing device. Software agents may be autonomous or work together with other agents or people. Software agents interacting with people (e.g. chatbots, human-robot interaction environments) may possess human-like qualities such as natural language understanding and speech, personality, or embody humanoid form (see Asimo).
Mind uploading, also known as whole brain emulation (WBE), is the hypothetical futuristic process of scanning a physical structure of the brain accurately enough to create an emulation of the mental state and copying it to a computer in a digital form. The computer would then run a simulation of the brain's information processing, such that it would respond in essentially the same way as the original brain and experience having a sentient conscious mind.
The Tetris effect occurs when people devote so much time and attention to an activity that it begins to pattern their thoughts, mental images, and dreams. It takes its name from the video game Tetris.
In neuroscience, synaptic scaling (or homeostatic scaling) is a form of homeostatic plasticity, in which the brain responds to chronically elevated activity in a neural circuit with negative feedback, allowing individual neurons to reduce their overall action potential firing rate.[1] Where Hebbian plasticity mechanisms modify neural synaptic connections selectively, synaptic scaling normalizes all neural synaptic connections[2] by decreasing the strength of each synapse by the same factor (multiplicative change), so that the relative synaptic weighting of each synapse is preserved.[1]
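The multiplicative character of scaling is easy to illustrate: every weight is multiplied by the same factor, so absolute strengths change while the ratios between synapses are preserved. The firing-rate values below are purely illustrative.

```python
def scale_synapses(weights, target_rate, current_rate):
    """Multiplicative homeostatic scaling: when circuit activity is
    chronically too high, shrink every synapse by the same factor,
    preserving the relative weighting between inputs."""
    factor = target_rate / current_rate
    return [w * factor for w in weights]

w = [4.0, 2.0, 1.0]
scaled = scale_synapses(w, target_rate=5.0, current_rate=10.0)

assert scaled == [2.0, 1.0, 0.5]              # every synapse halved
assert scaled[0] / scaled[1] == w[0] / w[1]   # relative weights preserved
```

This is the key contrast with Hebbian plasticity: the information stored in relative synaptic weights survives the global renormalisation.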
Network science is an academic field which studies complex networks such as telecommunication networks, computer networks, biological networks, cognitive and semantic networks, and social networks, considering distinct elements or actors represented by nodes and the connections between the elements or actors as links. The field draws on theories and methods including graph theory from mathematics, statistical mechanics from physics, data mining and information visualization from computer science, inferential modeling from statistics, and social structure from sociology. The United States National Research Council defines network science as the study of network representations of physical, biological, and social phenomena leading to predictive models of these phenomena.
Systems neuroscience is a subdiscipline of neuroscience and systems biology that studies the structure and function of neural circuits and systems. Systems neuroscience encompasses a number of areas of study concerned with how nerve cells behave when connected together to form neural pathways, neural circuits, and larger brain networks. At this level of analysis, neuroscientists study how different neural circuits analyze sensory information, form perceptions of the external world, make decisions, and execute movements. Researchers in systems neuroscience are concerned with the relation between molecular and cellular approaches to understanding brain structure and function, as well as with the study of high-level mental functions such as language, memory, and self-awareness (which are the purview of behavioral and cognitive neuroscience). Systems neuroscientists typically employ techniques for understanding networks of neurons as they are seen to function, by way of electrophysiology using either single-unit recording or multi-electrode recording, functional magnetic resonance imaging (fMRI), and PET scans. The term is also commonly used in an educational framework.
The reward system is a group of neural structures responsible for incentive salience, associative learning, and positively-valenced emotions, particularly ones involving pleasure as a core component. Reward is the attractive and motivational property of a stimulus that induces appetitive behavior, also known as approach behavior, and consummatory behavior. A rewarding stimulus has been described as any stimulus, object, event, activity, or situation that has the potential to make us approach and consume it. In operant conditioning, rewarding stimuli function as positive reinforcers; however, the converse statement also holds true: positive reinforcers are rewarding.
Nervous tissue, also called neural tissue, is the main tissue component of the nervous system. The nervous system regulates and controls bodily functions and activity. It consists of two parts: the central nervous system (CNS) comprising the brain and spinal cord, and the peripheral nervous system (PNS) comprising the branching peripheral nerves. It is composed of neurons, also known as nerve cells, which receive and transmit impulses, and neuroglia, also known as glial cells or glia, which assist the propagation of the nerve impulse as well as provide nutrients to the neurons.
Classical cable theory uses mathematical models to calculate the electric current along passive neurites, particularly the dendrites that receive synaptic inputs at different sites and times. Estimates are made by modeling dendrites and axons as cylinders composed of segments with capacitances and resistances combined in parallel. The capacitance of a neuronal fiber comes about because electrostatic forces are acting through the very thin lipid bilayer. The resistance in series along the fiber is due to the axoplasm's significant resistance to movement of electric charge.
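On an infinite passive cylinder, the steady-state solution of the cable equation is an exponential decay V(x) = V0 * exp(-x / lambda), with the length constant lambda = sqrt((d / 4) * (R_m / R_i)). A sketch with illustrative parameter values (not measurements from any particular neuron):

```python
import math

def length_constant(R_m, R_i, d):
    """lambda = sqrt((d/4) * (R_m / R_i)): how far a passive voltage
    change spreads before decaying to 1/e of its value.
    R_m: specific membrane resistance (ohm * cm^2),
    R_i: axial resistivity (ohm * cm), d: fibre diameter (cm)."""
    return math.sqrt((d / 4.0) * (R_m / R_i))

def steady_state_v(v0, x, lam):
    """Steady-state cable-equation solution on an infinite cylinder:
    exponential attenuation of voltage with distance."""
    return v0 * math.exp(-x / lam)

lam = length_constant(R_m=20000.0, R_i=100.0, d=4e-4)   # illustrative values
v = steady_state_v(10.0, x=lam, lam=lam)
assert abs(v - 10.0 / math.e) < 1e-9   # 1/e decay at one length constant
```

The formula makes the physical trade-off explicit: a leakier membrane (smaller R_m) or a thinner, more resistive core shortens the distance a synaptic potential can spread passively.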
Metaplasticity is a term originally coined by W.C. Abraham and M.F. Bear to refer to the plasticity of synaptic plasticity.[1] Until that time synaptic plasticity had referred to the plastic nature of individual synapses. However this new form referred to the plasticity of the plasticity itself, hence the term metaplasticity. The idea is that the synapse's previous history of activity determines its current plasticity. This may play a role in some of the underlying mechanisms thought to be important in memory and learning, such as long-term potentiation (LTP) and long-term depression (LTD). These mechanisms depend on the current synaptic state, as set by ongoing extrinsic influences such as the level of synaptic inhibition, the activity of modulatory afferents such as catecholamines, and the pool of hormones affecting the synapses under study. Recently, it has become clear that the prior history of synaptic activity is an additional variable that influences the synaptic state, and thereby the degree, of LTP or LTD produced by a given experimental protocol. In a sense, then, synaptic plasticity is governed by an activity-dependent plasticity of the synaptic state itself.
Retrograde signaling in biology is the process where a signal travels backwards from a target source to its original source. For example, the nucleus of a cell is the original source for creating signaling proteins. During retrograde signaling, instead of signals leaving the nucleus, they are sent to the nucleus.[1] In cell biology, this type of signaling typically occurs between the mitochondria or chloroplast and the nucleus. Signaling molecules from the mitochondria or chloroplast act on the nucleus to affect nuclear gene expression. In this regard, the chloroplast or mitochondria act as a sensor for internal and external stimuli which activate a signaling pathway.[2]
The principal components of a collection of points in a real coordinate space are a sequence of unit vectors, where the i-th vector is the direction of a line that best fits the data while being orthogonal to the first i − 1 vectors. Here, a best-fitting line is defined as one that minimizes the average squared distance from the points to the line. These directions constitute an orthonormal basis in which different individual dimensions of the data are linearly uncorrelated. Principal component analysis (PCA) is the process of computing the principal components and using them to perform a change of basis on the data, sometimes using only the first few principal components and ignoring the rest.
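For two-dimensional data the first principal component can be computed in closed form as the leading eigenvector of the 2×2 covariance matrix. A self-contained sketch (the sample points are illustrative):

```python
import math

def first_component_2d(points):
    """First principal component of 2-D data: the unit vector along
    which projected variance is maximal, i.e. the leading eigenvector
    of the covariance matrix, in closed form for the 2x2 case."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    # Leading eigenvalue of [[sxx, sxy], [sxy, syy]].
    lam = (sxx + syy) / 2 + math.sqrt(((sxx - syy) / 2) ** 2 + sxy ** 2)
    vx, vy = lam - syy, sxy          # corresponding (unnormalised) eigenvector
    norm = math.hypot(vx, vy)
    return (vx / norm, vy / norm)

# Points scattered near the line y = x: the first component should
# point roughly along (1, 1) / sqrt(2).
pts = [(1, 1.1), (2, 1.9), (3, 3.05), (4, 4.0)]
v = first_component_2d(pts)
assert abs(abs(v[0]) - abs(v[1])) < 0.1
```

In higher dimensions the same computation is done numerically, typically via the singular value decomposition of the centred data matrix.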
Error-driven learning is a sub-area of machine learning concerned with how an agent ought to take actions in an environment so as to minimize some error feedback. It is a type of reinforcement learning.
Leabra stands for local, error-driven and associative, biologically realistic algorithm. It is a model of learning which strikes a balance between Hebbian and error-driven learning, combined with other network-derived characteristics. This model is used to mathematically predict outcomes based on inputs and previous learning influences. It is heavily influenced by and contributes to neural network designs and models. Leabra is the default algorithm in emergent (the successor of PDP++) when making a new project, and is extensively used in various simulations.
Bidirectional associative memory
Bidirectional associative memory (BAM) is a type of recurrent neural network. BAM was introduced by Bart Kosko in 1988.[1] There are two types of associative memory, auto-associative and hetero-associative. BAM is hetero-associative, meaning given a pattern it can return another pattern which is potentially of a different size. It is similar to the Hopfield network in that they are both forms of associative memory. However, Hopfield nets return patterns of the same size.
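A minimal sketch of the usual construction: the weight matrix is a sum of outer products of bipolar (+1/-1) pattern pairs, and recall thresholds a matrix-vector product. The patterns below are illustrative.

```python
def sign(v):
    return 1 if v >= 0 else -1

def bam_train(pairs):
    """Hetero-associative BAM: the weight matrix is the sum of outer
    products of the bipolar pattern pairs."""
    n, m = len(pairs[0][0]), len(pairs[0][1])
    return [[sum(x[i] * y[j] for x, y in pairs) for j in range(m)]
            for i in range(n)]

def bam_recall(W, x):
    """Forward pass: recall the associated pattern, which may have a
    different size than the input."""
    m = len(W[0])
    return [sign(sum(x[i] * W[i][j] for i in range(len(x))))
            for j in range(m)]

# Associate a 4-bit pattern with a 3-bit pattern (different sizes).
pairs = [([1, 1, -1, -1], [1, -1, 1]),
         ([-1, -1, 1, 1], [-1, 1, -1])]
W = bam_train(pairs)

assert bam_recall(W, [1, 1, -1, -1]) == [1, -1, 1]
assert bam_recall(W, [-1, -1, 1, 1]) == [-1, 1, -1]
```

The differing input and output sizes are exactly what distinguishes this hetero-associative recall from a Hopfield network, which returns patterns of the same size.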
Autoassociative memory, also known as auto-association memory or an autoassociation network, is any type of memory that is able to retrieve a piece of data from only a tiny sample of itself. Such networks are very effective at de-noising or removing interference from the input and can be used to determine whether a given input is "known" or "unknown".
A cognitive model is an approximation to animal cognitive processes (predominantly human) for the purposes of comprehension and prediction. There are many types of cognitive models, and they can range from box-and-arrow diagrams to a set of equations to software programs that interact with the same tools that humans use to complete tasks (e.g., computer mouse and keyboard).[1][page needed]
A Hopfield network (or Ising model of a neural network, or Ising–Lenz–Little model) is a form of recurrent artificial neural network and a type of spin glass system popularised by John Hopfield in 1982,[1] described earlier by Little in 1974,[2] and based on Ernst Ising's work with Wilhelm Lenz on the Ising model.[3] Hopfield networks serve as content-addressable (associative) memory systems with binary threshold nodes, or with continuous variables.[4] Hopfield networks also provide a model for understanding human memory.[5][6]
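The storage and recall mechanics can be sketched directly: Hebbian weights accumulate pairwise correlations of the stored patterns, and repeated threshold updates pull a corrupted state back to the nearest stored pattern. The pattern and noise below are illustrative.

```python
def sign(v):
    return 1 if v >= 0 else -1

def hopfield_train(patterns):
    """Hebbian storage: W[i][j] accumulates the correlation of bits
    i and j across all stored bipolar patterns (no self-connections)."""
    n = len(patterns[0])
    return [[0 if i == j else sum(p[i] * p[j] for p in patterns)
             for j in range(n)] for i in range(n)]

def hopfield_recall(W, state, steps=5):
    """Synchronous threshold updates until the state settles."""
    n = len(state)
    for _ in range(steps):
        state = [sign(sum(W[i][j] * state[j] for j in range(n)))
                 for i in range(n)]
    return state

stored = [1, 1, 1, -1, -1, -1]
W = hopfield_train([stored])

noisy = [1, -1, 1, -1, -1, -1]               # one bit flipped
assert hopfield_recall(W, noisy) == stored   # content-addressable recall
```

This is the content-addressable property in miniature: the partial, corrupted cue retrieves the complete stored pattern.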
In artificial intelligence, an intelligent agent (IA) is anything that perceives its environment, takes actions autonomously in order to achieve goals, and may improve its performance through learning or by using knowledge. Intelligent agents may be simple or complex: a thermostat is considered an example of an intelligent agent, as is a human being, as is any system that meets the definition, such as a firm, a state, or a biome.
Synaptic tagging, or the synaptic tagging hypothesis, was first proposed in 1997 by Uwe Frey and Richard G. Morris; it seeks to explain how neural signaling at a particular synapse creates a target for subsequent plasticity-related product (PRP) trafficking essential for sustained LTP and LTD. Although the molecular identity of the tags remains unknown, it has been established that they form as a result of high or low frequency stimulation, interact with incoming PRPs, and have a limited lifespan.[1]
A cortical column, also called hypercolumn, macrocolumn,[1] functional column[2] or sometimes cortical module,[3] is a group of neurons in the cortex of the brain that can be successively penetrated by a probe inserted perpendicularly to the cortical surface, and which have nearly identical receptive fields.[citation needed] Neurons within a minicolumn (microcolumn) encode similar features, whereas a hypercolumn denotes a unit containing a full set of values for any given set of receptive field parameters.[4] A cortical module is defined as either synonymous with a hypercolumn (Mountcastle) or as a tissue block of multiple overlapping hypercolumns.[5]
Belief propagation, also known as sum-product message passing, is a message-passing algorithm for performing inference on graphical models, such as Bayesian networks and Markov random fields. It calculates the marginal distribution for each unobserved node (or variable), conditional on any observed nodes (or variables). Belief propagation is commonly used in artificial intelligence and information theory and has demonstrated empirical success in numerous applications including low-density parity-check codes, turbo codes, free energy approximation, and satisfiability.[1]
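On a tree-structured model, belief propagation computes exact marginals. A minimal sketch on a three-variable chain, checked against brute-force enumeration (the potential values are illustrative):

```python
from itertools import product

# A chain x1 - x2 - x3 of binary variables with one pairwise
# potential per edge.
psi12 = {(a, b): [[2.0, 1.0], [1.0, 1.0]][a][b]
         for a in (0, 1) for b in (0, 1)}
psi23 = {(b, c): [[1.0, 3.0], [3.0, 1.0]][b][c]
         for b in (0, 1) for c in (0, 1)}

# Sum-product messages toward x2 from both ends of the chain:
# each message sums the neighbour's variable out of the edge potential.
m1_to_2 = [sum(psi12[(a, b)] for a in (0, 1)) for b in (0, 1)]
m3_to_2 = [sum(psi23[(b, c)] for c in (0, 1)) for b in (0, 1)]

belief = [m1_to_2[b] * m3_to_2[b] for b in (0, 1)]
z = sum(belief)
marginal_bp = [v / z for v in belief]

# Brute force: enumerate every joint state and sum out x1 and x3.
joint = {s: psi12[(s[0], s[1])] * psi23[(s[1], s[2])]
         for s in product((0, 1), repeat=3)}
zz = sum(joint.values())
marginal_exact = [sum(p for s, p in joint.items() if s[1] == b) / zz
                  for b in (0, 1)]

assert all(abs(a - b) < 1e-12
           for a, b in zip(marginal_bp, marginal_exact))
```

On graphs with cycles the same message updates are iterated ("loopy" belief propagation); the result is then only approximate, but it is the approximation behind the decoding successes cited above.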
In mathematics, a series is, roughly speaking, a description of the operation of adding infinitely many quantities, one after the other, to a given starting quantity.[1] The study of series is a major part of calculus and its generalization, mathematical analysis. Series are used in most areas of mathematics, even for studying finite structures (such as in combinatorics) through generating functions. In addition to their ubiquity in mathematics, infinite series are also widely used in other quantitative disciplines such as physics, computer science, statistics and finance.
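The idea can be made concrete through partial sums: a series converges when its partial sums approach a limit, as with the geometric series 1/2 + 1/4 + 1/8 + ... = 1.

```python
def partial_sums(terms):
    """Partial sums of a series: s_n = a_1 + ... + a_n. The series
    converges when these sums approach a limit."""
    total, out = 0.0, []
    for a in terms:
        total += a
        out.append(total)
    return out

# Geometric series with ratio 1/2: partial sums approach 1.
s = partial_sums([0.5 ** k for k in range(1, 40)])
assert s[0] == 0.5 and s[1] == 0.75
assert abs(s[-1] - 1.0) < 1e-9
```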
A Capsule Neural Network (CapsNet) is a machine learning system that is a type of artificial neural network (ANN) that can be used to better model hierarchical relationships. The approach is an attempt to more closely mimic biological neural organization.[1]
Extinction is a behavioral phenomenon observed in both operantly conditioned and classically conditioned behavior, which manifests itself by the fading of a non-reinforced conditioned response over time. When operant behavior that has been previously reinforced no longer produces reinforcing consequences, the behavior gradually stops occurring.[1] In classical conditioning, when a conditioned stimulus is presented alone, so that it no longer predicts the coming of the unconditioned stimulus, conditioned responding gradually stops. For example, after Pavlov's dog was conditioned to salivate at the sound of a metronome, it eventually stopped salivating to the metronome after the metronome had been sounded repeatedly but no food came. Many anxiety disorders such as post-traumatic stress disorder are believed to reflect, at least in part, a failure to extinguish conditioned fear.[2]
Habituation is a form of non-associative learning in which an innate (non-reinforced) response to a stimulus decreases after repeated or prolonged presentations of that stimulus.[1] Responses that habituate include those that involve the intact organism (e.g., full-body startle response) or those that involve only components of the organism (e.g., habituation of neurotransmitter release from in vitro Aplysia sensory neurons). The broad ubiquity of habituation across all biologic phyla has resulted in it being called "the simplest, most universal form of learning...as fundamental a characteristic of life as DNA".[2] Functionally speaking, by diminishing the response to an inconsequential stimulus, habituation is thought to free up cognitive resources for other stimuli that are associated with biologically important events (i.e., punishment/reward). For example, organisms may habituate to repeated sudden loud noises when they learn these have no consequences.[3] A progressive decline of a behavior in a habituation procedure may also reflect nonspecific effects such as fatigue, which must be ruled out when the interest is in habituation.[4]
A complex system is a system composed of many components which may interact with each other. Examples of complex systems are Earth's global climate, organisms, the human brain, infrastructure such as power grid, transportation or communication systems, social and economic organizations, an ecosystem, a living cell, and ultimately the entire universe.
Attention is the behavioral and cognitive process of selectively concentrating on a discrete aspect of information, whether considered subjective or objective, while ignoring other perceivable information. William James (1890) wrote that "Attention is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration, of consciousness are of its essence." Attention has also been described as the allocation of limited cognitive processing resources. Attention is manifested by an attentional bottleneck, in terms of the amount of data the brain can process each second; for example, in human vision, less than 1% of the visual input data can enter the bottleneck, leading to inattentional blindness.
Neural binding is the neuroscientific aspect of what is commonly known as the binding problem: the interdisciplinary difficulty of creating a comprehensive and verifiable model for the unity of consciousness. Binding refers to the integration of highly diverse neural information in the forming of one's cohesive experience. The neural binding hypothesis states that neural signals are paired through synchronized oscillations of neuronal activity that combine and recombine to allow for a wide variety of responses to context-dependent stimuli. These dynamic neural networks are thought to account for the flexibility and nuanced response of the brain to various situations. The coupling of these networks is transient, on the order of milliseconds, and allows for rapid activity.
The echo state network (ESN)[1][2] is a type of reservoir computer that uses a recurrent neural network with a sparsely connected hidden layer (with typically 1% connectivity). The connectivity and weights of hidden neurons are fixed and randomly assigned. The weights of output neurons can be learned so that the network can produce or reproduce specific temporal patterns. The main interest of this network is that although its behaviour is non-linear, the only weights modified during training are the synapses that connect the hidden neurons to the output neurons. Thus, the error function is quadratic with respect to the parameter vector, and training reduces to solving a linear system.
Sparse distributed memory (SDM) is a mathematical model of human long-term memory introduced by Pentti Kanerva in 1988 while he was at NASA Ames Research Center. It is a generalized random-access memory (RAM) for long (e.g., 1,000 bit) binary words. These words serve as both addresses to and data for the memory. The main attribute of the memory is sensitivity to similarity, meaning that a word can be read back not only by giving the original write address but also by giving one close to it, as measured by the number of mismatched bits (i.e., the Hamming distance between memory addresses).[1]
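A toy sketch of the read/write mechanism under simplifying assumptions (one stored word, made-up sizes and an access radius chosen by hand): a word is written into the counters of all "hard locations" within a Hamming radius of its address, and can be read back from a nearby, noisy address.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bits, n_locations, radius = 256, 1000, 120

# Hard locations: fixed random addresses; counters accumulate written data.
addresses = rng.integers(0, 2, (n_locations, n_bits))
counters = np.zeros((n_locations, n_bits), dtype=int)

def hamming(a, b):
    return np.count_nonzero(a != b, axis=-1)

def write(addr, data):
    near = hamming(addresses, addr) <= radius
    counters[near] += np.where(data == 1, 1, -1)

def read(addr):
    near = hamming(addresses, addr) <= radius
    return (counters[near].sum(axis=0) > 0).astype(int)

word = rng.integers(0, 2, n_bits)
write(word, word)  # autoassociative use: the word is both address and data

# Reading from a noisy address (20 flipped bits) still recovers the word.
noisy = word.copy()
flip = rng.choice(n_bits, 20, replace=False)
noisy[flip] ^= 1
recovered = read(noisy)
errors = int(np.count_nonzero(recovered != word))
```

The sensitivity to similarity comes from the overlap between the sets of hard locations activated by the original and the perturbed address.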
Pandemonium architecture arose in response to the inability of template matching theories to offer a biologically plausible explanation of the image constancy phenomena. Contemporary researchers praise this architecture for its elegance and creativity: the idea of having multiple independent systems work in parallel to address the image constancy phenomena of pattern recognition is powerful yet simple. The basic idea of the pandemonium architecture is that a pattern is first perceived in its parts before the whole.
Bidirectional recurrent neural networks
Bidirectional recurrent neural networks (BRNNs) connect two hidden layers of opposite directions to the same output. With this structure, the output layer can receive information from past (backward) and future (forward) states simultaneously. Invented in 1997 by Schuster and Paliwal,[1] BRNNs were introduced to increase the amount of input information available to the network. For example, multilayer perceptrons (MLPs) and time delay neural networks (TDNNs) have limited input flexibility, as they require their input data to be fixed. Standard recurrent neural networks (RNNs) are also restricted, since future input information cannot be reached from the current state. In contrast, BRNNs do not require their input data to be fixed, and future input information is reachable from the current state.[2]
Inhibitory postsynaptic potential
An inhibitory postsynaptic potential (IPSP) is a kind of synaptic potential that makes a postsynaptic neuron less likely to generate an action potential.[1] IPSPs were first investigated in motor neurons by David P. C. Lloyd, John Eccles and Rodolfo Llinás in the 1950s and 1960s.[2][3] The opposite of an inhibitory postsynaptic potential is an excitatory postsynaptic potential (EPSP), which is a synaptic potential that makes a postsynaptic neuron more likely to generate an action potential. IPSPs can take place at all chemical synapses, which use the secretion of neurotransmitters to create cell-to-cell signalling. Inhibitory presynaptic neurons release neurotransmitters that then bind to the postsynaptic receptors; this induces a change in the permeability of the postsynaptic neuronal membrane to particular ions. An electric current that changes the postsynaptic membrane potential to create a more negative postsynaptic potential is generated, i.e. the postsynaptic membrane potential becomes more negative than the resting membrane potential, and this is called hyperpolarisation. To generate an action potential, the postsynaptic membrane must instead depolarize, i.e. its membrane potential must rise above the resting membrane potential toward the firing threshold.
Neural oscillations, or brainwaves, are rhythmic or repetitive patterns of neural activity in the central nervous system. Neural tissue can generate oscillatory activity in many ways, driven either by mechanisms within individual neurons or by interactions between neurons. In individual neurons, oscillations can appear either as oscillations in membrane potential or as rhythmic patterns of action potentials, which then produce oscillatory activation of post-synaptic neurons. At the level of neural ensembles, synchronized activity of large numbers of neurons can give rise to macroscopic oscillations, which can be observed in an electroencephalogram. Oscillatory activity in groups of neurons generally arises from feedback connections between the neurons that result in the synchronization of their firing patterns. The interaction between neurons can give rise to oscillations at a different frequency than the firing frequency of individual neurons. A well-known example of macroscopic neural oscillations is alpha activity.
An engram is a unit of cognitive information imprinted in a physical substance, theorized to be the means by which memories are stored[1] as biophysical or biochemical[2] changes in the brain or other biological tissue, in response to external stimuli.
Recall in memory refers to the mental process of retrieval of information from the past. Along with encoding and storage, it is one of the three core processes of memory. There are three main types of recall: free recall, cued recall and serial recall. Psychologists test these forms of recall as a way to study the memory processes of humans[1] and animals.[2] Two main theories of the process of recall are the two-stage theory and the theory of encoding specificity.
Multiple trace theory is a memory consolidation model advanced as an alternative model to strength theory. It posits that each time some information is presented to a person, it is neurally encoded in a unique memory trace composed of a combination of its attributes.[1] Further support for this theory came in the 1960s from empirical findings that people could remember specific attributes about an object without remembering the object itself.[2] The mode in which the information is presented and subsequently encoded can be flexibly incorporated into the model. This memory trace is distinct from all others resembling it due to differences in some aspects of the item's attributes, and all memory traces incorporated since birth are combined into a multiple-trace representation in the brain.[3] In memory research, a mathematical formulation of this theory can successfully explain empirical phenomena observed in recognition and recall tasks.
Common coding theory is a cognitive psychology theory describing how perceptual representations (e.g. of things we can see and hear) and motor representations (e.g. of hand actions) are linked. The theory claims that there is a shared representation (a common code) for both perception and action. More importantly, seeing an event activates the action associated with that event, and performing an action activates the associated perceptual event.[1]
Independent component analysis
In signal processing, independent component analysis (ICA) is a computational method for separating a multivariate signal into additive subcomponents. This is done by assuming that the subcomponents are non-Gaussian signals and that they are statistically independent from each other. ICA is a special case of blind source separation. A common example application is the cocktail party problem of listening in on one person's speech in a noisy room.
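A compact sketch of one standard ICA approach (the FastICA fixed-point iteration with a tanh nonlinearity, implemented here from scratch on a made-up two-source "cocktail party" toy problem; the mixing matrix and signal choices are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two non-Gaussian sources: a sine and a square wave.
t = np.linspace(0, 1, 2000)
S = np.vstack([np.sin(2 * np.pi * 5 * t), np.sign(np.sin(2 * np.pi * 3 * t))])
A = np.array([[1.0, 0.5], [0.5, 1.0]])   # hypothetical mixing matrix
X = A @ S                                 # observed mixtures

# Whiten the mixtures (zero mean, identity covariance).
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Xw = E @ np.diag(d ** -0.5) @ E.T @ X

# FastICA fixed-point iteration with deflation (tanh nonlinearity).
W = np.zeros((2, 2))
for i in range(2):
    w = rng.normal(size=2)
    for _ in range(200):
        wx = w @ Xw
        g, gp = np.tanh(wx), 1 - np.tanh(wx) ** 2
        w_new = (Xw * g).mean(axis=1) - gp.mean() * w
        w_new -= W[:i].T @ (W[:i] @ w_new)   # decorrelate from earlier components
        w = w_new / np.linalg.norm(w_new)
    W[i] = w
S_est = W @ Xw

# Each estimated component should correlate strongly with one true source
# (up to sign and ordering, which ICA cannot determine).
corr = np.abs(np.corrcoef(np.vstack([S, S_est]))[:2, 2:])
best = corr.max(axis=1)
```

Note the inherent ambiguities: ICA recovers the sources only up to permutation and scaling, which is why the check uses absolute correlations.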
Central pattern generators (CPGs) are biological neural circuits that produce rhythmic outputs in the absence of rhythmic input.[1][2][3] They are the source of the tightly-coupled patterns of neural activity that drive rhythmic and stereotyped motor behaviors like walking, swimming, breathing, or chewing. The ability to function without input from higher brain areas still requires modulatory inputs, and their outputs are not fixed. Flexibility in response to sensory input is a fundamental quality of CPG-driven behavior.[1][2] To be classified as a rhythmic generator, a CPG must satisfy certain criteria.
The term predictive coding is used in several disciplines (including signal-processing technologies and law) in loosely-related or unrelated senses.
The limbic system, also known as the paleomammalian cortex, is a set of brain structures located on both sides of the thalamus, immediately beneath the medial temporal lobe of the cerebrum primarily in the forebrain.
Neural decoding is a neuroscience field concerned with the hypothetical reconstruction of sensory and other stimuli from information that has already been encoded and represented in the brain by networks of neurons.[1] Reconstruction refers to the ability of the researcher to predict what sensory stimuli the subject is receiving based purely on neuron action potentials. Therefore, the main goal of neural decoding is to characterize how the electrical activity of neurons elicits activity and responses in the brain.[2]
Global workspace theory (GWT) is a simple cognitive architecture that has been developed to account qualitatively for a large set of matched pairs of conscious and unconscious processes. It was proposed by Bernard Baars (1988, 1997, 2002). Brain interpretations and computational simulations of GWT are the focus of current research.
Neural coding (or neural representation) is a neuroscience field concerned with characterising the hypothetical relationship between the stimulus and the individual or ensemble neuronal responses, and the relationship among the electrical activity of the neurons in the ensemble.[1][2] Based on the theory that sensory and other information is represented in the brain by networks of neurons, it is thought that neurons can encode both digital and analog information.[3]
A neural pathway is the connection formed by axons that project from neurons to make synapses onto neurons in another location, to enable a signal to be sent from one region of the nervous system to another. Neurons are connected by a single axon, or by a bundle of axons known as a nerve tract, or fasciculus. Shorter neural pathways are found within grey matter in the brain, whereas longer projections, made up of myelinated axons, constitute white matter.
A nerve net consists of interconnected neurons lacking a brain or any form of cephalization. While organisms with bilateral body symmetry are normally associated with a central nervous system, organisms with radial symmetry are associated with nerve nets. Nerve nets can be found in members of the Cnidaria, Ctenophora, and Echinodermata phyla, all of which are found in marine environments. Nerve nets can provide animals with the ability to sense objects through the use of the sensory neurons within the nerve net.
Involuntary memory, also known as involuntary explicit memory, involuntary conscious memory, involuntary aware memory, madeleine moment, mind pops and most commonly, involuntary autobiographical memory, is a sub-component of memory that occurs when cues encountered in everyday life evoke recollections of the past without conscious effort. Voluntary memory, its binary opposite, is characterized by a deliberate effort to recall the past.
LeNet is a convolutional neural network structure proposed by Yann LeCun et al. in 1989. In general, LeNet refers to LeNet-5 and is a simple convolutional neural network. Convolutional neural networks are a kind of feed-forward neural network whose artificial neurons respond to part of the surrounding cells within their coverage range, and they perform well in large-scale image processing.
In computer science, online machine learning is a method of machine learning in which data becomes available in a sequential order and is used to update the best predictor for future data at each step, as opposed to batch learning techniques which generate the best predictor by learning on the entire training data set at once. Online learning is a common technique used in areas of machine learning where it is computationally infeasible to train over the entire dataset, requiring the need of out-of-core algorithms. It is also used in situations where it is necessary for the algorithm to dynamically adapt to new patterns in the data, or when the data itself is generated as a function of time, e.g., stock price prediction. Online learning algorithms may be prone to catastrophic interference, a problem that can be addressed by incremental learning approaches.
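A minimal sketch of the sequential setting, assuming a made-up streaming linear-regression task: each example arrives once, updates the predictor with a single stochastic gradient step, and is then discarded, so the full dataset is never held in memory.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical stream: data arrives one (x, y) pair at a time and the
# predictor is updated immediately, never revisiting past examples.
true_w = np.array([2.0, -3.0])
w = np.zeros(2)
lr = 0.05

for step in range(5000):
    x = rng.normal(size=2)                   # one example arrives
    y = true_w @ x + 0.01 * rng.normal()     # noisy target
    err = w @ x - y
    w -= lr * err * x                        # single SGD step, then discard x

final_error = float(np.linalg.norm(w - true_w))
```

The same update rule is what makes online learners vulnerable to catastrophic interference: if the data distribution shifts, the weights track the new pattern and overwrite the old one.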
Spiking neural networks (SNNs) are artificial neural networks that more closely mimic natural neural networks. In addition to neuronal and synaptic state, SNNs incorporate the concept of time into their operating model. The idea is that neurons in the SNN do not transmit information at each propagation cycle, but rather transmit information only when a membrane potential – an intrinsic quality of the neuron related to its membrane electrical charge – reaches a specific value, called the threshold. When the membrane potential reaches the threshold, the neuron fires, and generates a signal that travels to other neurons which, in turn, increase or decrease their potentials in response to this signal. A neuron model that fires at the moment of threshold crossing is also called a spiking neuron model.
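The threshold-and-fire behavior described above can be illustrated with a single leaky integrate-and-fire neuron, one of the simplest spiking neuron models (the time constants, threshold, and input current below are illustrative values, not taken from any particular study):

```python
# Leaky integrate-and-fire neuron: the membrane potential integrates input
# current, leaks back toward rest, and emits a spike when it crosses the
# threshold, after which it is reset. All parameter values are illustrative.
dt, tau, R = 0.1, 10.0, 10.0            # time step, membrane time constant, resistance
v_rest, v_reset, v_thresh = 0.0, 0.0, 1.0
I = 0.15                                # constant input current

v = v_rest
spike_times = []
for step in range(int(200 / dt)):       # simulate 200 ms
    v += dt / tau * (-(v - v_rest) + R * I)
    if v >= v_thresh:
        spike_times.append(step * dt)   # the neuron fires at threshold crossing
        v = v_reset                     # and its potential is reset

n_spikes = len(spike_times)
```

Because the steady-state potential (R·I = 1.5) exceeds the threshold, the neuron fires periodically; information is carried by the timing of these discrete events rather than by a continuous activation value.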
Instrumental convergence is the hypothetical tendency for most sufficiently intelligent agents to pursue potentially unbounded instrumental goals provided that their ultimate goals are themselves unlimited.
Neuroevolution of augmenting topologies
NeuroEvolution of Augmenting Topologies (NEAT) is a genetic algorithm (GA) for the generation of evolving artificial neural networks (a neuroevolution technique) developed by Ken Stanley in 2002 while at The University of Texas at Austin. It alters both the weighting parameters and structures of networks, attempting to find a balance between the fitness of evolved solutions and their diversity. It is based on applying three key techniques: tracking genes with history markers to allow crossover among topologies, applying speciation (the evolution of species) to preserve innovations, and developing topologies incrementally from simple initial structures (complexifying).
Gradient descent is a first-order iterative optimization algorithm for finding a local minimum of a differentiable function. The idea is to take repeated steps in the opposite direction of the gradient of the function at the current point, because this is the direction of steepest descent. Conversely, stepping in the direction of the gradient will lead to a local maximum of that function; the procedure is then known as gradient ascent.
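A minimal worked example on a hand-picked quadratic with a known minimum at (3, -1); the step size and iteration count are arbitrary illustrative choices.

```python
# Gradient descent on f(x, y) = (x - 3)**2 + 2 * (y + 1)**2.
def grad(x, y):
    # Analytic gradient of f.
    return 2 * (x - 3), 4 * (y + 1)

x, y, lr = 0.0, 0.0, 0.1
for _ in range(200):
    gx, gy = grad(x, y)
    x -= lr * gx   # step against the gradient: steepest descent
    y -= lr * gy
```

Flipping the sign of the two update lines would implement gradient ascent and drive the iterate away from the minimum instead.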
New mysterianism—or commonly just mysterianism—is a philosophical position proposing that the hard problem of consciousness cannot be resolved by humans. The unresolvable problem is how to explain the existence of qualia (individual instances of subjective, conscious experience). In terms of the various schools of philosophy of mind, mysterianism is a form of nonreductive physicalism. Some mysterians state their case uncompromisingly (Colin McGinn has said that consciousness is a mystery that human intelligence will never unravel); others believe merely that consciousness is not within the grasp of present human understanding, but may be comprehensible to future advances of science and technology.
Metamemory or Socratic awareness, a type of metacognition, is both the introspective knowledge of one's own memory capabilities (and strategies that can aid memory) and the processes involved in memory self-monitoring.[1] This self-awareness of memory has important implications for how people learn and use memories. When studying, for example, students make judgments of whether they have successfully learned the assigned material and use these decisions, known as judgments of learning, to allocate study time.[2]
Evolution is change in the heritable characteristics of biological populations over successive generations. These characteristics are the expressions of genes that are passed on from parent to offspring during reproduction. Different characteristics tend to exist within any given population as a result of mutation, genetic recombination and other sources of genetic variation. Evolution occurs when evolutionary processes such as natural selection and genetic drift act on this variation, resulting in certain characteristics becoming more common or rare within a population. It is this process of evolution that has given rise to biodiversity at every level of biological organisation, including the levels of species, individual organisms and molecules.
Development of the nervous system
The development of the nervous system, or neural development, or neurodevelopment, refers to the processes that generate, shape, and reshape the nervous system of animals, from the earliest stages of embryonic development to adulthood. The field of neural development draws on both neuroscience and developmental biology to describe and provide insight into the cellular and molecular mechanisms by which complex nervous systems develop, from nematodes and fruit flies to mammals.
A connectome is a comprehensive map of neural connections in the brain, and may be thought of as its wiring diagram. More broadly, a connectome would include the mapping of all neural connections within an organism's nervous system.
Programmable matter is matter which has the ability to change its physical properties (shape, density, moduli, conductivity, optical properties, etc.) in a programmable fashion, based upon user input or autonomous sensing. Programmable matter is thus linked to the concept of a material which inherently has the ability to perform information processing.
Maximum entropy thermodynamics
In physics, maximum entropy thermodynamics (colloquially, MaxEnt thermodynamics) views equilibrium thermodynamics and statistical mechanics as inference processes. More specifically, MaxEnt applies inference techniques rooted in Shannon information theory, Bayesian probability, and the principle of maximum entropy. These techniques are relevant to any situation requiring prediction from incomplete or insufficient data (e.g., image reconstruction, signal processing, spectral analysis, and inverse problems). MaxEnt thermodynamics began with two papers by Edwin T. Jaynes published in the 1957 Physical Review.[1][2]
3D Virtual Creature Evolution, abbreviated to 3DVCE, is an artificial evolution simulation program created by Lee Graham. The website is currently down. Its purpose is to visualize and research common themes in body plans and strategies to achieve a fitness function of the artificial organisms generated and maintained by the system in their given environment. The program was inspired by Karl Sims’ 1994 artificial evolution program, Evolved Virtual Creatures. The program is run by volunteers who download it from the home website and return information from completed simulations. It is currently available on Windows and in some cases Linux.
Mixture of experts (MoE) refers to a machine learning technique where multiple experts (learners) are used to divide the problem space into homogeneous regions.[1] An example from the computer vision domain is combining a neural network model for human detection with another for pose estimation. If the output is conditioned on multiple levels of probabilistic gating functions, the mixture is called a hierarchical mixture of experts.[2]
Gated recurrent units (GRUs) are a gating mechanism in recurrent neural networks, introduced in 2014 by Kyunghyun Cho et al.[1] The GRU is like a long short-term memory (LSTM) with a forget gate,[2] but has fewer parameters than LSTM, as it lacks an output gate.[3] GRU's performance on certain tasks of polyphonic music modeling, speech signal modeling and natural language processing was found to be similar to that of LSTM.[4][5] GRUs have been shown to exhibit better performance on certain smaller and less frequent datasets.[6][7]
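A single GRU step can be sketched directly from the standard formulation (update gate, reset gate, candidate state); the sizes and random weights below are placeholders for illustration, not trained values.

```python
import numpy as np

rng = np.random.default_rng(8)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Placeholder dimensions and untrained random weights.
n_in, n_h = 3, 4
Wz, Uz = rng.normal(size=(n_h, n_in)), rng.normal(size=(n_h, n_h))
Wr, Ur = rng.normal(size=(n_h, n_in)), rng.normal(size=(n_h, n_h))
Wh, Uh = rng.normal(size=(n_h, n_in)), rng.normal(size=(n_h, n_h))

def gru_step(x, h):
    z = sigmoid(Wz @ x + Uz @ h)              # update gate
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate state
    return (1 - z) * h + z * h_tilde          # interpolate old and new state

h = np.zeros(n_h)
for x in rng.normal(size=(5, n_in)):
    h = gru_step(x, h)
```

The update gate z plays the combined role of the LSTM's forget and input gates, and there is no separate output gate, which is where the parameter saving comes from.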
Synaptic gating is the ability of neural circuits to gate inputs by either suppressing or facilitating specific synaptic activity. Selective inhibition of certain synapses has been studied thoroughly, and recent studies have supported the existence of permissively gated synaptic transmission. In general, synaptic gating involves a mechanism of central control over neuronal output. It includes a sort of gatekeeper neuron, which has the ability to influence transmission of information to selected targets independently of the parts of the synapse upon which it exerts its action.
Holographic associative memory
Holographic associative memory (HAM) is an information storage and retrieval system based on the principles of holography. Holograms are made by using two beams of light, called a reference beam and an object beam. They produce a pattern on the film that contains them both. Afterwards, by reproducing the reference beam, the hologram recreates a visual image of the original object. In theory, one could use the object beam to do the same thing: reproduce the original reference beam. In HAM, the pieces of information act like the two beams. Each can be used to retrieve the other from the pattern. It can be thought of as an artificial neural network which mimics the way the brain uses information. The information is presented in abstract form by a complex vector which may be expressed directly by a waveform possessing frequency and magnitude. This waveform is analogous to electrochemical impulses believed to transmit information between biological neuron cells.
Adaptive resonance theory (ART) is a theory developed by Stephen Grossberg and Gail Carpenter on aspects of how the brain processes information. It describes a number of neural network models which use supervised and unsupervised learning methods, and address problems such as pattern recognition and prediction.
In computational intelligence (CI), an evolutionary algorithm (EA) is a subset of evolutionary computation,[1] a generic population-based metaheuristic optimization algorithm. An EA uses mechanisms inspired by biological evolution, such as reproduction, mutation, recombination, and selection. Candidate solutions to the optimization problem play the role of individuals in a population, and the fitness function determines the quality of the solutions (see also loss function). Evolution of the population then takes place after the repeated application of the above operators.
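The operator loop described above can be sketched minimally; this toy EA uses real-valued individuals, Gaussian mutation, and truncation selection on a hand-picked fitness function with its optimum at x = 5 (recombination is omitted for brevity, and all parameters are illustrative).

```python
import random

random.seed(4)

def fitness(x):
    # Maximize -(x - 5)^2; the optimum is x = 5.
    return -(x - 5.0) ** 2

# Initial population of candidate solutions.
population = [random.uniform(-10, 10) for _ in range(20)]

for generation in range(100):
    # Reproduction with mutation: each parent yields one perturbed offspring.
    offspring = [p + random.gauss(0, 0.5) for p in population]
    # Selection: keep the 20 fittest of parents plus offspring.
    population = sorted(population + offspring, key=fitness, reverse=True)[:20]

best = max(population, key=fitness)
```

Keeping parents in the selection pool (a "plus" strategy) makes the best fitness monotonically non-decreasing across generations.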
Graded potentials are changes in membrane potential that vary in size, as opposed to being all-or-none. They include diverse potentials such as receptor potentials, electrotonic potentials, subthreshold membrane potential oscillations, slow-wave potential, pacemaker potentials, and synaptic potentials, which scale with the magnitude of the stimulus. They arise from the summation of the individual actions of ligand-gated ion channel proteins, and decrease over time and space. They do not typically involve voltage-gated sodium and potassium channels. These impulses are incremental and may be excitatory or inhibitory. They occur at the postsynaptic dendrite in response to presynaptic neuron firing and release of neurotransmitter, or may occur in skeletal, smooth, or cardiac muscle in response to nerve input. The magnitude of a graded potential is determined by the strength of the stimulus.
The soliton hypothesis in neuroscience is a model that claims to explain how action potentials are initiated and conducted along axons, based on a thermodynamic theory of nerve pulse propagation. It proposes that the signals travel along the cell's membrane in the form of certain kinds of solitary sound pulses that can be modeled as solitons. The model is proposed as an alternative to the Hodgkin–Huxley model, in which action potentials arise when voltage-gated ion channels in the membrane open and allow sodium ions to enter the cell; the resulting depolarization opens nearby voltage-gated sodium channels, thus propagating the action potential, and the transmembrane potential is restored by the delayed opening of potassium channels. Soliton hypothesis proponents assert that energy is mainly conserved during propagation, apart from dissipation losses, and that the temperature changes measured during a nerve pulse are inconsistent with the Hodgkin–Huxley model.
Spike-timing-dependent plasticity
Spike-timing-dependent plasticity (STDP) is a biological process that adjusts the strength of connections between neurons in the brain. The process adjusts the connection strengths based on the relative timing of a particular neuron's output and input action potentials (or spikes). The STDP process partially explains the activity-dependent development of nervous systems, especially with regard to long-term potentiation and long-term depression.
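The timing dependence is often summarized by a pair-based rule: a presynaptic spike shortly before a postsynaptic spike potentiates the synapse, while the reverse ordering depresses it. A sketch of that rule, with illustrative amplitudes and time constant (real fits vary by synapse type):

```python
import math

# Pair-based STDP window; amplitudes and time constant are illustrative.
A_plus, A_minus, tau = 0.1, 0.12, 20.0

def stdp_dw(dt):
    """Weight change for spike lag dt = t_post - t_pre, in ms."""
    if dt > 0:
        return A_plus * math.exp(-dt / tau)    # pre before post: potentiation (LTP)
    else:
        return -A_minus * math.exp(dt / tau)   # post before pre: depression (LTD)

w = 0.5
w += stdp_dw(10.0)    # causal pairing strengthens the synapse
w += stdp_dw(-10.0)   # anti-causal pairing weakens it
```

The exponential windows mean that only spike pairs within a few tens of milliseconds of each other change the weight appreciably.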
Early long-term potentiation (E-LTP) is the first phase of long-term potentiation (LTP), a well-studied form of synaptic plasticity, and consists of an increase in synaptic strength.[1] LTP could be produced by repetitive stimulation of the presynaptic terminals, and it is believed to play a role in memory function in the hippocampus, amygdala and other cortical brain structures in mammals.[2][3]
In neuroethology and the study of learning, anti-Hebbian learning describes a particular class of learning rule by which synaptic plasticity can be controlled. These rules are based on a reversal of Hebb's postulate, and therefore can be simplistically understood as dictating reduction of the strength of synaptic connectivity between neurons following a scenario in which a neuron directly contributes to production of an action potential in another neuron.
Dendrites, also dendrons, are branched protoplasmic extensions of a nerve cell that propagate the electrochemical stimulation received from other neural cells to the cell body, or soma, of the neuron from which the dendrites project. Electrical stimulation is transmitted onto dendrites by upstream neurons via synapses which are located at various points throughout the dendritic tree. Dendrites play a critical role in integrating these synaptic inputs and in determining the extent to which action potentials are produced by the neuron. Dendritic arborization, also known as dendritic branching, is a multi-step biological process by which neurons form new dendritic trees and branches to create new synapses. Dendritic morphology, such as branch density and grouping patterns, is highly correlated to the function of the neuron. Malformation of dendrites is also tightly correlated to impaired nervous system function. Some disorders that are associated with the malformation of dendrites are autism, depression, schizophrenia, Down syndrome and anxiety.
A dendritic spine is a small membranous protrusion from a neuron's dendrite that typically receives input from a single axon at the synapse. Dendritic spines serve as a storage site for synaptic strength and help transmit electrical signals to the neuron's cell body. Most spines have a bulbous head, and a thin neck that connects the head of the spine to the shaft of the dendrite. The dendrites of a single neuron can contain hundreds to thousands of spines. In addition to spines providing an anatomical substrate for memory storage and synaptic transmission, they may also serve to increase the number of possible contacts between neurons. It has also been suggested that changes in the activity of neurons have a positive effect on spine morphology.
In machine learning, backpropagation is a widely used algorithm for training feedforward neural networks. Generalizations of backpropagation exist for other artificial neural networks (ANNs), and for functions generally. These classes of algorithms are all referred to generically as backpropagation. In fitting a neural network, backpropagation computes the gradient of the loss function with respect to the weights of the network for a single input–output example, and does so efficiently, unlike a naive direct computation of the gradient with respect to each weight individually. This efficiency makes it feasible to use gradient methods for training multilayer networks, updating weights to minimize loss; gradient descent, or variants such as stochastic gradient descent, are commonly used. The backpropagation algorithm works by computing the gradient of the loss function with respect to each weight by the chain rule, computing the gradient one layer at a time, iterating backward from the last layer to avoid redundant calculations of intermediate terms in the chain rule; this is an example of dynamic programming.
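The layer-by-layer chain rule can be made concrete on a tiny two-layer network (sizes, weights, and the squared-error loss below are illustrative choices); the backward pass reuses the forward pass's intermediates, and one gradient entry is verified against a finite difference.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative two-layer network with a tanh hidden layer.
x = rng.normal(size=3)
target = 1.0
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(1, 4))

def loss(W1_, W2_):
    h_ = np.tanh(W1_ @ x)
    y_ = W2_ @ h_
    return 0.5 * float((y_[0] - target) ** 2)

# Forward pass, keeping intermediates for reuse.
h = np.tanh(W1 @ x)
y = W2 @ h

# Backward pass: propagate the error derivative from the last layer back.
delta2 = y - target                       # dLoss/dy
gW2 = np.outer(delta2, h)                 # gradient for the output weights
delta1 = (W2.T @ delta2) * (1 - h ** 2)   # chain rule back through tanh
gW1 = np.outer(delta1, x)                 # gradient for the hidden weights

# Sanity-check one entry against a central finite difference.
eps = 1e-6
W1p, W1m = W1.copy(), W1.copy()
W1p[0, 0] += eps
W1m[0, 0] -= eps
numeric = (loss(W1p, W2) - loss(W1m, W2)) / (2 * eps)
```

The naive alternative, perturbing each weight and re-running the forward pass as in the finite-difference check, costs one full forward pass per weight; backpropagation gets all gradients from a single forward and backward sweep.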
The synaptotropic hypothesis, also called the synaptotrophic hypothesis, is a neurobiological hypothesis of neuronal growth and synapse formation. The hypothesis was first formulated by J.E. Vaughn in 1988,[1] and remains a focus of current research efforts.[2] The synaptotropic hypothesis proposes that input from a presynaptic to a postsynaptic cell (and maturation of excitatory synaptic inputs) eventually can change the course of synapse formation at dendritic and axonal arbors. This synapse formation is required for the development of neuronal structure in the functioning brain.[2]
Neuroplasticity, also known as neural plasticity, or brain plasticity, is the ability of neural networks in the brain to change through growth and reorganization. These changes range from individual neuron pathways making new connections, to systematic adjustments like cortical remapping. Examples of neuroplasticity include circuit and network changes that result from learning a new ability, environmental influences, practice, and psychological stress.[1][2][3][4][5][6]
Coincidence detection in neurobiology
Coincidence detection in the context of neurobiology is a process by which a neuron or a neural circuit can encode information by detecting the occurrence of temporally close but spatially distributed input signals. Coincidence detectors influence neuronal information processing by reducing temporal jitter,[1] reducing spontaneous activity, and forming associations between separate neural events. This concept has led to a greater understanding of neural processes and the formation of computational maps in the brain.
The generalized Hebbian algorithm (GHA), also known in the literature as Sanger's rule, is a linear feedforward neural network model for unsupervised learning with applications primarily in principal components analysis. First defined in 1989,[1] it is similar to Oja's rule in its formulation and stability, except it can be applied to networks with multiple outputs. The name originates because of the similarity between the algorithm and a hypothesis made by Donald Hebb[2] about the way in which synaptic strengths in the brain are modified in response to experience, i.e., that changes are proportional to the correlation between the firing of pre- and post-synaptic neurons.[3]
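A sketch of Sanger's rule on synthetic data with a hand-picked covariance (learning rate, sample count, and sizes are illustrative): the update is the Hebbian term minus each unit's projection onto the outputs of earlier units, which is what lets multiple outputs extract successive principal components.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic data with a dominant direction, drawn from covariance C.
C = np.array([[3.0, 1.0], [1.0, 1.0]])
X = rng.normal(size=(5000, 2)) @ np.linalg.cholesky(C).T

W = rng.normal(size=(2, 2)) * 0.1   # two output units, two inputs
lr = 0.01
for x in X:
    y = W @ x
    # Sanger's rule: Hebbian term minus projections onto earlier outputs
    # (the lower-triangular mask implements the sum over k <= i).
    W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

# The first learned weight vector should align with the top eigenvector of C.
eigvals, eigvecs = np.linalg.eigh(C)
top = eigvecs[:, -1]
align = abs((W[0] / np.linalg.norm(W[0])) @ top)
```

With the lower-triangular term removed for all but the first unit, the rule reduces to independent copies of Oja's rule, which would all converge to the same first component.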
In neuroscience and computer science, synaptic weight refers to the strength or amplitude of a connection between two nodes, corresponding in biology to the amount of influence the firing of one neuron has on another. The term is typically used in artificial and biological neural network research.[1]
In physiology, an action potential (AP) occurs when the membrane potential of a specific cell location rapidly rises and falls: this depolarization then causes adjacent locations to similarly depolarize. Action potentials occur in several types of animal cells, called excitable cells, which include neurons, muscle cells, endocrine cells and in some plant cells.
Oja's learning rule, or simply Oja's rule, named after Finnish computer scientist Erkki Oja, is a model of how neurons in the brain or in artificial neural networks change connection strength, or learn, over time. It is a modification of the standard Hebb's Rule (see Hebbian learning) that, through multiplicative normalization, solves all stability problems and generates an algorithm for principal components analysis. This is a computational form of an effect which is believed to happen in biological neurons.
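A minimal sketch of the rule on synthetic two-dimensional data (covariance, learning rate, and sample count are illustrative): the multiplicative normalization term keeps the weight vector bounded, and it converges toward the first principal component with roughly unit norm.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic data drawn from a hand-picked covariance C.
C = np.array([[2.0, 0.8], [0.8, 1.0]])
X = rng.normal(size=(5000, 2)) @ np.linalg.cholesky(C).T

w = rng.normal(size=2) * 0.1
lr = 0.01
for x in X:
    y = w @ x
    # Oja's rule: Hebbian growth (y * x) with multiplicative decay (y^2 * w).
    w += lr * y * (x - y * w)

norm = float(np.linalg.norm(w))
eigvals, eigvecs = np.linalg.eigh(C)
align = abs(w @ eigvecs[:, -1]) / norm
```

Plain Hebbian learning (dropping the `- y * w` term) would make the norm of w grow without bound; the decay term is exactly what stabilizes it.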
A neuron or nerve cell is an electrically excitable cell that communicates with other cells via specialized connections called synapses. It is the main component of nervous tissue in all animals except sponges and placozoa. Plants and fungi do not have nerve cells.
An artificial neuron is a mathematical function conceived as a model of biological neurons in a neural network. Artificial neurons are elementary units in an artificial neural network.[1] The artificial neuron receives one or more inputs (representing excitatory postsynaptic potentials and inhibitory postsynaptic potentials at neural dendrites) and sums them to produce an output (or activation, representing a neuron's action potential which is transmitted along its axon). Usually each input is separately weighted, and the sum is passed through a non-linear function known as an activation function or transfer function. The transfer functions usually have a sigmoid shape, but they may also take the form of other non-linear functions, piecewise linear functions, or step functions. They are also often monotonically increasing, continuous, differentiable and bounded. The thresholding function has inspired building logic gates referred to as threshold logic, applicable to building logic circuits resembling brain processing; for example, new devices such as memristors have been explored for this purpose.
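The weighted sum plus activation can be written in a few lines; the weights, bias, and inputs below are made-up illustrative values, and the logistic sigmoid stands in for the generic transfer function.

```python
import math

def sigmoid(z):
    # Logistic sigmoid: a common sigmoid-shaped transfer function.
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    # Weighted sum of inputs, then the non-linear activation.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(z)

out = neuron([1.0, 0.5, -1.0], [0.2, -0.4, 0.1], bias=0.05)
```

Replacing `sigmoid` with a step function turns the unit into a classic threshold gate, the form that inspired threshold logic.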
Neuroscience is the scientific study of the nervous system. It is a multidisciplinary science that combines physiology, anatomy, molecular biology, developmental biology, cytology, computer science and mathematical modeling to understand the fundamental and emergent properties of neurons and neural circuits. The understanding of the biological basis of learning, memory, behavior, perception, and consciousness has been described by Eric Kandel as the ultimate challenge of the biological sciences.
Biological neuron models, also known as spiking neuron models, are mathematical descriptions of the properties of certain cells in the nervous system that generate sharp electrical potentials across their cell membrane, roughly one millisecond in duration, called action potentials or spikes. Since spikes are transmitted along the axon and synapses from the sending neuron to many other neurons, spiking neurons are considered to be a major information processing unit of the nervous system. Spiking neuron models can be divided into different categories: the most detailed mathematical models are biophysical neuron models that describe the membrane voltage as a function of the input current and the activation of ion channels. Mathematically simpler are integrate-and-fire models that describe the membrane voltage as a function of the input current and predict the spike times without a description of the biophysical processes that shape the time course of an action potential. Even more abstract models only predict output spikes as a function of the stimulation, where the stimulation can occur through sensory input or pharmacologically. This article provides a short overview of different spiking neuron models.
Artificial neural networks (ANNs), usually simply called neural networks (NNs), are computing systems inspired by the biological neural networks that constitute animal brains.
In deep learning, a convolutional neural network is a class of artificial neural network, most commonly applied to analyze visual imagery. They are also known as shift invariant or space invariant artificial neural networks (SIANN), based on the shared-weight architecture of the convolution kernels or filters that slide along input features and provide translation equivariant responses known as feature maps. Counter-intuitively, most convolutional neural networks are only equivariant, as opposed to invariant, to translation. They have applications in image and video recognition, recommender systems, image classification, image segmentation, medical image analysis, natural language processing, brain-computer interfaces, and financial time series.
The neocognitron is a hierarchical, multilayered artificial neural network proposed by Kunihiko Fukushima in 1979.[1] It has been used for Japanese handwritten character recognition and other pattern recognition tasks, and served as the inspiration for convolutional neural networks.[2]
Harmonic analysis is a branch of mathematics concerned with the representation of functions or signals as the superposition of basic waves, and the study of and generalization of the notions of Fourier series and Fourier transforms. In the past two centuries, it has become a vast subject with applications in areas as diverse as number theory, representation theory, signal processing, quantum mechanics, tidal analysis and neuroscience.
In neuroscience, synaptic plasticity is the ability of synapses to strengthen or weaken over time, in response to increases or decreases in their activity.[1] Since memories are postulated to be represented by vastly interconnected neural circuits in the brain, synaptic plasticity is one of the important neurochemical foundations of learning and memory (see Hebbian theory).
Neural backpropagation is the phenomenon in which, after the action potential of a neuron creates a voltage spike down the axon (normal propagation), another impulse is generated from the soma and propagates toward the apical portions of the dendritic arbor or dendrites, from which much of the original input current originated. In addition to active backpropagation of the action potential, there is also passive electrotonic spread. While there is ample evidence to prove the existence of backpropagating action potentials, the function of such action potentials and the extent to which they invade the most distal dendrites remains highly controversial.
Neurotransmission is the process by which signaling molecules called neurotransmitters are released by the axon terminal of a neuron, and bind to and react with the receptors on the dendrites of another neuron a short distance away. A similar process occurs in retrograde neurotransmission, where the dendrites of the postsynaptic neuron release retrograde neurotransmitters that signal through receptors that are located on the axon terminal of the presynaptic neuron, mainly at GABAergic and glutamatergic synapses.
Dendrodendritic synapses are connections between the dendrites of two different neurons. This is in contrast to the more common axodendritic synapse (chemical synapse), where the axon sends signals and the dendrite receives them. Dendrodendritic synapses are activated in a similar fashion to axodendritic synapses with respect to using a chemical synapse. These chemical synapses receive a depolarizing signal from an incoming action potential, which results in an influx of calcium ions that permits the release of neurotransmitters to propagate the signal to the postsynaptic cell. There is also evidence of bi-directionality in signaling at dendrodendritic synapses. Ordinarily, one of the dendrites will display inhibitory effects while the other will display excitatory effects.[1] The actual signaling mechanism utilizes Na+ and Ca2+ pumps in a similar manner to those found in axodendritic synapses.[2]
An electrical synapse is a mechanical and electrically conductive link between two neighboring neurons that is formed at a narrow gap between the pre- and postsynaptic neurons known as a gap junction. At gap junctions, such cells approach within about 3.8 nm of each other, a much shorter distance than the 20- to 40-nanometer distance that separates cells at chemical synapses. In many animals, electrical synapse-based systems co-exist with chemical synapses.
Unsupervised learning (UL) is a type of algorithm that learns patterns from untagged data. The hope is that, through mimicry, the machine is forced to build a compact internal representation of its world and then generate imaginative content. In contrast to supervised learning (SL), where data is tagged by a human, e.g. as "car" or "fish", UL exhibits self-organization that captures patterns as neuronal predilections or probability densities. The other levels in the supervision spectrum are reinforcement learning, where the machine is given only a numerical performance score as its guidance, and semi-supervised learning, where a smaller portion of the data is tagged. Two broad methods in UL are neural networks and probabilistic methods.
A self-organizing map (SOM) or self-organizing feature map (SOFM) is an unsupervised machine learning technique used to produce a low-dimensional representation of a higher dimensional data set while preserving the topological structure of the data. For example, a data set with p variables measured in n observations could be represented as clusters of observations with similar values for the variables. These clusters then could be visualized as a two-dimensional map such that observations in proximal clusters have more similar values than observations in distal clusters. This can make high-dimensional data easier to visualize and analyze.
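A minimal one-dimensional self-organizing map can be sketched as follows (illustrative only; the grid size, learning-rate schedule, and Gaussian neighborhood function below are common but arbitrary choices):

```python
import numpy as np

def train_som(data, n_units=10, epochs=50, lr=0.5, sigma=2.0, seed=0):
    """Minimal 1-D SOM: move the best-matching unit (BMU) and its
    grid neighbours toward each sample; neighborhood shrinks over time."""
    rng = np.random.default_rng(seed)
    units = rng.uniform(data.min(0), data.max(0), size=(n_units, data.shape[1]))
    grid = np.arange(n_units)                 # positions on the 1-D map
    for t in range(epochs):
        decay = 1.0 - t / epochs              # anneal rate and neighborhood
        for x in rng.permutation(data):
            bmu = np.argmin(np.linalg.norm(units - x, axis=1))
            h = np.exp(-((grid - bmu) ** 2) / (2 * (sigma * decay + 1e-3) ** 2))
            units += (lr * decay) * h[:, None] * (x - units)
    return units

# two well-separated clusters; trained units spread across both of them,
# with neighbouring units on the grid ending up in nearby regions
data = np.concatenate([np.random.default_rng(1).normal(0, 0.1, (50, 2)),
                       np.random.default_rng(2).normal(5, 0.1, (50, 2))])
units = train_som(data)
```

The Gaussian neighborhood term `h` is what preserves topology: units adjacent on the grid are pulled toward similar regions of the data space.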
Memory allocation is a process that determines which specific synapses and neurons in a neural network will store a given memory.[1][2][3] Although multiple neurons can receive a stimulus, only a subset of the neurons will induce the necessary plasticity for memory encoding. The selection of this subset of neurons is termed neuronal allocation. Similarly, multiple synapses can be activated by a given set of inputs, but specific mechanisms determine which synapses actually go on to encode the memory, and this process is referred to as synaptic allocation. Memory allocation was first discovered in the lateral amygdala by Sheena Josselyn and colleagues in Alcino J. Silva's laboratory.[4]
Competitive learning is a form of unsupervised learning in artificial neural networks, in which nodes compete for the right to respond to a subset of the input data.[1] A variant of Hebbian learning, competitive learning works by increasing the specialization of each node in the network. It is well suited to finding clusters within data.
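The winner-take-all update at the heart of competitive learning can be sketched as follows (an illustrative toy; the clusters, learning rate, and two-node network are arbitrary choices):

```python
import numpy as np

def competitive_step(weights, x, lr=0.1):
    """Winner-take-all: only the node closest to the input moves toward it,
    increasing that node's specialization on this region of the data."""
    winner = np.argmin(np.linalg.norm(weights - x, axis=1))
    weights[winner] += lr * (x - weights[winner])
    return winner

rng = np.random.default_rng(0)
# two clusters around (0, 0) and (4, 4); two competing nodes
data = np.concatenate([rng.normal(0, 0.2, (100, 2)),
                       rng.normal(4, 0.2, (100, 2))])
weights = np.array([[1.0, 1.0], [3.0, 3.0]])
for x in rng.permutation(data):
    competitive_step(weights, x)
# each node drifts to the centre of one cluster
```

After training, each node's weight vector sits near one cluster centre, which is why competitive learning is well suited to finding clusters within data.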
Neural gas is an artificial neural network, inspired by the self-organizing map and introduced in 1991 by Thomas Martinetz and Klaus Schulten.[1] The neural gas is a simple algorithm for finding optimal data representations based on feature vectors. The algorithm was coined neural gas because of the dynamics of the feature vectors during the adaptation process, which distribute themselves like a gas within the data space. It is applied where data compression or vector quantization is an issue, for example speech recognition,[2] image processing[3] or pattern recognition. As a robustly converging alternative to k-means clustering, it is also used for cluster analysis.[4]
Pattern recognition is the automated recognition of patterns and regularities in data. It has applications in statistical data analysis, signal processing, image analysis, information retrieval, bioinformatics, data compression, computer graphics and machine learning. Pattern recognition has its origins in statistics and engineering; some modern approaches to pattern recognition include the use of machine learning, due to the increased availability of big data and a new abundance of processing power. However, these activities can be viewed as two facets of the same field of application, and together they have undergone substantial development over the past few decades. A modern definition of pattern recognition is:
In computer science and operations research, a genetic algorithm (GA) is a metaheuristic inspired by the process of natural selection that belongs to the larger class of evolutionary algorithms (EA). Genetic algorithms are commonly used to generate high-quality solutions to optimization and search problems by relying on biologically inspired operators such as mutation, crossover and selection.
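A toy GA built from the three classic operators can be sketched as follows (illustrative only; the tournament selection scheme, mutation rate, and OneMax fitness function are arbitrary choices):

```python
import random

def genetic_algorithm(fitness, length=20, pop_size=30, generations=60,
                      mutation_rate=0.02, seed=0):
    """Tiny GA: tournament selection, one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():          # selection: fitter of two random individuals
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, length)            # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ 1 if rng.random() < mutation_rate else bit
                     for bit in child]                # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = genetic_algorithm(sum)   # OneMax: fitness is the number of 1-bits
```

On this toy problem the population converges toward the all-ones string, the unique fitness maximum.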
In computer science, evolutionary computation is a family of algorithms for global optimization inspired by biological evolution, and the subfield of artificial intelligence and soft computing studying these algorithms. In technical terms, they are a family of population-based trial and error problem solvers with a metaheuristic or stochastic optimization character.
A group mind, group ego, mind coalescence, or gestalt intelligence in science fiction is a plot device in which multiple minds, or consciousnesses, are linked into a single, collective consciousness or intelligence.[1][2] The first alien hive society was depicted in H. G. Wells's The First Men in the Moon (1901) while the use of human hive minds in literature goes back at least as far as David H. Keller's The Human Termites (published in Wonder Stories in 1929) and Olaf Stapledon's science fiction novel Last and First Men (1930),[3][4] which is the first known use of the term group mind in science fiction.[5][2] The use of the phrase hive mind, however, was first recorded in 1943 in use in bee keeping and its first known use in sci-fi was James H. Schmitz's Second Night of Summer (1950).[6][7] A group mind might be formed by any fictional plot device that facilitates brain to brain communication, such as telepathy.
A cellular automaton is a discrete model of computation studied in automata theory. Cellular automata are also called cellular spaces, tessellation automata, homogeneous structures, cellular structures, tessellation structures, and iterative arrays. Cellular automata have found application in various areas, including physics, theoretical biology and microstructure modeling.
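The simplest case, a one-dimensional binary (elementary) cellular automaton, can be sketched as follows (illustrative; Rule 110 and the periodic boundary are arbitrary choices):

```python
def ca_step(cells, rule=110):
    """One synchronous update of an elementary (1-D, binary) cellular automaton.
    Each cell's next state depends on its left neighbour, itself, and its right
    neighbour, looked up as a 3-bit index into the 8-bit rule number
    (Wolfram numbering, periodic boundary conditions)."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4
                      + cells[i] * 2
                      + cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * 15 + [1] + [0] * 15   # single live cell in the middle
for _ in range(10):
    cells = ca_step(cells)          # pattern spreads from the seed
```

Despite the trivial local rule, iterating such updates can produce complex global behaviour, which is what makes cellular automata useful as models in physics and theoretical biology.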
A New Kind of Science is a book by Stephen Wolfram, published by his company Wolfram Research under the imprint Wolfram Media in 2002. It contains an empirical and systematic study of computational systems such as cellular automata. Wolfram calls these systems simple programs and argues that the scientific philosophy and methods appropriate for the study of simple programs are relevant to other fields of science.
In philosophy, systems theory, science, and art, emergence occurs when an entity is observed to have properties its parts do not have on their own, properties or behaviors which emerge only when the parts interact in a wider whole.
Boids is an artificial life program, developed by Craig Reynolds in 1986, which simulates the flocking behaviour of birds. His paper on this topic was published in 1987 in the proceedings of the ACM SIGGRAPH conference. The name boid corresponds to a shortened version of bird-oid object, which refers to a bird-like object. Incidentally, boid is also a New York Metropolitan dialect pronunciation for bird.
Artificial life is a field of study wherein researchers examine systems related to natural life, its processes, and its evolution, through the use of simulations with computer models, robotics, and biochemistry. The discipline was named by Christopher Langton, an American theoretical biologist, in 1986. In 1987 Langton organized the first conference on the field, in Los Alamos, New Mexico. There are three main kinds of alife, named for their approaches: soft, from software; hard, from hardware; and wet, from biochemistry. Artificial life researchers study traditional biology by trying to recreate aspects of biological phenomena.
Mathematical and theoretical biology
Mathematical and theoretical biology, or biomathematics, is a branch of biology which employs theoretical analysis, mathematical models and abstractions of living organisms to investigate the principles that govern the structure, development and behavior of the systems, as opposed to experimental biology, which conducts experiments to test and validate scientific theories. The field is sometimes called mathematical biology or biomathematics to stress the mathematical side, or theoretical biology to stress the biological side. Theoretical biology focuses more on the development of theoretical principles for biology while mathematical biology focuses on the use of mathematical tools to study biological systems, even though the two terms are sometimes interchanged.
The global brain is a neuroscience-inspired and futurological vision of the planetary information and communications technology network that interconnects all humans and their technological artifacts. As this network stores ever more information, takes over ever more functions of coordination and communication from traditional organizations, and becomes increasingly intelligent, it increasingly plays the role of a brain for the planet Earth.
Living systems are open self-organizing life forms that interact with their environment. These systems are maintained by flows of information, energy and matter.
Viable system theory (VST) concerns cybernetic processes in relation to the development/evolution of dynamical systems. They are considered to be living systems in the sense that they are complex and adaptive, can learn, and are capable of maintaining an autonomous existence, at least within the confines of their constraints. These attributes involve the maintenance of internal stability through adaptation to changing environments. One can distinguish two strands of such theory: formal systems and principally non-formal systems. Formal viable system theory is normally referred to as viability theory, and provides a mathematical approach to explore the dynamics of complex systems set within the context of control theory. In contrast, principally non-formal viable system theory is concerned with descriptive approaches to the study of viability through the processes of control and communication, though these theories may have mathematical descriptions associated with them.
Dynamic network analysis (DNA) is an emergent scientific field that brings together traditional social network analysis (SNA), link analysis (LA), social simulation and multi-agent systems (MAS) within network science and network theory.
An artificial neural network's learning rule or learning process is a method, mathematical logic or algorithm which improves the network's performance and/or training time. Usually, this rule is applied repeatedly over the network. It is done by updating the weights and bias levels of a network when a network is simulated in a specific data environment.[1] A learning rule may accept existing conditions (weights and biases) of the network and will compare the expected result and actual result of the network to give new and improved values for weights and bias.[2] Depending on the complexity of the actual model being simulated, the learning rule of the network can be as simple as an XOR gate or mean squared error, or as complex as the result of a system of differential equations.
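The classic perceptron learning rule is one of the simplest examples of comparing expected and actual output to update weights and bias (an illustrative sketch; the AND task, learning rate, and epoch count are arbitrary choices):

```python
def perceptron_train(samples, epochs=10, lr=0.1):
    """Perceptron learning rule: nudge weights and bias by the error
    (expected output minus actual output) for each training example."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            actual = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - actual            # expected minus actual result
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# learn logical AND (linearly separable, unlike XOR)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = perceptron_train(data)
```

Once every example is classified correctly, the error term is zero and the weights stop changing, so repeated application of the rule is harmless.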
A Boltzmann machine is a type of stochastic recurrent neural network. It is a Markov random field. It was translated from statistical physics for use in cognitive science. The Boltzmann machine is based on a stochastic spin-glass model with an external field, i.e., a Sherrington–Kirkpatrick model that is a stochastic Ising model, applied to machine learning.
A restricted Boltzmann machine (RBM) is a generative stochastic artificial neural network that can learn a probability distribution over its set of inputs.
An autoencoder is a type of artificial neural network used to learn efficient codings of unlabeled data. The encoding is validated and refined by attempting to regenerate the input from the encoding. The autoencoder learns a representation (encoding) for a set of data, typically for dimensionality reduction, by training the network to ignore insignificant data (“noise”).
The information bottleneck method is a technique in information theory introduced by Naftali Tishby, Fernando C. Pereira, and William Bialek.[1] It is designed for finding the best tradeoff between accuracy and complexity (compression) when summarizing (e.g. clustering) a random variable X, given a joint probability distribution p(X,Y) between X and an observed relevant variable Y, and is self-described as providing "a surprisingly rich framework for discussing a variety of problems in signal processing and learning".[1]
The Helmholtz machine (named after Hermann von Helmholtz and his concept of Helmholtz free energy) is a type of artificial neural network that can account for the hidden structure of a set of data by being trained to create a generative model of the original set of data.[1] The hope is that by learning economical representations of the data, the underlying structure of the generative model should reasonably approximate the hidden structure of the data set. A Helmholtz machine contains two networks, a bottom-up recognition network that takes the data as input and produces a distribution over hidden variables, and a top-down generative network that generates values of the hidden variables and the data itself.
A recurrent neural network (RNN) is a class of artificial neural networks where connections between nodes form a directed graph along a temporal sequence. This allows it to exhibit temporal dynamic behavior. Derived from feedforward neural networks, RNNs can use their internal state (memory) to process variable length sequences of inputs. This makes them applicable to tasks such as unsegmented, connected handwriting recognition or speech recognition.
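The recurrence can be sketched in a few lines (illustrative only; the random weights and tiny sequence are arbitrary choices): the hidden state h is the internal memory carried from step to step.

```python
import numpy as np

def rnn_forward(xs, W_xh, W_hh, W_hy):
    """Unroll a vanilla RNN: the hidden state h carries memory of all earlier
    inputs, so the output at each step depends on the whole prefix so far."""
    h = np.zeros(W_hh.shape[0])
    outputs = []
    for x in xs:                              # one step per sequence element
        h = np.tanh(W_xh @ x + W_hh @ h)      # recurrent state update
        outputs.append(W_hy @ h)
    return outputs, h

rng = np.random.default_rng(0)
W_xh = rng.normal(size=(4, 1))
W_hh = rng.normal(size=(4, 4)) * 0.5
W_hy = rng.normal(size=(1, 4))
# same input (0.0) at steps 2 and 3, yet the outputs differ,
# because the hidden state still remembers the earlier 1.0
seq = [np.array([1.0]), np.array([0.0]), np.array([0.0])]
outs, h = rnn_forward(seq, W_xh, W_hh, W_hy)
```

This dependence of the output on earlier inputs, not just the current one, is the temporal dynamic behavior that distinguishes RNNs from feedforward networks.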
One-shot learning is an object categorization problem, found mostly in computer vision. Whereas most machine learning based object categorization algorithms require training on hundreds or thousands of samples/images and very large datasets, one-shot learning aims to learn information about object categories from one, or only a few, training samples/images.
Language of thought hypothesis
The language of thought hypothesis (LOTH),[1] sometimes known as thought ordered mental expression (TOME),[2] is a view in linguistics, philosophy of mind and cognitive science, forwarded by American philosopher Jerry Fodor. It describes the nature of thought as possessing language-like or compositional structure (sometimes known as mentalese). On this view, simple concepts combine in systematic ways (akin to the rules of grammar in language) to build thoughts. In its most basic form, the theory states that thought, like language, has syntax.
Complexity characterises the behaviour of a system or model whose components interact in multiple ways and follow local rules, meaning there is no reasonable higher instruction to define the various possible interactions.[1]
Biological organisation is the hierarchy of complex biological structures and systems that define life using a reductionistic approach. The traditional hierarchy, as detailed below, extends from atoms to biospheres. The higher levels of this scheme are often referred to as an ecological organisation concept, or as the field, hierarchical ecology.
In mathematics, a dynamical system is a system in which a function describes the time dependence of a point in a geometrical space. Examples include the mathematical models that describe the swinging of a clock pendulum, the flow of water in a pipe, and the number of fish each springtime in a lake.
Systems theory is the interdisciplinary study of systems, which are cohesive groups of interrelated, interdependent parts that can be natural or human-made. Every system is bounded by space and time, influenced by its environment, defined by its structure and purpose, and expressed through its functioning. A system may be more than the sum of its parts if it expresses synergy or emergent behavior.
Types of artificial neural networks
There are many types of artificial neural networks (ANN).
Chemical synapses are biological junctions through which neurons' signals can be sent to each other and to non-neuronal cells such as those in muscles or glands. Chemical synapses allow neurons to form circuits within the central nervous system. They are crucial to the biological computations that underlie perception and thought. They allow the nervous system to connect to and control other systems of the body.
Non-spiking neurons are neurons that are located in the central and peripheral nervous systems and function as intermediary relays for sensory-motor neurons. They do not exhibit the characteristic spiking behavior of action potential generating neurons.
The memory-prediction framework is a theory of brain function created by Jeff Hawkins and described in his 2004 book On Intelligence. This theory concerns the role of the mammalian neocortex and its associations with the hippocampi and the thalamus in matching sensory inputs to stored memory patterns and how this process leads to predictions of what will happen in the future.
Computational neuroscience (also known as theoretical neuroscience or mathematical neuroscience) is a branch of neuroscience which employs mathematical models, theoretical analysis and abstractions of the brain to understand the principles that govern the development, structure, physiology and cognitive abilities of the nervous system.[1][2][3][4]
CoDi is a cellular automaton (CA) model for spiking neural networks (SNNs). CoDi is an acronym for Collect and Distribute, referring to the signals and spikes in a neural network.
Bio-inspired computing, short for biologically inspired computing, is a field of study which seeks to solve computer science problems using models of biology. It relates to connectionism, social behavior, and emergence. Within computer science, bio-inspired computing relates to artificial intelligence and machine learning. Bio-inspired computing is a major subset of natural computation.
Natural computing,[1][2] also called natural computation, is a terminology introduced to encompass three classes of methods: 1) those that take inspiration from nature for the development of novel problem-solving techniques; 2) those that are based on the use of computers to synthesize natural phenomena; and 3) those that employ natural materials (e.g., molecules) to compute. The main fields of research that compose these three branches are artificial neural networks, evolutionary algorithms, swarm intelligence, artificial immune systems, fractal geometry, artificial life, DNA computing, and quantum computing, among others.
The expression computational intelligence (CI) usually refers to the ability of a computer to learn a specific task from data or experimental observation. Even though it is commonly considered a synonym of soft computing, there is still no commonly accepted definition of computational intelligence.
A digital organism is a self-replicating computer program that mutates and evolves. Digital organisms are used as a tool to study the dynamics of Darwinian evolution, and to test or verify specific hypotheses or mathematical models of evolution. The study of digital organisms is closely related to the area of artificial life.
Organic computing is computing that behaves and interacts with humans in an organic manner. The term organic is used to describe the system's behavior, and does not imply that such systems are constructed from organic materials. It is based on the insight that we will soon be surrounded by large collections of autonomous systems, which are equipped with sensors and actuators, aware of their environment, communicate freely, and organize themselves in order to perform the actions and services that seem to be required.
Autonomic computing (AC) refers to the self-managing characteristics of distributed computing resources, adapting to unpredictable changes while hiding intrinsic complexity from operators and users. Initiated by IBM in 2001, this initiative ultimately aimed to develop computer systems capable of self-management, to overcome the rapidly growing complexity of computing systems management, and to reduce the barrier that complexity poses to further growth.[1]
Artificial general intelligence
Artificial general intelligence (AGI) is the hypothetical[1] ability of an intelligent agent to understand or learn any intellectual task that a human being can. It is a primary goal of some artificial intelligence research and a common topic in science fiction and futures studies. AGI can also be referred to as strong AI,[2][3][4] full AI,[5] or general intelligent action.[6] Some academic sources reserve the term strong AI for computer programs that can experience sentience, self-awareness and consciousness.[7] As of the late 2010s, AI is speculated to be decades away from AGI.[8][9]
Superintelligence: Paths, Dangers, Strategies
Superintelligence: Paths, Dangers, Strategies is a 2014 book by the Swedish philosopher Nick Bostrom from the University of Oxford. It argues that if machine brains surpass human brains in general intelligence, then this new superintelligence could replace humans as the dominant lifeform on Earth. Sufficiently intelligent machines could improve their own capabilities faster than human computer scientists,[2] and the outcome could be an existential catastrophe for humans.[3]
Transfer learning (TL) is a research problem in machine learning (ML) that focuses on storing knowledge gained while solving one problem and applying it to a different but related problem.[1] For example, knowledge gained while learning to recognize cars could apply when trying to recognize trucks. This area of research bears some relation to the long history of psychological literature on transfer of learning, although practical ties between the two fields are limited. From the practical standpoint, reusing or transferring information from previously learned tasks for the learning of new tasks has the potential to significantly improve the sample efficiency of a reinforcement learning agent.[2]
Hierarchical temporal memory (HTM) is a biologically constrained machine intelligence technology developed by Numenta. Originally described in the 2004 book On Intelligence by Jeff Hawkins with Sandra Blakeslee, HTM is primarily used today for anomaly detection in streaming data. The technology is based on neuroscience and the physiology and interaction of pyramidal neurons in the neocortex of the mammalian (in particular, human) brain.
Large width limits of neural networks
Artificial neural networks are a class of models used in machine learning, and inspired by biological neural networks. They are the core component of modern deep learning algorithms. Computation in artificial neural networks is usually organized into sequential layers of artificial neurons. The number of neurons in a layer is called the layer width. Theoretical analysis of artificial neural networks sometimes considers the limiting case that layer width becomes large or infinite. This limit enables simple analytic statements to be made about neural network predictions, training dynamics, generalization, and loss surfaces. This wide layer limit is also of practical interest, since finite width neural networks often perform strictly better as layer width is increased.
In computer programming, gene expression programming (GEP) is an evolutionary algorithm that creates computer programs or models. These computer programs are complex tree structures that learn and adapt by changing their sizes, shapes, and composition, much like a living organism. And like living organisms, the computer programs of GEP are also encoded in simple linear chromosomes of fixed length. Thus, GEP is a genotype–phenotype system, benefiting from a simple genome to keep and transmit the genetic information and a complex phenotype to explore the environment and adapt to it.
Extreme learning machines are feedforward neural networks for classification, regression, clustering, sparse approximation, compression and feature learning with a single layer or multiple layers of hidden nodes, where the parameters of hidden nodes need not be tuned. These hidden nodes can be randomly assigned and never updated, or can be inherited from their ancestors without being changed. In most cases, the output weights of hidden nodes are learned in a single step, which essentially amounts to learning a linear model. The name extreme learning machine (ELM) was given to such models by its main inventor Guang-Bin Huang.
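The two-step recipe, a random untrained hidden layer followed by a one-shot least-squares solve for the output weights, can be sketched as follows (illustrative; the sine-regression task, tanh activation, and hidden-layer size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy regression task: learn y = sin(x) on [0, 2*pi]
X = np.linspace(0, 2 * np.pi, 200)[:, None]
y = np.sin(X).ravel()

# 1) random hidden layer: weights assigned once and never updated
n_hidden = 50
W = rng.normal(size=(1, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)               # hidden-layer activations

# 2) output weights solved in a single step (least squares = a linear model)
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

pred = H @ beta                      # fitted values on the training inputs
```

There is no iterative gradient-based training at all; the only "learning" is the closed-form linear solve in step 2, which is what the paragraph above means by learning in a single step.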
Swarm behaviour, or swarming, is a collective behaviour exhibited by entities, particularly animals, of similar size which aggregate together, perhaps milling about the same spot or perhaps moving en masse or migrating in some direction. It is a highly interdisciplinary topic. As a term, swarming is applied particularly to insects, but can also be applied to any other entity or animal that exhibits swarm behaviour. The term flocking or murmuration can refer specifically to swarm behaviour in birds, herding to refer to swarm behaviour in tetrapods, and shoaling or schooling to refer to swarm behaviour in fish. Phytoplankton also gather in huge swarms called blooms, although these organisms are algae and are not self-propelled the way animals are. By extension, the term swarm is applied also to inanimate entities which exhibit parallel behaviours, as in a robot swarm, an earthquake swarm, or a swarm of stars.
A complex adaptive system is a system that is complex in that it is a dynamic network of interactions, but the behavior of the ensemble may not be predictable according to the behavior of the components. It is adaptive in that the individual and collective behavior mutate and self-organize corresponding to the change-initiating micro-event or collection of events.[1][2][3] It is a complex macroscopic collection of relatively similar and partially connected micro-structures formed in order to adapt to the changing environment and increase their survivability as a macro-structure.[1][2][4] The Complex Adaptive Systems approach builds on replicator dynamics.[5]
Artificial consciousness[1] (AC), also known as machine consciousness (MC) or synthetic consciousness (Gamez 2008; Reggia 2013), is a field related to artificial intelligence and cognitive robotics. The aim of the theory of artificial consciousness is to "define that which would have to be synthesized were consciousness to be found in an engineered artifact" (Aleksander 1995).
A genetic operator is an operator used in genetic algorithms to guide the algorithm towards a solution to a given problem. There are three main types of operators (mutation, crossover and selection), which must work in conjunction with one another in order for the algorithm to be successful. Genetic operators are used to create and maintain genetic diversity (mutation operator), combine existing solutions (also known as chromosomes) into new solutions (crossover) and select between solutions (selection).[1] In his book discussing the use of genetic programming for the optimization of complex problems, computer scientist John Koza has also identified an 'inversion' or 'permutation' operator; however, the effectiveness of this operator has never been conclusively demonstrated and this operator is rarely discussed.[2][3]
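The three operator types can be sketched on a toy "one-max" problem (maximise the number of 1-bits); the operator forms below are textbook choices (bit-flip mutation, single-point crossover, tournament selection), not tied to any specific GA library.

```python
import random

random.seed(1)

def mutate(bits, rate=0.05):
    return [b ^ (random.random() < rate) for b in bits]   # bit-flip mutation

def crossover(a, b):
    point = random.randrange(1, len(a))                   # single-point crossover
    return a[:point] + b[point:]

def select(pop, fitness, k=3):
    return max(random.sample(pop, k), key=fitness)        # tournament selection

fitness = sum                                             # count of 1-bits
pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for _ in range(60):
    pop = [mutate(crossover(select(pop, fitness), select(pop, fitness)))
           for _ in range(len(pop))]
best = max(map(fitness, pop))
```

Selection pressures the population toward fitter chromosomes, crossover recombines them, and mutation keeps supplying the diversity that selection consumes.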
Memory consolidation is a category of processes that stabilize a memory trace after its initial acquisition.[1] A memory trace is a change in the nervous system caused by memorizing something. Consolidation is distinguished into two specific processes. The first, synaptic consolidation, which is thought to correspond to late-phase long-term potentiation,[2] occurs on a small scale in the synaptic connections and neural circuits within the first few hours after learning. The second process is systems consolidation, occurring on a much larger scale in the brain, rendering hippocampus-dependent memories independent of the hippocampus over a period of weeks to years. Recently, a third process has become the focus of research, reconsolidation, in which previously consolidated memories can be made labile again through reactivation of the memory trace.[3][4]
In neuroscience, long-term potentiation (LTP) is a persistent strengthening of synapses based on recent patterns of activity. These are patterns of synaptic activity that produce a long-lasting increase in signal transmission between two neurons. The opposite of LTP is long-term depression, which produces a long-lasting decrease in synaptic strength.
Interactive evolutionary computation
Interactive evolutionary computation (IEC) or aesthetic selection is a general term for methods of evolutionary computation that use human evaluation. Usually human evaluation is necessary when the form of the fitness function is not known (for example, visual appeal or attractiveness; as in Dawkins, 1986[1]) or the result of optimization should fit a particular user preference (for example, taste of coffee or color set of the user interface).
Acclimatisation is the process by which the nervous system stops responding to a stimulus as a result of repeated transmission across a synapse. Acclimatisation is believed to occur when the synaptic knob of the presynaptic neuron runs out of neurotransmitter-containing vesicles due to overuse over a short period of time. A synapse that has undergone acclimatisation is said to be fatigued.[1]
Glia, also called glial cells or neuroglia, are non-neuronal cells in the central nervous system and the peripheral nervous system that do not produce electrical impulses. They maintain homeostasis, form myelin in the peripheral nervous system, and provide support and protection for neurons. In the central nervous system, glial cells include oligodendrocytes, astrocytes, ependymal cells, and microglia, and in the peripheral nervous system glial cells include Schwann cells and satellite cells. They have four main functions: (1) to surround neurons and hold them in place; (2) to supply nutrients and oxygen to neurons; (3) to insulate one neuron from another; (4) to destroy pathogens and remove dead neurons. They also play a role in neurotransmission and synaptic connections, and in physiological processes like breathing. While glia were thought to outnumber neurons by a ratio of 10:1, recent studies using newer methods and reappraisal of historical quantitative evidence suggest an overall ratio of less than 1:1, with substantial variation between different brain tissues.
A neural network is a network or circuit of neurons, or in a modern sense, an artificial neural network, composed of artificial neurons or nodes. Thus a neural network is either a biological neural network, made up of biological neurons, or an artificial neural network, for solving artificial intelligence (AI) problems. The connections of the biological neuron are modeled in artificial neural networks as weights between nodes. A positive weight reflects an excitatory connection, while negative values mean inhibitory connections. All inputs are modified by a weight and summed. This activity is referred to as a linear combination. Finally, an activation function controls the amplitude of the output. For example, an acceptable range of output is usually between 0 and 1, or it could be −1 and 1.
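The weighted sum and activation described above amount to a single artificial neuron; the sigmoid and the example values below are illustrative.

```python
import numpy as np

def neuron(inputs, weights, bias):
    z = np.dot(inputs, weights) + bias       # weighted sum (linear combination)
    return 1.0 / (1.0 + np.exp(-z))          # sigmoid keeps output in (0, 1)

# Positive weight = excitatory connection, negative weight = inhibitory.
out = neuron(np.array([1.0, 0.5]), np.array([2.0, -1.0]), bias=0.0)
```

Swapping the sigmoid for tanh would give the alternative (−1, 1) output range mentioned above.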
A binding neuron (BN) is an abstract concept of processing of input impulses in a generic neuron based on their temporal coherence and the level of neuronal inhibition. Mathematically, the concept may be implemented by most neuronal models including the well-known leaky integrate-and-fire model. The BN concept originated in 1996 and 1998 papers by A. K. Vidybida,[1][2]
Bayesian approaches to brain function
Bayesian approaches to brain function investigate the capacity of the nervous system to operate in situations of uncertainty in a fashion that is close to the optimal prescribed by Bayesian statistics.[1][2] This term is used in behavioural sciences and neuroscience and studies associated with this term often strive to explain the brain's cognitive abilities based on statistical principles. It is frequently assumed that the nervous system maintains internal probabilistic models that are updated by neural processing of sensory information using methods approximating those of Bayesian probability.[3][4]
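A standard worked example of such optimality is reliability-weighted combination of two Gaussian sensory estimates, where each cue is weighted by its precision (inverse variance); the stimulus values below are illustrative.

```python
def fuse(mu1, var1, mu2, var2):
    w1 = (1.0 / var1) / (1.0 / var1 + 1.0 / var2)   # weight by precision
    mu = w1 * mu1 + (1.0 - w1) * mu2                # combined estimate
    var = 1.0 / (1.0 / var1 + 1.0 / var2)           # combined uncertainty
    return mu, var

# A reliable cue (variance 1.0) dominates an unreliable one (variance 4.0),
# and the fused variance is lower than either cue alone.
mu, var = fuse(0.0, 1.0, 1.0, 4.0)
```

The fused estimate lands closer to the reliable cue, which is the qualitative signature reported in many cue-combination experiments.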
Computational neurogenetic modeling
Computational neurogenetic modeling (CNGM) is concerned with the study and development of dynamic neuronal models for modeling brain functions with respect to genes and dynamic interactions between genes. These include neural network models and their integration with gene network models. This area brings together knowledge from various scientific disciplines, such as computer and information science, neuroscience and cognitive science, genetics and molecular biology, as well as engineering.
In mathematics and science, a nonlinear system is a system in which the change of the output is not proportional to the change of the input. Nonlinear problems are of interest to engineers, biologists, physicists, mathematicians, and many other scientists because most systems are inherently nonlinear in nature. Nonlinear dynamical systems, describing changes in variables over time, may appear chaotic, unpredictable, or counterintuitive, contrasting with much simpler linear systems.
Connectomics is the production and study of connectomes: comprehensive maps of connections within an organism's nervous system, typically its brain or eye. Because these structures are extremely complex, methods within this field use a high-throughput application of neural imaging and histological techniques in order to increase the speed, efficiency, and resolution of maps of the multitude of neural connections in a nervous system. While the principal focus of such a project is the brain, any neural connections could theoretically be mapped by connectomics, including, for example, neuromuscular junctions.[1] This study is sometimes referred to by its previous name of hodology.
Winner-take-all is a computational principle applied in computational models of neural networks by which neurons in a layer compete with each other for activation. In the classical form, only the neuron with the highest activation stays active while all other neurons shut down; however, other variations allow more than one neuron to be active, for example the soft winner take-all, by which a power function is applied to the neurons.
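Both variants described above can be sketched in a few lines; the exponent in the soft variant is an illustrative choice.

```python
import numpy as np

def hard_wta(a):
    out = np.zeros_like(a)
    out[np.argmax(a)] = a.max()          # only the strongest unit stays active
    return out

def soft_wta(a, p=3):
    powered = a ** p                     # power function sharpens the contrast
    return powered / powered.sum()

a = np.array([0.2, 0.5, 0.3])
hard = hard_wta(a)
soft = soft_wta(a)
```

The soft version preserves the activation ordering while letting weaker units stay partially active.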
Connectionist models
Connectionist models of the mind (a subclass of which is neural networks) can be used to model a number of different behaviors, including language acquisition. They consist of a number of different nodes that interact via weighted connections, which the system can adjust in different ways, the most common being backpropagation of error.
Phase resetting in neurons is a behavior observed in different biological oscillators and plays a role in creating neural synchronization as well as different processes within the body. Phase resetting in neurons is when the dynamical behavior of an oscillation is shifted. This occurs when a stimulus perturbs the phase within an oscillatory cycle and a change in period occurs. The periods of these oscillations vary with the biological system: (1) neural responses can change within a millisecond to quickly relay information; (2) cardiac and respiratory rhythms, which change throughout the day, have periods on the order of seconds; (3) circadian rhythms vary over a series of days; (4) rhythms such as hibernation have periods measured in years. This activity pattern of neurons is a phenomenon seen in various neural circuits throughout the body and is seen in single neuron models and within clusters of neurons. Many of these models utilize phase response (resetting) curves where the oscillation of a neuron is perturbed and the effect the perturbation has on the phase cycle of a neuron is measured.
In computer science and machine learning, cellular neural networks (CNN) or cellular nonlinear networks (CNN) are a parallel computing paradigm similar to neural networks, with the difference that communication is allowed between neighbouring units only. Typical applications include image processing, analyzing 3D surfaces, solving partial differential equations, reducing non-visual problems to geometric maps, modelling biological vision and other sensory-motor organs.[1]
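The defining constraint (communication between neighbouring units only) can be sketched as a grid update in which each cell's next state depends solely on its 3×3 neighbourhood; local averaging is used here as an illustrative update rule, not a canonical CNN template.

```python
import numpy as np

def cnn_step(grid):
    padded = np.pad(grid, 1, mode="edge")
    out = np.zeros_like(grid)
    for i in range(grid.shape[0]):
        for j in range(grid.shape[1]):
            out[i, j] = padded[i:i + 3, j:j + 3].mean()  # 3x3 window only
    return out

grid = np.zeros((5, 5))
grid[2, 2] = 9.0
smoothed = cnn_step(grid)   # the point source spreads only to its neighbours
```

In one step the influence of a cell reaches only its immediate neighbours, which is what distinguishes this paradigm from fully connected networks.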
A cognitive map is a type of mental representation which serves an individual to acquire, code, store, recall, and decode information about the relative locations and attributes of phenomena in their everyday or metaphorical spatial environment. The concept was introduced by Edward Tolman in 1948. The concept was used to explain the behavior of rats that appeared to learn the spatial layout of a maze, and subsequently the concept was applied to other animals, including humans. The term was later generalized by some researchers, especially in the field of operations research, to refer to a kind of semantic network representing an individual's personal knowledge or schemas.
BCM theory, BCM synaptic modification, or the BCM rule, named for Elie Bienenstock, Leon Cooper, and Paul Munro, is a physical theory of learning in the visual cortex developed in 1981. The BCM model proposes a sliding threshold for long-term potentiation (LTP) or long-term depression (LTD) induction, and states that synaptic plasticity is stabilized by a dynamic adaptation of the time-averaged postsynaptic activity. According to the BCM model, when a pre-synaptic neuron fires, the post-synaptic neuron will tend to undergo LTP if it is in a high-activity state (e.g., firing at high frequency and/or with high internal calcium concentrations), or LTD if it is in a lower-activity state (e.g., firing at low frequency, with low internal calcium concentrations).[1] This theory is often used to explain how cortical neurons can undergo either LTP or LTD depending on the conditioning stimulus protocol applied to pre-synaptic neurons (usually high-frequency stimulation, or HFS, for LTP, or low-frequency stimulation, LFS, for LTD).[2]
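A one-synapse sketch of the sliding-threshold idea, assuming the common form dw ∝ x·y·(y − θ) with θ tracking the time-averaged squared postsynaptic activity; the learning rate, time constant, and exact threshold dynamics here are illustrative choices.

```python
def bcm_step(w, x, theta, lr=0.01, tau=0.1):
    y = w * x                                # postsynaptic response
    w = w + lr * x * y * (y - theta)         # LTP if y > theta, LTD if y < theta
    theta = theta + tau * (y ** 2 - theta)   # sliding modification threshold
    return w, theta

w, theta = 0.5, 0.2
for _ in range(200):
    w, theta = bcm_step(w, 1.0, theta)
# With steady input, the weight grows while y > theta, and the rising
# threshold eventually stabilises the synapse rather than letting it
# grow without bound.
```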
The cocktail party effect is the phenomenon of the brain's ability to focus one's auditory attention on a particular stimulus while filtering out a range of other stimuli, such as when a partygoer can focus on a single conversation in a noisy room. Listeners have the ability to both segregate different stimuli into different streams, and subsequently decide which streams are most pertinent to them. Thus, it has been proposed that one's sensory memory subconsciously parses all stimuli and identifies discrete pieces of information by classifying them by salience. This effect is what allows most people to tune into a single voice and tune out all others. This phenomenon is often described in terms of selective attention or selective hearing. It may also describe a similar phenomenon that occurs when one may immediately detect words of importance originating from unattended stimuli, for instance hearing one's name among a wide range of auditory input.
Shunting inhibition, also known as divisive inhibition, is a form of postsynaptic potential inhibition that can be represented mathematically as reducing the excitatory potential by division, rather than linear subtraction.[1] The term shunting is used because the synaptic conductance short-circuits currents that are generated at adjacent excitatory synapses. If a shunting inhibitory synapse is activated, the input resistance is reduced locally. The amplitude of subsequent excitatory postsynaptic potential (EPSP) is reduced by this, in accordance with Ohm's Law.[2] This simple scenario arises if the inhibitory synaptic reversal potential is identical to the resting potential.[citation needed]
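The divisive effect follows directly from Ohm's law: adding an inhibitory conductance lowers the input resistance, so the same synaptic current produces a smaller EPSP. The conductance values below are illustrative.

```python
def epsp_amplitude(i_syn, g_leak, g_shunt):
    return i_syn / (g_leak + g_shunt)        # V = I / G (Ohm's law)

control = epsp_amplitude(1.0, g_leak=1.0, g_shunt=0.0)
shunted = epsp_amplitude(1.0, g_leak=1.0, g_shunt=1.0)  # conductance doubled
```

Doubling the total conductance halves the EPSP, a division rather than a fixed subtraction.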
Spreading activation is a method for searching associative networks, biological and artificial neural networks, or semantic networks. The search process is initiated by labeling a set of source nodes (e.g. concepts in a semantic network) with weights or activation and then iteratively propagating or spreading that activation out to other nodes linked to the source nodes. Most often these weights are real values that decay as activation propagates through the network. When the weights are discrete this process is often referred to as marker passing. Activation may originate from alternate paths, identified by distinct markers, and terminate when two alternate paths reach the same node. However brain studies show that several different brain areas play an important role in semantic processing.[1]
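The label-and-propagate procedure can be sketched over a toy semantic network; the node names, decay value, and step count below are illustrative assumptions.

```python
graph = {
    "dog": ["animal", "bark"],
    "animal": ["cat", "dog"],
    "cat": ["animal", "meow"],
    "bark": [],
    "meow": [],
}

def spread(graph, sources, decay=0.5, steps=2):
    activation = {node: 0.0 for node in graph}
    activation.update(sources)                    # label the source nodes
    for _ in range(steps):
        incoming = dict(activation)
        for node, a in activation.items():
            for neighbour in graph[node]:         # propagate with decay
                incoming[neighbour] += decay * a
        activation = incoming
    return activation

act = spread(graph, {"dog": 1.0})   # "cat" is reached indirectly via "animal"
```

Activation decays with each hop, so directly linked concepts end up more active than distant ones.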
Working memory is a cognitive system with a limited capacity that can hold information temporarily.[1] Working memory is important for reasoning and the guidance of decision-making and behavior.[2][3] Working memory is often used synonymously with short-term memory, but some theorists consider the two forms of memory distinct, assuming that working memory allows for the manipulation of stored information, whereas short-term memory only refers to the short-term storage of information.[2][4] Working memory is a theoretical concept central to cognitive psychology, neuropsychology, and neuroscience.
Theta waves generate the theta rhythm, a neural oscillation in the brain that underlies various aspects of cognition and behavior, including learning, memory, and spatial navigation in many animals.[1][2] It can be recorded using various electrophysiological methods, such as electroencephalogram (EEG), recorded either from inside the brain or from electrodes attached to the scalp.
Unconscious inference (German: unbewusster Schluss), also referred to as unconscious conclusion,[1] is a term of perceptual psychology coined in 1867 by the German physicist and polymath Hermann von Helmholtz to describe an involuntary, pre-rational and reflex-like mechanism which is part of the formation of visual impressions. While precursory notions have been identified in the writings of Thomas Hobbes, Robert Hooke, and Francis North[2] (especially in connection with auditory perception) as well as in Francis Bacon's Novum Organum,[3] Helmholtz's theory was long ignored or even dismissed by philosophy and psychology.[4] It has since received new attention from modern research, and the work of recent scholars has approached Helmholtz's view.
Semantic memory is one of the two types of explicit memory (or declarative memory) (our memory of facts or events that is explicitly stored and retrieved).[1] Semantic memory refers to general world knowledge that we have accumulated throughout our lives.[2] This general knowledge (facts, ideas, meaning and concepts) is intertwined in experience and dependent on culture. Semantic memory is distinct from episodic memory, which is our memory of experiences and specific events that occur during our lives, which we can recreate at any given point.[3] For instance, semantic memory might contain information about what a cat is, whereas episodic memory might contain a specific memory of petting a particular cat. We can learn about new concepts by applying our knowledge learned from things in the past.[4] The counterpart to declarative or explicit memory is nondeclarative memory or implicit memory.[5]
Stimulus modality, also called sensory modality, is one aspect of a stimulus or what is perceived after a stimulus. For example, the temperature modality is registered after heat or cold stimulate a receptor. Some sensory modalities include: light, sound, temperature, taste, pressure, and smell. The type and location of the sensory receptor activated by the stimulus plays the primary role in coding the sensation. All sensory modalities work together to heighten stimuli sensation when necessary.[1]
The forgetting curve hypothesizes the decline of memory retention over time. This curve shows how information is lost over time when there is no attempt to retain it. A related concept is the strength of memory, which refers to the durability of memory traces in the brain. The stronger the memory, the longer a person is able to recall it. A typical graph of the forgetting curve purports to show that humans tend to halve their memory of newly learned knowledge in a matter of days or weeks unless they consciously review the learned material.
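The curve is often modelled in the Ebbinghaus style as exponential retention, R = exp(−t/S); the strength parameter S (in days here) is an assumed stand-in for memory strength, with larger S meaning slower forgetting.

```python
import math

def retention(t_days, strength):
    return math.exp(-t_days / strength)

weak = retention(2.0, strength=1.0)     # weak trace: mostly gone in two days
strong = retention(2.0, strength=5.0)   # strong trace: most of it survives
```

Under this model, review increases S, which stretches the curve and slows the loss.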
Epigenetics in learning and memory
While the cellular and molecular mechanisms of learning and memory have long been a central focus of neuroscience, it is only in recent years that attention has turned to the epigenetic mechanisms behind the dynamic changes in gene transcription responsible for memory formation and maintenance. Epigenetic gene regulation often involves the physical marking (chemical modification) of DNA or associated proteins to cause or allow long-lasting changes in gene activity. Epigenetic mechanisms such as DNA methylation and histone modifications (methylation, acetylation, and deacetylation) have been shown to play an important role in learning and memory.[1]
Sensitization is a non-associative learning process in which repeated administration of a stimulus results in the progressive amplification of a response.[1] Sensitization often is characterized by an enhancement of response to a whole class of stimuli in addition to the one that is repeated. For example, repetition of a painful stimulus may make one more responsive to a loud noise.
Mind-wandering (sometimes referred to as task unrelated thought, or, colloquially, autopilot) is the experience of thoughts not remaining on a single topic for a long period of time, particularly when people are engaged in an attention-demanding task.[1]
The salience (also called saliency) of an item is the state or quality by which it stands out from its neighbors. Saliency detection is considered to be a key attentional mechanism that facilitates learning and survival by enabling organisms to focus their limited perceptual and cognitive resources on the most pertinent subset of the available sensory data.
The wake-sleep algorithm is an unsupervised learning algorithm for a stochastic multilayer neural network. The algorithm adjusts the parameters so as to produce a good density estimator. There are two learning phases, the “wake” phase and the “sleep” phase, which are performed alternately. It was first designed as a model for brain functioning using variational Bayesian learning. After that, the algorithm was adapted to machine learning. It can be viewed as a way to train a Helmholtz Machine. It can also be used in Deep Belief Networks (DBN).
An attractor network is a type of recurrent dynamical network that evolves toward a stable pattern over time. Nodes in the attractor network converge toward a pattern that may either be fixed-point (a single state), cyclic (with regularly recurring states), chaotic (locally but not globally unstable) or random (stochastic).[1] Attractor networks have largely been used in computational neuroscience to model neuronal processes such as associative memory[2] and motor behavior, as well as in biologically inspired methods of machine learning. An attractor network contains a set of n nodes, which can be represented as vectors in a d-dimensional space where n>d. Over time, the network state tends toward one of a set of predefined states on a d-manifold; these are the attractors.
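A fixed-point attractor used as associative memory can be sketched with a tiny Hopfield-style network: Hebbian weights store one pattern, and relaxation pulls a corrupted state back to the stored attractor. The pattern and step count are illustrative.

```python
import numpy as np

pattern = np.array([1, -1, 1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)   # Hebbian outer-product weights
np.fill_diagonal(W, 0.0)                       # no self-connections

state = pattern.copy()
state[0] = -state[0]                           # corrupt one bit
for _ in range(3):                             # synchronous relaxation steps
    state = np.sign(W @ state)                 # converges to the stored pattern
```

Starting anywhere in the attractor's basin, the dynamics restore the full stored pattern, which is the sense in which the network performs associative recall.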
In artificial intelligence, artificial immune systems (AIS) are a class of computationally intelligent, rule-based machine learning systems inspired by the principles and processes of the vertebrate immune system. The algorithms are typically modeled after the immune system's characteristics of learning and memory for use in problem-solving.
Homosynaptic plasticity is one type of synaptic plasticity. Homosynaptic plasticity is input-specific, meaning changes in synapse strength occur only at post-synaptic targets specifically stimulated by a pre-synaptic target. Therefore, the spread of the signal from the pre-synaptic cell is localized.
A dual-task paradigm is a procedure in experimental neuropsychology that requires an individual to perform two tasks simultaneously, in order to compare performance with single-task conditions. When performance scores on one and/or both tasks are lower when they are done simultaneously compared to separately, these two tasks interfere with each other, and it is assumed that both tasks compete for the same class of information processing resources in the brain.
Memory processes describes ways to classify memories, based on duration, nature and retrieval of information.
Biased competition theory advocates the idea that each object in the visual field competes for cortical representation and cognitive processing.[1] This theory suggests that visual processing can be biased by other mental processes such as bottom-up and top-down systems which prioritize certain features of an object or whole items for attention and further processing. Biased competition theory is, simply stated, the competition of objects for processing. This competition can be biased, often toward the object that is currently attended in the visual field, or alternatively toward the object most relevant to behavior.
Plant cognition or plant gnosophysiology[1] is the study of the mental capacities of plants.[2] It explores the idea that plants are capable of responding to and learning from stimuli in their surroundings in order to choose and make decisions that are most appropriate to ensure survival. Over recent years, experimental evidence for the cognitive nature of plants has grown rapidly and has revealed the extent to which plants can use senses and cognition to respond to their environments.[3] Some researchers claim that plants process information in similar ways as animal nervous systems.[4][5]
An oscillatory neural network (ONN) is an artificial neural network that uses coupled oscillators as neurons. Oscillatory neural networks are closely linked to the Kuramoto model, and are inspired by the phenomenon of neural oscillations in the brain. Oscillatory neural networks have been trained to recognize images.[1] An oscillatory autoencoder has also been demonstrated, which uses a combination of oscillators and rate-coded neurons.[2]
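The link to the Kuramoto model can be made concrete with its standard form, dθi/dt = ωi + (K/n) Σj sin(θj − θi); the coupling strength, frequency spread, and step count below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K, dt = 20, 2.0, 0.05
omega = rng.normal(0.0, 0.1, n)              # natural frequencies
theta = rng.uniform(0.0, 2.0 * np.pi, n)     # random initial phases

def order_parameter(theta):
    return abs(np.mean(np.exp(1j * theta)))  # r -> 1 means full synchrony

for _ in range(400):                         # Euler integration
    coupling = (K / n) * np.sin(theta[:, None] - theta[None, :]).sum(axis=0)
    theta = theta + dt * (omega + coupling)
r = order_parameter(theta)
```

With coupling well above the critical value for this frequency spread, the phases lock and the order parameter approaches 1, the synchronisation phenomenon these networks exploit.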
Emergent evolution is the hypothesis that, in the course of evolution, some entirely new properties, such as mind and consciousness, appear at certain critical points, usually because of an unpredictable rearrangement of the already existing entities. The term was originated by the psychologist C. Lloyd Morgan in 1922 in his Gifford Lectures at St. Andrews, which would later be published as the 1923 book Emergent Evolution.[1][2]
Von Neumann universal constructor
John von Neumann's universal constructor is a self-replicating machine in a cellular automaton (CA) environment. It was designed in the 1940s, without the use of a computer. The fundamental details of the machine were published in von Neumann's book Theory of Self-Reproducing Automata, completed in 1966 by Arthur W. Burks after von Neumann's death. While typically not as well known as von Neumann's other work, it is regarded as foundational for automata theory, complex systems, and artificial life. Indeed, Nobel laureate Sydney Brenner considered von Neumann's work on self-reproducing automata central to biological theory as well, allowing us to "discipline our thoughts about machines, both natural and artificial".
In artificial intelligence (AI) and philosophy, the AI control problem is the issue of how to build a superintelligent agent that will aid its creators, and avoid inadvertently building a superintelligence that will harm its creators. Its study is motivated by the notion that humanity will have to solve the control problem before any superintelligence is created, as a poorly designed superintelligence might rationally decide to seize control over its environment and refuse to permit its creators to modify it after launch.[1] In addition, some scholars argue that solutions to the control problem, alongside other advances in AI safety engineering,[2] might also find applications in existing non-superintelligent AI.[3]
Irreducible complexity (IC) is the argument that certain biological systems cannot have evolved by successive small modifications to pre-existing functional systems through natural selection, because no less complex system would function. Irreducible complexity has become central to the creationist concept of intelligent design, but the scientific community regards intelligent design as pseudoscience and rejects the concept of irreducible complexity. Irreducible complexity is one of two main arguments used by intelligent-design proponents, alongside specified complexity.
In neuroscience, the critical brain hypothesis states that certain biological neuronal networks work near phase transitions.[1][2][3][4] Experimental recordings from large groups of neurons have shown bursts of activity, so-called neuronal avalanches, with sizes that follow a power law distribution. These results, and subsequent replication on a number of settings, led to the hypothesis that the collective dynamics of large neuronal networks in the brain operates close to the critical point of a phase transition.[5] According to this hypothesis, the activity of the brain would be continuously transitioning between two phases, one in which activity will rapidly reduce and die, and another where activity will build up and amplify over time.[5] In criticality, the brain capacity for information processing is enhanced,[5][6][7][8] so subcritical, critical and slightly supercritical branching process of thoughts could describe how human and animal minds function.[1]
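The contrast between subcritical and critical dynamics can be sketched with a simple branching process in which each active unit recruits 0-2 successors with mean σ (the branching ratio); the offspring scheme and sample sizes below are illustrative, not a biophysical model.

```python
import random

random.seed(0)

def avalanche_size(sigma, max_size=10_000):
    active, size = 1, 0
    while active and size < max_size:
        size += active
        # each active unit tries to activate two successors, mean = sigma
        active = sum(1 for _ in range(2 * active) if random.random() < sigma / 2)
    return size

subcritical = [avalanche_size(0.5) for _ in range(500)]   # activity dies quickly
critical = [avalanche_size(1.0) for _ in range(500)]      # heavy-tailed sizes
```

At σ = 1 the avalanche-size distribution becomes heavy-tailed (a power law in the idealised process), which is the signature reported for neuronal avalanches.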
An unorganized machine is a concept mentioned in a 1948 report in which Alan Turing suggested that the infant human cortex was what he called an unorganised machine.[1][2]
Collective intelligence (CI) is shared or group intelligence that emerges from the collaboration, collective efforts, and competition of many individuals and appears in consensus decision making. The term appears in sociobiology, political science and in context of mass peer review and crowdsourcing applications. It may involve consensus, social capital and formalisms such as voting systems, social media and other means of quantifying mass activity. Collective IQ is a measure of collective intelligence, although it is often used interchangeably with the term collective intelligence. Collective intelligence has also been attributed to bacteria and animals.
A superorganism or supraorganism is a group of synergetically interacting organisms of the same species. A community of synergetically interacting organisms of different species is called a holobiont.
Distributed cognition is an approach to cognitive science research that was developed by cognitive anthropologist Edwin Hutchins during the 1990s.
Singularitarianism is a movement defined by the belief that a technological singularity—the creation of superintelligence—will likely happen in the medium future, and that deliberate action ought to be taken to ensure that the singularity benefits humans.
A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. Superintelligence may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.
Existential risk from artificial general intelligence
Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could someday result in human extinction or some other unrecoverable global catastrophe.[1][2][3] It is argued that the human species currently dominates other species because the human brain has some distinctive capabilities that other animals lack. If AI surpasses humanity in general intelligence and becomes superintelligent, then it could become difficult or impossible for humans to control. Just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence.[4]
Neural modeling field (NMF) is a mathematical framework for machine learning which combines ideas from neural networks, fuzzy logic, and model based recognition. It has also been referred to as modeling fields, modeling fields theory (MFT), maximum likelihood artificial neural networks (MLANS).[1][2][3][4][5][6] This framework has been developed by Leonid Perlovsky at the AFRL. NMF is interpreted as a mathematical description of mind's mechanisms, including concepts, emotions, instincts, imagination, thinking, and understanding. NMF is a multi-level, hetero-hierarchical system. At each level in NMF there are concept-models encapsulating the knowledge; they generate so-called top-down signals, interacting with input, bottom-up signals. These interactions are governed by dynamic equations, which drive concept-model learning, adaptation, and formation of new concept-models for better correspondence to the input, bottom-up signals.
Brain simulation is the concept of creating a functioning computer model of a brain or part of a brain.[1] Brain simulation projects intend to contribute to a complete understanding of the brain, and eventually also assist the process of treating and diagnosing brain diseases.[2][3]
The neuron doctrine is the concept that the nervous system is made up of discrete individual cells, a discovery due to the decisive neuro-anatomical work of Santiago Ramón y Cajal and later presented by, among others, H. Waldeyer-Hartz. The term neuron was itself coined by Waldeyer as a way of identifying the cells in question. The neuron doctrine, as it became known, served to position neurons as special cases under the broader cell theory developed some decades earlier. Waldeyer appropriated the concept not from his own research but from the disparate observations in the histological work of Albert von Kölliker, Camillo Golgi, Franz Nissl, Santiago Ramón y Cajal, Auguste Forel and others.
Activity-dependent plasticity is a form of functional and structural neuroplasticity that arises from the use of cognitive functions and personal experience;[1] hence, it is the biological basis for learning and the formation of new memories.[1][2] Activity-dependent plasticity is a form of neuroplasticity that arises from intrinsic or endogenous activity, as opposed to forms of neuroplasticity that arise from extrinsic or exogenous factors, such as electrical brain stimulation- or drug-induced neuroplasticity.[1] The brain's ability to remodel itself forms the basis of its capacity to retain memories, improve motor function, and enhance comprehension and speech, among other things. This capacity to form and retain memories is associated with neural plasticity and thus underlies many of the functions individuals perform on a daily basis.[3] This plasticity occurs as a result of changes in gene expression which are triggered by signaling cascades that are activated by various signaling molecules (e.g., calcium, dopamine, and glutamate, among many others) during increased neuronal activity.[4]
Memory is the faculty of the brain by which data or information is encoded, stored, and retrieved when needed. It is the retention of information over time for the purpose of influencing future action. If past events could not be remembered, it would be impossible for language, relationships, or personal identity to develop. Memory loss is usually described as forgetfulness or amnesia.
Sentience is the capacity to experience feelings and sensations.[1] The word was coined by philosophers in the 1630s for the concept of an ability to feel, derived from Latin sentientem (a feeling),[2] to distinguish it from the ability to think (reason).[citation needed] In modern Western philosophy, sentience is the ability to experience sensations. In different Asian religions, the word 'sentience' has been used to translate a variety of concepts. In science fiction, the word sentience is sometimes used interchangeably with sapience, self-awareness, or consciousness.[3]
Animal consciousness, or animal awareness, is the quality or state of self-awareness within a non-human animal, or of being aware of an external object or something within itself. In humans, consciousness has been defined as: sentience, awareness, subjectivity, qualia, the ability to experience or to feel, wakefulness, having a sense of selfhood, and the executive control system of the mind. Despite the difficulty in definition, many philosophers believe there is a broadly shared underlying intuition about what consciousness is.
Philosophy of artificial intelligence
The philosophy of artificial intelligence is a branch of the philosophy of technology that explores artificial intelligence and its implications for knowledge and understanding of intelligence, ethics, consciousness, epistemology, and free will.[1] Furthermore, the field is concerned with the creation of artificial animals or artificial people (or, at least, artificial creatures; see artificial life), so the discipline is of considerable interest to philosophers.[2] These factors contributed to the emergence of the philosophy of artificial intelligence. Some scholars argue that the AI community's dismissal of philosophy is detrimental.[3]
Glossary of artificial intelligence
This glossary of artificial intelligence is a list of definitions of terms and concepts relevant to the study of artificial intelligence, its sub-disciplines, and related fields. Related glossaries include Glossary of computer science, Glossary of robotics, and Glossary of machine vision.
Cognitive psychology is the scientific study of mental processes such as attention, language use, memory, perception, problem solving, creativity, and reasoning.
The following outline is provided as an overview of and topical guide to thought (thinking):
Mathematical universe hypothesis
In physics and cosmology, the mathematical universe hypothesis (MUH), also known as the ultimate ensemble theory and struogony (from mathematical structure, Latin: struō), is a speculative theory of everything (TOE) proposed by cosmologist Max Tegmark.[1][2]
Feature integration theory is a theory of attention developed in 1980 by Anne Treisman and Garry Gelade that suggests that when perceiving a stimulus, features are registered early, automatically, and in parallel, while objects are identified separately and at a later stage in processing. The theory has been one of the most influential psychological models of human visual attention.
The binding problem is a term used at the interface between neuroscience, cognitive science and philosophy of mind that has multiple meanings.
The hard problem of consciousness is the problem of explaining why and how we have qualia[note 1] or phenomenal experiences.[2]
Synthetic intelligence (SI) is an alternative/opposite term for artificial intelligence emphasizing that the intelligence of machines need not be an imitation or in any way artificial; it can be a genuine form of intelligence.[1][2] John Haugeland proposes an analogy with simulated diamonds and synthetic diamonds—only the synthetic diamond is truly a diamond.[1] Synthetic means that which is produced by synthesis, combining parts to form a whole; colloquially, a man-made version of that which has arisen naturally. A synthetic intelligence would therefore be or appear man-made, but not a simulation.
Soft computing is a set of algorithms, including neural networks, fuzzy logic, and genetic algorithms. These algorithms are tolerant of imprecision, uncertainty, partial truth and approximation. It is contrasted with hard computing: algorithms that find provably correct and optimal solutions to problems.
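As an illustration of the evolutionary-algorithm strand of soft computing, the following minimal sketch evolves bit-string genomes toward a simple "one-max" fitness function (maximize the number of ones), accepting an approximate rather than provably optimal solution. The population size, mutation rate, and truncation selection used here are illustrative choices, not part of any standard.

```python
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=40, mutation_rate=0.1):
    """Minimal genetic algorithm over bit-string genomes."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                  # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)       # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < mutation_rate) for bit in child]  # bit-flip mutation
            children.append(child)
        pop = parents + children                        # elitism: parents survive
    return max(pop, key=fitness)

# "One-max": the ideal genome is all ones; a near-optimal result is acceptable.
best = evolve(fitness=sum)
```

Tolerance of approximation is visible in the stopping rule: the algorithm runs a fixed number of generations and returns the best genome found, with no guarantee of optimality.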
Simulated reality is the hypothesis that reality could be simulated—for example by quantum computer simulation—to a degree indistinguishable from true reality. It could contain conscious minds that may or may not know that they live inside a simulation. This is quite different from the current, technologically achievable concept of virtual reality, which is easily distinguished from the experience of actuality. Simulated reality, by contrast, would be hard or impossible to separate from true reality. There has been much debate over this topic, ranging from philosophical discourse to practical applications in computing.
Integrated information theory (IIT) attempts to provide a framework capable of explaining why some physical systems are conscious, why they feel the particular way they do in particular states, and what it would take for other physical systems to be conscious. In principle, once the theory is mature and has been tested extensively in controlled conditions, the IIT framework may be capable of providing a concrete inference about whether any physical system is conscious, to what degree it is conscious, and what particular experience it is having. In IIT, a system's consciousness is conjectured to be identical to its causal properties. Therefore it should be possible to account for the conscious experience of a physical system by unfolding its complete causal powers.