- More example sentences related to 网络函数 (network function)
-
A methodology for noise-like key generation is presented. The chaotic process of the tent-map function is used as a deterministic generator of a noise-like key, and image storage and retrieval are completed in algebraic form. A mathematical model of noise-like chaotic coding memory is constructed, and the basic mechanisms of circulant convolution and circulant correlation in image-information storage and retrieval are demonstrated.
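The tent-map iteration used above as a deterministic noise-like generator can be sketched as follows. The parameter mu, the burn-in length, and the byte quantization are illustrative assumptions, not the paper's actual settings.

```python
# Sketch of noise-like key generation with the tent map (illustrative only).

def tent_map(x, mu=1.9999):
    """One tent-map step; the map is chaotic for mu close to 2."""
    return mu * x if x < 0.5 else mu * (1.0 - x)

def noise_like_key(seed, n_bytes, burn_in=100):
    """Iterate the tent map and quantize each state to one byte."""
    x = seed
    for _ in range(burn_in):            # discard the transient
        x = tent_map(x)
    key = []
    for _ in range(n_bytes):
        x = tent_map(x)
        key.append(int(x * 256) % 256)  # crude 8-bit quantization
    return bytes(key)

key = noise_like_key(seed=0.3141, n_bytes=16)
print(key.hex())
```

Because the map is deterministic, the same seed always reproduces the same key, which is what makes the noise-like sequence usable as a key.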
Building on a detailed design method for 1-D mapping functions over digit sequences and letter sequences, the stable periodic orbits and unstable limit cycles of the constructed 1-D maps are used to implement storage and associative memory; to recognize the relevant input information, a method of direct control over the mapping function is proposed. On this basis, a 1-D-map chaotic neural network model is proposed. The network is not treated as a "black box": its parameter values are determined by the mapping function realized in the network, and the model provides a range of basic intelligent information-processing functions, including associative memory, fault tolerance, pattern recognition, and singular filtering.
-
Source code for function approximation with a BP neural network!
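A minimal sketch of the kind of program this entry refers to (not the original source): a one-hidden-layer BP network trained by gradient descent to approximate sin(x). The layer size, learning rate, and iteration count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
Y = np.sin(X)

H = 16                                    # hidden units (assumed)
W1 = rng.normal(0, 0.5, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)
lr = 0.05

def forward(X):
    A = np.tanh(X @ W1 + b1)              # hidden layer, tanh activation
    return A, A @ W2 + b2                 # linear output layer

for _ in range(2000):
    A, out = forward(X)
    err = out - Y                         # error signal (proportional to dL/dout)
    gW2 = A.T @ err / len(X); gb2 = err.mean(0)
    dA = (err @ W2.T) * (1 - A**2)        # backprop through tanh
    gW1 = X.T @ dA / len(X); gb1 = dA.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((forward(X)[1] - Y) ** 2).mean())
print(f"final MSE: {mse:.4f}")
```

For reference, a predictor that always outputs 0 has an MSE of 0.5 on this data, so anything well below that indicates the network has learned structure of sin(x).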
-
A function approximator such as an ANN can be viewed as a black box, and when it comes to FANN, this is more or less all you will need to know.
-
If we obtain the symbolic network function, computing the frequency characteristics (such as the magnitude response) from it is much faster than solving the network equations anew at every sampling point.
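The speed advantage is that, once the network function H(s) = N(s)/D(s) is known as a ratio of polynomials, each frequency sample costs only two polynomial evaluations instead of a fresh solve of the network equations. A sketch with a hypothetical first-order RC low-pass, H(s) = 1/(RCs + 1):

```python
import numpy as np

# Hypothetical component values for the example.
R, C = 1e3, 1e-6          # 1 kOhm, 1 uF
num = [1.0]               # numerator coefficients of H(s)
den = [R * C, 1.0]        # denominator coefficients of H(s)

w = np.logspace(1, 5, 400)                       # rad/s sample points
H = np.polyval(num, 1j * w) / np.polyval(den, 1j * w)
mag_db = 20 * np.log10(np.abs(H))                # magnitude response in dB

# At the corner frequency w = 1/RC the magnitude should be 1/sqrt(2).
wc = 1.0 / (R * C)
Hc = np.polyval(num, 1j * wc) / np.polyval(den, 1j * wc)
print(f"|H| at w=1/RC: {abs(Hc):.3f}")
```

All 400 sample points are handled by one vectorized evaluation of the symbolic form, which is the point the sentence above is making.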
-
Since network performance is also influenced by the processing function of the hidden neurons, a Chebyshev orthogonal neural network, in which the Chebyshev function is chosen as the hidden-unit processing function, is proposed, and the optimization algorithm under the uniform norm ‖x‖∞ is investigated.
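The Chebyshev functions mentioned above can be generated by the standard three-term recurrence T₀ = 1, T₁ = x, Tₙ₊₁ = 2xTₙ − Tₙ₋₁; in such a network each hidden unit applies one Tₙ to the (scaled) input. A minimal sketch — the network's actual input scaling and training procedure are not specified here:

```python
import numpy as np

def chebyshev_features(x, order):
    """Return [T_0(x), ..., T_order(x)] for x in [-1, 1]."""
    T = [np.ones_like(x), x]
    for _ in range(2, order + 1):
        T.append(2 * x * T[-1] - T[-2])   # three-term recurrence
    return np.stack(T[: order + 1])

x = np.linspace(-1, 1, 5)
print(chebyshev_features(x, 3))
```

The identity Tₙ(cos θ) = cos(nθ) gives a quick sanity check on the recurrence.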
-
Through the above study: 1. the theoretical analysis of the inherent fault tolerance of feed-forward neural networks is extended from hard-limit activation functions to differentiable activation functions; 2. using the above method, the fault tolerance of discrete Hopfield feedback neural networks is also analyzed, yielding many useful conclusions and computational formulas, which shows that the method applies not only to feed-forward but also to feedback neural networks; 3. for feed-forward neural networks with the sigmoid activation function, a Chebyshev-inequality method is presented.
-
The relationship between network topology and network convergence and stability is explored, and the causes of overtraining and overfitting are analyzed. Strategies for optimizing the neural network structure are proposed: the activation function is improved, and the concepts of redundant weights and redundant nodes are defined. To identify and remove them, a dual optimization objective function is adopted that combines the mean squared error of the network output with a network-complexity penalty function, together with an adaptive formula, based on the learning error and the weights, for dynamically adjusting the penalty factor. This method of BP-network structure optimization, named BPNSO, can determine the network structure for any application, and a complete computation flow is given.
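The dual objective described above, output error plus a complexity penalty, can be sketched as follows. The quadratic weight penalty and the fixed penalty factor here are stand-ins: the paper's actual penalty function and its adaptive adjustment formula are not reproduced.

```python
import numpy as np

def dual_objective(y_pred, y_true, weights, lam):
    """Mean squared error plus a (stand-in) network-complexity penalty."""
    mse = np.mean((y_pred - y_true) ** 2)
    complexity = sum(np.sum(W ** 2) for W in weights)  # sum of squared weights
    return mse + lam * complexity

y_true = np.array([0.0, 1.0])
y_pred = np.array([0.1, 0.9])
weights = [np.array([[0.5, -0.5]]), np.array([[1.0]])]

obj = dual_objective(y_pred, y_true, weights, lam=0.01)
print(obj)  # 0.01 (MSE) + 0.01 * 1.5 (penalty) = 0.025
```

Minimizing such an objective pushes near-zero weights toward zero, which is what makes redundant weights and nodes identifiable for deletion.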
-
It employs the complex-valued step function to build the neuron model, then constructs the complex Hopfield network structure and determines the number of neurons; it adopts complex-valued Hebbian rules and the inner-product method to calculate the network weight matrix; it then analyzes the convergence of the network using the definition of an energy function. In the image pre-processing stage, the 2-D discrete Fourier transform and Euler's formula are used to convert a grey-scale image into the phase information needed by the complex-valued neural network when it stores the traffic sign. In the post-processing stage, the inverse phase transform converts the phase image back into a grey-scale image.
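The phase-encoding step via Euler's formula can be sketched as follows. The linear mapping of grey levels onto [0, π] is an assumption; the work's exact scaling is not given.

```python
import numpy as np

def gray_to_phase(img):
    """Map grey levels [0, 255] to phases and store them as unit-magnitude
    complex numbers via Euler's formula e^{i*theta} = cos(theta) + i*sin(theta)."""
    theta = img.astype(float) / 255.0 * np.pi    # assumed range [0, pi]
    return np.exp(1j * theta)

def phase_to_gray(z):
    """Inverse phase transform: recover grey levels from the phase image."""
    theta = np.angle(z)
    return np.round(theta / np.pi * 255.0).astype(np.uint8)

img = np.array([[0, 128], [200, 255]], dtype=np.uint8)
restored = phase_to_gray(gray_to_phase(img))
print(restored)
```

Restricting the phases to [0, π] keeps the encoding invertible, since `np.angle` cannot distinguish θ from θ − 2π on the full circle.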
-
Application results show that the algorithm converges quickly and is stable. ③ A learning algorithm based on spline-function fitting is proposed: because spline functions have good flexibility and second-order smoothness, representing the network input functions and connection-weight functions with splines converts the search problem of training the network weight functions into the extremum problem of a multivariate function, so the process neural network (PNN) can be trained with existing neural-network learning algorithms. ④ A learning algorithm based on the Walsh-function transform is established; by exploiting the complete orthogonality of the Walsh function system, the computational complexity of the network can be greatly reduced.
-
Based on the characteristics of the CMAC neural network and fuzzy control, a novel fuzzy CMAC neural-network controller is presented that reflects the fuzziness and continuity of human cerebellar cognition. The controller uses Gaussian functions as the fuzzy membership functions and uses the neural network to perform fuzzy inference and to adjust the membership functions in real time, giving it learning and self-adaptive capability.
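The Gaussian membership function mentioned above has the standard form below; the center c and width sigma are the quantities the cited controller adjusts online, and the values here are illustrative only.

```python
import numpy as np

def gaussian_membership(x, c, sigma):
    """Membership grade of input x for a fuzzy set centered at c with width sigma."""
    return np.exp(-((x - c) ** 2) / (2 * sigma ** 2))

m_center = gaussian_membership(0.0, c=0.0, sigma=1.0)   # grade 1.0 at the center
m_offset = gaussian_membership(1.0, c=0.0, sigma=1.0)   # grade decays away from c
print(m_center, m_offset)
```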
- More glossary entries related to 网络函数 (network function)
-
hierarchical Boolean function:阶层式布尔函数
阶层式人工类神经网络 hierarchical artificial neural network | 阶层式布尔函数 hierarchical Boolean function | 阶层式编目结构 hierarchical cataloging structure
-
partially computable function:部分可计算函数
部分可计算 partially computable | 部分可计算函数 partially computable function | 部分连接网络 partially connected network
-
frustration network:窘组网络
frustration function 窘组函数 | frustration network 窘组网络 | frustrating configuration 窘组位形
-
frustration function:窘组函数
frustration plaquette 窘组嵌板 | frustration function 窘组函数 | frustration network 窘组网络
-
transfer function:传递函数
The network-structure part mainly sets the number of hidden-layer neurons, the transfer function, and the network training algorithm. A network trained with the sample (training) data can then be used for simulation tests. MATLAB scripts can be debugged in the MATLAB environment.
-
ergodic matrix / strong matrix / irreducible polynomial:单向函数 (one-way function)
网络安全 (network security): ergodic matrix | 单向函数 (one-way function): ergodic matrix, strong matrix, irreducible polynomial | 遍历性 (ergodicity): ergodic
-
network database language:网络数据库语言
network constant 电路常数 | network database language 网络数据库语言 | network function 网络函数
-
vector potential function:矢量位函数
势函数聚类:potential function clustering | 矢量位函数:vector potential function | 神经网络:Gaussian potential function network
-
simple energy function:简化能量函数
神经网络:Gaussian potential function network | 简化能量函数:simple energy function | 应变能方程:strain energy function
-
tanh:双曲正切函数 (hyperbolic tangent function)
The hidden neurons use the hyperbolic tangent (tanh) as their activation function, while the output neurons use the sigmoid function. Both are nonlinear, continuous functions, which allows the neural network to model nonlinear relationships between the input neurons and the output neurons.
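The two activations named above behave as follows: tanh squashes its input into (−1, 1) for the hidden neurons, and sigmoid into (0, 1) for the output neurons.

```python
import numpy as np

def sigmoid(x):
    """Logistic sigmoid, mapping any real input into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-2.0, 0.0, 2.0])
t = np.tanh(x)       # values in (-1, 1); tanh(0) = 0
s = sigmoid(x)       # values in (0, 1); sigmoid(0) = 0.5
print("tanh:   ", t)
print("sigmoid:", s)
```

Both functions are differentiable everywhere, which is what backpropagation requires of an activation function.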