ICode9

  • Deep Learning: Recurrent Neural Networks (Part 2) 2022-09-11 09:30:46

    Some classic RNN models... 1. Gated recurrent neural networks ⭐ Gated RNNs can better capture dependencies across long time-step distances in a sequence. The reset gate helps capture short-term dependencies in the sequence; the update gate helps capture long-term dependencies. When the reset gate is open, the gated recurrent unit contains the basic RNN; when the update gate is open, the gated recurrent unit can…
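    The gating behavior described above can be sketched as a single GRU time step in NumPy — a minimal sketch with illustrative shapes and random parameters, not the post's own code:

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def gru_step(x, h, params):
        """One GRU time step; parameter names and shapes are illustrative."""
        Wxr, Whr, br, Wxz, Whz, bz, Wxh, Whh, bh = params
        r = sigmoid(x @ Wxr + h @ Whr + br)              # reset gate: short-term dependencies
        z = sigmoid(x @ Wxz + h @ Whz + bz)              # update gate: long-term dependencies
        h_tilde = np.tanh(x @ Wxh + (r * h) @ Whh + bh)  # candidate state (reset gate scales old h)
        return z * h + (1 - z) * h_tilde                 # blend: z ~ 1 carries the old state forward

    # Tiny usage example on a length-5 random sequence.
    rng = np.random.default_rng(0)
    n_in, n_hid = 4, 3
    params = (rng.normal(size=(n_in, n_hid)), rng.normal(size=(n_hid, n_hid)), np.zeros(n_hid),
              rng.normal(size=(n_in, n_hid)), rng.normal(size=(n_hid, n_hid)), np.zeros(n_hid),
              rng.normal(size=(n_in, n_hid)), rng.normal(size=(n_hid, n_hid)), np.zeros(n_hid))
    h = np.zeros(n_hid)
    for x in rng.normal(size=(5, n_in)):
        h = gru_step(x, h, params)
    ```

    Note that with r ≈ 0 the candidate state ignores the old hidden state (reducing toward a plain RNN over recent inputs), while z ≈ 1 copies the old state across many steps.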

  • The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks2022-08-14 13:01:56

    Contents: overview, motivation, algorithm, some experimental results (MNIST + LeNet; CIFAR-10 + Conv + Dropout; CIFAR-10 + VGG/ResNet + lr decay + augmentation). Frankle J. and Carbin M. The lottery ticket hypothesis: finding sparse, trainable neural networks. In International Conference on Learning Representations (ICLR).
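    The hypothesis's core recipe — train, prune small-magnitude weights, then rewind the surviving weights to their original initialization — can be sketched as follows. This is a hedged one-shot sketch: the function name `magnitude_prune_mask` is my own, and a real experiment replaces the stand-in "training" update with actual SGD training:

    ```python
    import numpy as np

    def magnitude_prune_mask(weights, sparsity):
        """Keep the largest-magnitude fraction (1 - sparsity) of weights."""
        flat = np.abs(weights).ravel()
        k = int(len(flat) * sparsity)                  # number of weights to remove
        threshold = np.sort(flat)[k] if k > 0 else -np.inf
        return (np.abs(weights) >= threshold).astype(weights.dtype)

    # Lottery-ticket loop (sketch): save theta_0, train, prune, rewind to theta_0.
    rng = np.random.default_rng(42)
    w_init = rng.normal(size=(100,))                   # theta_0: saved initialization
    w_trained = w_init + rng.normal(size=100)          # stand-in for actual training
    mask = magnitude_prune_mask(w_trained, sparsity=0.8)
    winning_ticket = mask * w_init                     # rewound sparse subnetwork
    ```

    The paper's iterative variant repeats this train-prune-rewind cycle, pruning a fixed percentage each round rather than all at once.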

  • Directed Graph Counting and the GGF / 2022.8.10 Casual Remarks II 2022-08-10 19:30:08

    Preview: DAG counting. Strongly connected graph counting. Define the graph generating function (GGF) of a sequence \(\{a_n\}\) as \[\mathbf A(z)=\sum_{n}\dfrac{a_n}{2^{\binom n2}}\dfrac{z^n}{n!}\] (strictly speaking it should be the bivariate \(\mathbf A(z,w)\), but since every application sets \(w=1\) we omit it). Below, all EGFs are set in upright Roman type (\mathrm in \(\TeX\)), and…

  • Attentional Factorization Machines: Learning the Weight of Feature Interactions via Attention Networks 2022-05-26 12:33:06

    Contents: overview, main content, attention network, details, code. Xiao J., Ye H., He X., Zhang H., Wu F. and Chua T. Attentional factorization machines: learning the weight of feature interactions via attention networks. In International Joint Conference on Artificial Intelligence (IJCAI).

  • Global Context-Aware Progressive Aggregation Network for Salient Object Detection Notes 2021-11-09 04:31:07

    Global Context-Aware Progressive Aggregation Network for Salient Object Detection. Facts: due to the pyramid-like CNN structure, high-level features help locate salient objects roughly, while low-level features help refine boundaries; traditional methods li…

  • Codeforces Round #730 (Div. 2) Editorial 2021-07-08 14:04:39

    Codeforces Round #730 (Div. 2) Editorial. Problem A Exciting Bets. The input contains \(t\) test cases. Given two numbers \(a, b\), one operation increases or decreases both numbers by \(1\) simultaneously; let the two numbers after \(k\) operations be \(a', b'\). Maximize \(\gcd(a', b')\), and find the minimum number of operations \(k\) achieving that maximum. Note: if…
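    Since every operation preserves the difference, any achievable \(\gcd(a', b')\) must divide \(|a' - b'| = |a - b|\), so (when \(a \neq b\)) the best target is \(g = |a - b|\), reached by shifting both numbers to the nearest multiple of \(g\). A minimal sketch of this standard approach (the function name is mine; the \(a = b\) case, where the gcd is unbounded, is returned as \((0, 0)\) by the usual output convention):

    ```python
    def exciting_bets(a, b):
        """Return (max achievable gcd(a', b'), min operations to reach it)."""
        if a == b:
            return (0, 0)              # gcd can be made arbitrarily large
        g = abs(a - b)                 # gcd(a', b') divides |a' - b'| = |a - b|
        r = a % g                      # distance below the next multiple of g
        return (g, min(r, g - r))     # move down r steps or up g - r steps

    # usage: a = 8, b = 5 -> target g = 3, one operation (8,5) -> (9,6)
    result = exciting_bets(8, 5)
    ```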

  • DNN Backpropagation (BP) Study Notes 2021-01-06 16:04:05

    Given: \(a^l = \sigma(z^l) = \sigma(W^l a^{l-1} + b^l)\). Define the quadratic loss function (any other loss also works): \(J(W,b) = \frac{1}{2}\|a^L - y\|^2\). Goal: solve for \(W, b\) of every layer. First, for the output layer \(L\): \[a^L = \sigma(z^L) = \sigma(W^L a^{L-1} + b^L),\qquad J(W,b) = \frac{1}{2}\|a^L - y\|^2 = \frac{1}{2}\|\sigma(W^L a^{L-1} + b^L) - y\|^2\]
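    From this setup the standard BP recursion follows: \(\delta^L = (a^L - y) \odot \sigma'(z^L)\) at the output, then \(\delta^l = (W^{l+1})^T \delta^{l+1} \odot \sigma'(z^l)\) backward, with \(\partial J/\partial W^l = \delta^l (a^{l-1})^T\) and \(\partial J/\partial b^l = \delta^l\). A minimal NumPy sketch under these notes' conventions (sigmoid activation, quadratic loss; layer sizes are illustrative):

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def backprop(x, y, Ws, bs):
        """Gradients of J = 0.5 * ||a^L - y||^2 for a fully connected sigmoid net."""
        # Forward pass: cache every activation a^l (sigma'(z) = a * (1 - a)).
        a, acts = x, [x]
        for W, b in zip(Ws, bs):
            a = sigmoid(W @ a + b)
            acts.append(a)
        # Output layer: delta^L = (a^L - y) * sigma'(z^L).
        delta = (acts[-1] - y) * acts[-1] * (1 - acts[-1])
        grads_W, grads_b = [], []
        # Backward pass: dJ/dW^l = delta^l (a^{l-1})^T, dJ/db^l = delta^l.
        for l in range(len(Ws) - 1, -1, -1):
            grads_W.insert(0, np.outer(delta, acts[l]))
            grads_b.insert(0, delta)
            if l > 0:  # delta^l = (W^{l+1})^T delta^{l+1} * sigma'(z^l)
                delta = (Ws[l].T @ delta) * acts[l] * (1 - acts[l])
        return grads_W, grads_b

    # Tiny example: 2 -> 3 -> 1 network with random weights.
    rng = np.random.default_rng(1)
    Ws = [rng.normal(size=(3, 2)), rng.normal(size=(1, 3))]
    bs = [np.zeros(3), np.zeros(1)]
    x, y = np.array([0.5, -0.2]), np.array([1.0])
    gW, gb = backprop(x, y, Ws, bs)
    ```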

  • Linear Regression 2020-02-06 16:50:54

    Model assumption: \[ h(x) = \omega^T x + b \] Loss function: for a single sample \((x_i, y_i)\), the loss is \(\frac{1}{2}(\omega^T x_i + b - y_i)^2\), denoted \(J_i(\omega, b)\). Differentiating with respect to \(\omega,\ b\): \(dJ_i = d\left(\frac{1}{2}(\omega^T x_i + b - y_i)^2\right) = (\omega^T x_i + b - y_i)\, d(\omega^T x_i + b - y_i)\)
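    Carrying the differential through gives \(\partial J_i/\partial \omega = (\omega^T x_i + b - y_i)\,x_i\) and \(\partial J_i/\partial b = \omega^T x_i + b - y_i\). A minimal stochastic gradient descent sketch using exactly these per-sample gradients (function name, learning rate, and data are illustrative):

    ```python
    import numpy as np

    def sgd_linear_regression(X, y, lr=0.05, epochs=200):
        """Minimize sum_i J_i = 0.5 * (w^T x_i + b - y_i)^2 by per-sample gradient steps."""
        n, d = X.shape
        w, b = np.zeros(d), 0.0
        for _ in range(epochs):
            for i in range(n):
                r = X[i] @ w + b - y[i]   # residual: w^T x_i + b - y_i
                w -= lr * r * X[i]        # dJ_i/dw = r * x_i
                b -= lr * r               # dJ_i/db = r
        return w, b

    # Recover a known noiseless line y = 2x + 1.
    X = np.linspace(-1, 1, 50).reshape(-1, 1)
    y = 2 * X[:, 0] + 1
    w, b = sgd_linear_regression(X, y)
    ```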

  • Simplicial principal component analysis for density functions in Bayes spaces 2019-10-10 21:01:59

    Contents: problem; \(\mathcal{B}^2(I)\); PCA on \(\mathcal{B}^2(I)\). Hron K., Menafoglio A., Templ M., et al. Simplicial principal component analysis for density functions in Bayes spaces[J]. Computational Statistics & Data Analysis, 2016: 330-350. Problem: We know…

Dedicated to sharing technology; learning and progressing together. For infringement concerns, contact [81616952@qq.com]

Copyright (C)ICode9.com, All Rights Reserved.
