Softmax incurs a large computational cost when the output vocabulary is very large. Some feasible approaches are explained below in the setting of the skip-gram pretraining task.

Reasoning about the relations between objects and their properties is a hallmark of intelligence. Here are some notes on relational reasoning neural networks.

This is an introduction to Transformer variants.[1]

FLOPs is a measure of model complexity in deep learning.

FLOPs

Floating-point operations (FLOPs) are a measure of the complexity of neural models.

Assume convolution is implemented as a sliding window and that the nonlinearity function is computed for free.

For convolutional kernels, we have:

$$\textrm{FLOPs} = 2 \cdot H \cdot W \cdot C_\textrm{in} \cdot K^2 \cdot C_\textrm{out},$$

where $H$, $W$, and $C_\textrm{in}$ are the height, width, and number of channels of the input feature map, $K$ is the kernel width (assumed to be symmetric), and $C_\textrm{out}$ is the number of output channels. The factor of $2$ counts one multiplication and one addition per kernel weight.

For MLP layers, we have:

$$\textrm{FLOPs} = 2 \cdot I \cdot O,$$

where $I$ is the input dimensionality and $O$ is the output dimensionality.

• FLOPs is an abbreviation for floating-point operations, which include multiplications, additions, divisions, etc.
• MACs stands for multiply-accumulate operations; one MAC performs $a \leftarrow a + (b \times c)$ and is commonly counted as two FLOPs.
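As a sanity check, the two formulas above can be computed directly. A minimal sketch (the layer shapes below are made-up examples, not from the text):

```python
def conv_flops(h, w, c_in, k, c_out):
    """FLOPs of a KxK convolution over an HxWxC_in feature map.

    One multiplication plus one addition per kernel weight (2 FLOPs
    per MAC); the nonlinearity is assumed to be computed for free.
    """
    return 2 * h * w * c_in * k * k * c_out

def mlp_flops(i, o):
    """FLOPs of a fully connected layer mapping I inputs to O outputs."""
    return 2 * i * o

def flops_to_macs(flops):
    """One MAC (a <- a + b*c) counts as two FLOPs."""
    return flops // 2

# Example: a 3x3 conv on a 56x56x64 feature map producing 128 channels.
f = conv_flops(56, 56, 64, 3, 128)
print(f)                      # 462422016
print(flops_to_macs(f))       # 231211008
print(mlp_flops(4096, 1000))  # 8192000
```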

References

[1] Molchanov, Pavlo, et al. “Pruning convolutional neural networks for resource efficient inference.” arXiv preprint arXiv:1611.06440 (2016).

Calculate the # of trainable parameters by hand.
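Counting by hand follows directly from the layer shapes: a conv layer has $C_\textrm{out} \cdot (C_\textrm{in} K^2 + 1)$ parameters with bias, and a linear layer has $O \cdot (I + 1)$. A small sketch (the example shapes are illustrative):

```python
def conv_params(c_in, k, c_out, bias=True):
    """Trainable parameters of a KxK conv layer: C_out * (C_in * K^2 [+ 1])."""
    return c_out * (c_in * k * k + (1 if bias else 0))

def linear_params(i, o, bias=True):
    """Trainable parameters of a fully connected layer: O * (I [+ 1])."""
    return o * (i + (1 if bias else 0))

# Example: a 3x3 conv from 3 to 64 channels, and a 4096 -> 1000 classifier.
print(conv_params(3, 3, 64))      # 64 * (3*9 + 1) = 1792
print(linear_params(4096, 1000))  # 1000 * (4096 + 1) = 4097000
```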

Dynamic Programming (DP) is ubiquitous in NLP, appearing in Minimum Edit Distance, Viterbi decoding, the forward/backward algorithm, the CKY algorithm, etc.
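As one concrete example, Minimum Edit Distance in its classic DP form, a standard textbook sketch assuming unit costs for insertion, deletion, and substitution:

```python
def min_edit_distance(src, tgt):
    """Levenshtein distance via a (len(src)+1) x (len(tgt)+1) DP table."""
    n, m = len(src), len(tgt)
    # dp[i][j] = edit distance between src[:i] and tgt[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i  # delete all of src[:i]
    for j in range(m + 1):
        dp[0][j] = j  # insert all of tgt[:j]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if src[i - 1] == tgt[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # substitution / match
    return dp[n][m]

print(min_edit_distance("intention", "execution"))  # 5
print(min_edit_distance("kitten", "sitting"))       # 3
```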

Some implementation magic.

The main aim of the convolution op is to extract useful features for downstream tasks. Intuitively, different filters learn to extract different aspects of the input via backprop during training. Afterward, all the extracted features are combined to make decisions.