Image

Notes about image processing.

Ideas

Increment-class classification

How can we make n_class in a classifier increasable without re-training the whole network?

Can we make classifiers work more like human thinking?

  • The basic problem is always “0 or 1”, isn’t it?
  • Such units could then be assembled to build more complex networks.

Convolution

Implementation of convolution

Convolution kernels that extract grayscale and edge features:

import numpy as np

# Set up convolutional weights holding 2 filters, each 3x3 with 3 input channels
w = np.zeros((2, 3, 3, 3))

# The first filter converts the image to grayscale.
# Set up the red, green, and blue channels of the filter.
w[0, 0, :, :] = [[0, 0, 0], [0, 0.3, 0], [0, 0, 0]]
w[0, 1, :, :] = [[0, 0, 0], [0, 0.6, 0], [0, 0, 0]]
w[0, 2, :, :] = [[0, 0, 0], [0, 0.1, 0], [0, 0, 0]]

# Second filter detects horizontal edges in the blue channel.
w[1, 2, :, :] = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]
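The weights above only define the filters. As a minimal sketch (assuming a channels-first input image x of shape (3, H, W); the function name and shapes are illustrative), a naive loop that applies them could look like this:

import numpy as np

def conv_naive(x, w, pad=1, stride=1):
    """Naive convolution. x: (C, H, W) image, w: (F, C, HH, WW) filters."""
    F, C, HH, WW = w.shape
    _, H, W = x.shape
    x_padded = np.pad(x, ((0, 0), (pad, pad), (pad, pad)), mode='constant')
    H_out = (H + 2 * pad - HH) // stride + 1
    W_out = (W + 2 * pad - WW) // stride + 1
    out = np.zeros((F, H_out, W_out))
    for f in range(F):
        for i in range(H_out):
            for j in range(W_out):
                patch = x_padded[:, i * stride:i * stride + HH, j * stride:j * stride + WW]
                out[f, i, j] = np.sum(patch * w[f])
    return out

# Apply the grayscale and edge filters defined above to a random "image"
x = np.random.rand(3, 32, 32)
out = conv_naive(x, w)  # shape (2, 32, 32): one grayscale map, one edge map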

im2col

An example of im2col:
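A minimal im2col sketch (assuming a channels-first image of shape (C, H, W), stride 1 and no padding; names are illustrative). Each column of the result is one flattened receptive field, so convolution becomes a single matrix multiplication with the flattened filters:

import numpy as np

def im2col(x, hh, ww):
    """x: (C, H, W) image; hh, ww: filter height and width."""
    C, H, W = x.shape
    H_out, W_out = H - hh + 1, W - ww + 1
    cols = np.zeros((C * hh * ww, H_out * W_out))
    col = 0
    for i in range(H_out):
        for j in range(W_out):
            cols[:, col] = x[:, i:i + hh, j:j + ww].reshape(-1)
            col += 1
    return cols

# Convolution as one matrix multiplication (w as in the section above):
# out = (w.reshape(2, -1) @ im2col(x, 3, 3)).reshape(2, H_out, W_out)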


Math

Math notes.

Greek alphabet

Name Lowercase Uppercase
alpha \(\alpha\) \(A\)
beta \(\beta\) \(B\)
gamma \(\gamma\) \(\Gamma\)
delta \(\delta\) \(\Delta\)
epsilon \(\epsilon\) \(E\)
zeta \(\zeta\) \(Z\)
eta \(\eta\) \(H\)
theta \(\theta\) \(\Theta\)
iota \(\iota\) \(I\)
kappa \(\kappa\) \(K\)
lambda \(\lambda\) \(\Lambda\)
mu \(\mu\) \(M\)
nu \(\nu\) \(N\)
xi \(\xi\) \(\Xi\)
omicron \(\omicron\) \(O\)
pi \(\pi\) \(\Pi\)
rho \(\rho\) \(P\)
sigma \(\sigma\) \(\Sigma\)
tau \(\tau\) \(T\)
upsilon \(\upsilon\) \(\Upsilon\)
phi \(\phi\) \(\Phi\)
chi \(\chi\) \(X\)
psi \(\psi\) \(\Psi\)
omega \(\omega\) \(\Omega\)


Datasets

Image datasets: MNIST

Calculus

I can’t stand my poor math any more, so I’m starting to re-learn math from calculus.

These notes will record formulas and anything interesting associated with calculus.

Trigonometry

ASTC method

ASTC ("All, Sine, Tangent, Cosine") is a mnemonic for which trig functions are positive in each quadrant: all of them in quadrant I, only sine in quadrant II, only tangent in quadrant III, and only cosine in quadrant IV.

Trig Identities

$$ \begin{array}{l} \cos^2(x) + \sin^2(x) = 1 \\
1 + \tan^2(x) = \sec^2(x) \end{array} $$

$$ \begin{array}{lcl} \sin(A+B) & = & \sin(A)\cos(B) + \cos(A)\sin(B) \\
\cos(A+B) & = & \cos(A)\cos(B) - \sin(A)\sin(B) \\
\sin(2x) & = & 2\sin(x)\cos(x) \\
\cos(2x) & = & 2\cos^2(x) - 1 = 1 - 2\sin^2(x) \end{array} $$
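For instance, the double-angle formulas are just the sum formulas with \(B = A\); for \(\cos(2x)\), apply \(\cos^2(x) + \sin^2(x) = 1\) to rewrite the intermediate result:

$$ \cos(2x) = \cos^2(x) - \sin^2(x) = 2\cos^2(x) - 1 = 1 - 2\sin^2(x) $$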


Data Augmentation

Data augmentation is the process of increasing the size of a dataset by transforming it in ways that a neural network is unlikely to learn by itself.

This article will introduce:

  • Common data augmentation methods.
  • Image augmentation with imgaug (see the sketch after this list).
  • Popular tools for data augmentation.
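A minimal sketch with imgaug (the particular augmenters and parameters below are just illustrative choices):

import numpy as np
import imgaug.augmenters as iaa

# A small augmentation pipeline: random flips, crops and blur
seq = iaa.Sequential([
    iaa.Fliplr(0.5),                  # horizontally flip 50% of the images
    iaa.Crop(percent=(0, 0.1)),       # random crops up to 10% per side
    iaa.GaussianBlur(sigma=(0, 1.0)), # blur with a random sigma
])

# Batch of 4 random "images" standing in for real data
images = np.random.randint(0, 255, (4, 64, 64, 3), dtype=np.uint8)
images_aug = seq(images=images)       # augmented batch, same shape as `images`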


Linear Algebra

Linear Algebra notes.

Matrix multiplication

The origin of matrix multiplication

Reference: "What problem were determinants and matrices originally invented to solve?" - 马同学's answer on 知乎 (Zhihu)

Original motivation: solving systems of linear equations.

Example: \(YC_rC_b \to RGB\) conversion

  • From black-and-white TV to color TV
    • compatibility problem
    • \(Y\): the grayscale (luma) image
$$ \begin{cases} 0.299R + 0.587G + 0.114B = Y \\ 0.500R - 0.419G - 0.081B + 128 = C_r \\ -0.169R - 0.331G + 0.500B + 128 = C_b \end{cases} $$
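Written in matrix form, recovering \(R, G, B\) from \(Y, C_r, C_b\) is exactly the problem of solving a 3×3 linear system. A minimal numpy sketch (coefficients copied from the system above):

import numpy as np

# Coefficient matrix A so that A @ [R, G, B] = [Y, Cr - 128, Cb - 128]
A = np.array([
    [ 0.299,  0.587,  0.114],
    [ 0.500, -0.419, -0.081],
    [-0.169, -0.331,  0.500],
])

def ycrcb_to_rgb(y, cr, cb):
    rhs = np.array([y, cr - 128, cb - 128])
    return np.linalg.solve(A, rhs)  # [R, G, B]

print(ycrcb_to_rgb(128, 128, 128))  # a mid-gray pixel: roughly [128, 128, 128]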


Metrics

Sometimes it’s hard to tell the difference between precision, accuracy, recall and so on, especially for newbies like me.

But let’s try to distinguish them with stories. In this article, you will see some commonly used metrics, including:

  • Metrics for binary classification: accuracy, precision, recall, F1-score and so on (see the sketch after this list)
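As a minimal numpy sketch (the label arrays below are made-up examples), all four metrics fall out of the four counts of the confusion matrix:

import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])

tp = np.sum((y_pred == 1) & (y_true == 1))  # true positives
fp = np.sum((y_pred == 1) & (y_true == 0))  # false positives
fn = np.sum((y_pred == 0) & (y_true == 1))  # false negatives
tn = np.sum((y_pred == 0) & (y_true == 0))  # true negatives

accuracy  = (tp + tn) / len(y_true)  # fraction of all predictions that are correct
precision = tp / (tp + fp)           # of predicted positives, how many are really positive
recall    = tp / (tp + fn)           # of real positives, how many are found
f1        = 2 * precision * recall / (precision + recall)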


Loss Function

This article will introduce:

  • expected risk
  • some common loss functions

Loss function in classification

The goal of a classification problem, and of many machine learning problems, is: given a training set \(\{(x^{(i)}, y^{(i)}); i=1,\cdots,m\}\), find a good predictor \(f\) so that \(f(x^{(i)})\) is a good estimate of \(y^{(i)}\).

Why do we need a loss function?

We need a loss function to measure how “close” the estimated value \(\hat y^{(i)}\) is to the target value \(y^{(i)}\), and we usually optimize our model by minimizing this loss.
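As a minimal sketch, one common choice for classification is the cross-entropy loss. The numpy example below assumes \(\hat y\) is a matrix of predicted class probabilities (one row per sample, rows summing to 1) and \(y\) holds integer class labels; the names are illustrative:

import numpy as np

def cross_entropy(y_hat, y, eps=1e-12):
    """Average cross-entropy loss.

    y_hat: (m, n_class) predicted probabilities
    y:     (m,) integer class labels
    """
    m = y.shape[0]
    # pick the predicted probability of the correct class for each sample
    p_correct = y_hat[np.arange(m), y]
    return -np.mean(np.log(p_correct + eps))

y_hat = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
y = np.array([0, 1])
print(cross_entropy(y_hat, y))  # small loss, since both predictions are mostly correct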


Python

Some python notes.

Package notes

Jupyter Notebook

Jupyter kernels

# List kernels
jupyter kernelspec list

# Add python kernel to jupyter
# --name acts like an id. This command can also be used to change the display name of an existing kernel.
/path/to/kernel/env/bin/python -m ipykernel install --prefix=/path/to/jupyter/env --name 'python-my-env' --display-name 'Python x - Display name'

# Remove kernels (Or just remove the whole directory listed with the command above)
jupyter kernelspec remove <jupyter-kernel-name>

Tricks

  • !pwd: prefixing a command with ! runs it in the system shell from a notebook cell
  • np.dot??: appending ?? shows the docstring and source code of an object

Auto reload external python modules

IPython extension to reload modules before executing user code.

%load_ext autoreload
%autoreload 2


Optimization

This article will first introduce gradient descent, and then go through some of the most popular optimization methods, such as:

  • SGD
  • RMSprop
  • Adam
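All of these build on the same basic update rule as plain gradient descent. A minimal sketch on a toy one-dimensional function (the function and learning rate are illustrative):

def grad_f(x):
    # gradient of f(x) = (x - 3)^2, which is minimized at x = 3
    return 2 * (x - 3)

x = 0.0
lr = 0.1                    # learning rate (step size)
for step in range(100):
    x -= lr * grad_f(x)     # gradient descent update: x <- x - lr * f'(x)

print(x)                    # converges close to 3.0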