Video Communication Course 2022 (Part5)

Nick Pai
5 min read · Jun 19, 2023


The following information represents the comprehensive notes from “Video Communication Course 2022 (Part 5)”. These notes cover the fundamental concepts, principles, and practical applications presented in the course. They serve as a valuable resource for students and professionals seeking to enhance their knowledge and skills in the field of video communication.

Part 5 contains the following sections:

  1. Compression
  2. DPCM (Differential Pulse Code Modulation)
  3. Data Histogram
  4. Compression Model
  5. Natural coding
  6. Huffman coding
  7. Hamming coding
  8. Coding Redundancy

Compression

Compression reduces the size of image data files while retaining the necessary information.

  • Lossless
    — No data are lost
    — The original uncompressed image can be recreated exactly.
    — e.g. Run-Length, Huffman, Lempel-Ziv
  • Lossy
    — Allows some loss of the actual image data
    — The original uncompressed image cannot be exactly recreated from the compressed file.
    — e.g. JPEG (image), MPEG (video), MP3 (audio)
Data Compression Methods (figure): A gives a better reconstruction result than B.

Criteria

  • Compression Ratio (CR)
    CR = Input Size / Output Size = Uncompressed Size / Compressed Size
  • Bits Per Pixel (BPP)
    BPP = Number of Bits in the compressed data / Number of Pixels

What’s the relationship between CR and BPP?

CR and BPP are directly related: if each uncompressed pixel needs 8 bits, then BPP = 8 / CR. For example, with CR = 10, BPP = 8 / 10 = 0.8.

For example, consider a 256×256 image with 256 gray levels (8 bits per pixel). Its uncompressed size is 256 × 256 × 8 = 524,288 bits (65,536 bytes).

Compression Ratio and Bits Per Pixel (worked example shown as a figure in the original notes)
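A quick numeric check of the two criteria, as a minimal Python sketch. The compressed size used here is hypothetical (it is not given in the notes) and is chosen so that CR is about 10, matching the BPP = 8 / CR relationship described above.

```python
# Uncompressed size of a 256x256 image with 256 gray levels (8 bits per pixel)
width, height, bits_per_pixel = 256, 256, 8
uncompressed_bits = width * height * bits_per_pixel   # 524,288 bits (65,536 bytes)

# Hypothetical compressed size (not from the notes), chosen to give CR of about 10
compressed_bits = 52_429

cr = uncompressed_bits / compressed_bits              # Compression Ratio, ~10.0
bpp = compressed_bits / (width * height)              # Bits Per Pixel, ~0.8
print(f"CR = {cr:.1f}, BPP = {bpp:.2f}, 8 / CR = {bits_per_pixel / cr:.2f}")
```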

DPCM (Differential Pulse Code Modulation)

Compared below with individual (frame-by-frame) encoding, a.k.a. Motion JPEG.

  • DPCM compresses video by encoding the difference between successive frames (see the frame-differencing sketch after the comparison below).
  • DPCM can achieve a high compression ratio.
  • It removes temporal redundancy.

DPCM & Individual Encode Comparison

DPCM

  • High compression ratio (CR)
  • High complexity
  • Removes temporal redundancy

Individual Encode

  • Low compression ratio (CR)
  • Low complexity
  • Keeps temporal redundancy, which means encoding or decoding can begin or stop at any position
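To make the frame-differencing idea concrete, here is a minimal sketch (an illustration under simple assumptions, not the course’s exact scheme) using NumPy: the first frame is kept as-is and every later frame is represented only by its difference from the previous one. Because consecutive frames are similar, most difference values are zero, which is exactly the temporal redundancy a later entropy coder can exploit.

```python
import numpy as np

def dpcm_encode(frames):
    """Keep the first frame as-is, then store only frame-to-frame differences."""
    residuals = [frames[0].copy()]
    prev = frames[0]
    for frame in frames[1:]:
        residuals.append(frame.astype(np.int16) - prev.astype(np.int16))
        prev = frame
    return residuals

def dpcm_decode(residuals):
    """Rebuild every frame by accumulating the stored differences."""
    frames = [residuals[0].copy()]
    for diff in residuals[1:]:
        frames.append((frames[-1].astype(np.int16) + diff).astype(np.uint8))
    return frames

# Two nearly identical 4x4 "frames": almost all difference values are zero.
f0 = np.full((4, 4), 100, dtype=np.uint8)
f1 = f0.copy()
f1[0, 0] = 105
decoded = dpcm_decode(dpcm_encode([f0, f1]))
assert np.array_equal(decoded[1], f1)   # lossless reconstruction of the second frame
```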

Data Histogram

Why Data Compression?

To meet operational requirements under existing system performance constraints, such as limited bandwidth.

Multimedia Communication
= Data Compression + Networking + System Integration

Compression Model

Source encoder

  • remove input redundancies
  • e.g. Huffman code

Channel encoder

  • increase the noise immunity of the source encoder’s output
  • e.g. Hamming code

The data output by the source encoder is very sensitive to noise during transmission: because almost all redundancy has been removed, any error introduced by noise has a large impact on the information being transmitted.

A single error in the compressed data can corrupt a large region of the original image.

Natural coding

Uses the same bit length to represent every character (a fixed-length code).

e.g. A = 00, B = 01, C = 10, D = 11

Huffman coding

Lossless, source-dependent coding.
High-frequency characters get shorter codewords.

  • For comparison, an assignment such as A = 00, B = 01, C = 000, D = 001, E = 010 is not prefix-free and therefore cannot be decoded unambiguously.
  1. Assign a weight to each character, e.g. its frequency of occurrence.
  2. Merge the two nodes with the smallest weights into a new node whose weight is their sum.
  3. Repeat step 2 until a single tree is left (its weight equals the sum over all characters).
  4. Encode the tree from the root to each leaf: at each node, assign 0 to the left branch and 1 to the right branch.
Resulting Huffman code: A = 0, B = 10, C = 110, D = 1110, E = 1111
Encoding and decoding examples (figures in the original notes)
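As a concrete illustration of steps 1 to 4, here is a minimal Python sketch built on heapq. The character weights below are hypothetical (the notes do not give frequencies); they are chosen so that the resulting code matches the example above.

```python
import heapq

def huffman_code(weights):
    """Build a Huffman code table from a {character: weight} mapping."""
    # Heap entries are (weight, tie_breaker, tree); a tree is a character or a (left, right) pair.
    heap = [(w, i, ch) for i, (ch, w) in enumerate(weights.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)                      # two lightest nodes
        w2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, counter, (t1, t2)))   # merge into a new node
        counter += 1
    codes = {}
    def walk(tree, prefix=""):
        if isinstance(tree, tuple):                          # internal node: 0 left, 1 right
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix or "0"
    walk(heap[0][2])
    return codes

# Hypothetical weights (frequencies), chosen so the result matches the notes.
print(huffman_code({"A": 8, "B": 4, "C": 2, "D": 1, "E": 1}))
# {'A': '0', 'B': '10', 'C': '110', 'D': '1110', 'E': '1111'}
```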

Hamming coding

Channel coding; a linear code, used for one-bit error correction.

By default a Hamming code assumes a block contains one error or none; it cannot handle two or more errors.

When data has already been compressed by another algorithm (e.g. Huffman), protecting it with a Hamming code expands the data length. When the data is transferred into RAM, however, the expanded length is not a problem because RAM is very fast.

The longer a piece of data is, the more parity bits are needed to protect it.
Likewise, the higher the probability of error (error rate), the more frequently parity bits must be inserted for protection.

For example:
A piece of data of n = 1024 bits is to be transmitted.

  • If the error rate is 20%, roughly one bit in every 5 is in error, so each block of about 5 data bits needs k = 4 parity bits.
  • If the error rate is 5%, roughly one bit in every 20 is in error, so k = 5 parity bits should be selected.
  • If the error rate is 2%, roughly one bit in every 50 is in error, so k = 6 parity bits should be selected.
  • If the error rate is 1%, roughly one bit in every 100 is in error, so k = 7 parity bits should be selected.

In each case k is the smallest value satisfying 2^k ≥ m + k + 1, where m is the number of data bits per block (checked in the sketch below).
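A small Python check of the rule 2^k ≥ m + k + 1 (this snippet is an illustration, not part of the original notes):

```python
def parity_bits_needed(m):
    """Smallest k with 2**k >= m + k + 1, for m data bits per block."""
    k = 0
    while 2 ** k < m + k + 1:
        k += 1
    return k

for block in (5, 20, 50, 100):   # roughly one expected error per block
    print(f"{block} data bits -> k = {parity_bits_needed(block)} parity bits")
# 5 -> 4, 20 -> 5, 50 -> 6, 100 -> 7, matching the error-rate table above
```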

Example

Send the binary data <01001110> using a Hamming code.

Suppose the received Hamming codeword is <100111011110>.

Use the parity bits to check which bit is in error; the parity checks give syndrome 0110 (binary) = 6, so bit 6 is in error.

Check that the corrected result <100110011110> is correct: if it is, all parity checks (the syndrome) will be 0.
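Here is a minimal Python sketch of this example. It assumes the common convention of even parity with parity bits at positions 1, 2, 4, 8 (the notes do not spell out the convention); under that assumption it reproduces the transmitted codeword, the error position (bit 6), and the corrected result shown above.

```python
def hamming_encode(data_bits):
    """Place data bits at non-power-of-two positions, then fill in even-parity bits."""
    n = len(data_bits)
    k = 0
    while 2 ** k < n + k + 1:        # number of parity bits needed
        k += 1
    total = n + k
    code = [0] * (total + 1)         # 1-indexed codeword, index 0 unused
    it = iter(data_bits)
    for pos in range(1, total + 1):
        if pos & (pos - 1):          # not a power of two -> data position
            code[pos] = next(it)
    for p in range(k):               # parity bit at position 2**p covers positions with that bit set
        pp = 2 ** p
        for pos in range(1, total + 1):
            if pos != pp and pos & pp:
                code[pp] ^= code[pos]
    return code[1:]

def error_position(codeword):
    """XOR the positions of all 1-bits; a nonzero result is the single-error position."""
    syndrome = 0
    for pos, bit in enumerate(codeword, start=1):
        if bit:
            syndrome ^= pos
    return syndrome

data = [0, 1, 0, 0, 1, 1, 1, 0]                   # <01001110>
sent = hamming_encode(data)                       # <100110011110>
received = [1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0]   # <100111011110>, one bit flipped
pos = error_position(received)                    # 6 -> bit 6 is in error
received[pos - 1] ^= 1                            # corrected back to <100110011110>
assert received == sent and error_position(received) == 0
```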

Thank you for taking the time to read this article, and I sincerely hope that the information provided proves to be valuable to you. Whether you are a student, professional, or simply someone interested in video communication, it is my utmost wish that these notes enhance your understanding and contribute to your success in this field. Thank you once again, and best of luck on your journey in the world of video communication!
