
algorithm - Compressibility Example

Question:

From my algorithms textbook:

The annual county horse race is bringing in three thoroughbreds who have never competed against one another. Excited, you study their past 200 races and summarize these as probability distributions over four outcomes: first (“first place”), second, third, and other.

Outcome   Aurora   Whirlwind   Phantasm
first     0.15     0.30        0.20
second    0.10     0.05        0.30
third     0.70     0.25        0.30
other     0.05     0.40        0.20

Which horse is the most predictable? One quantitative approach to this question is to look at compressibility. Write down the history of each horse as a string of 200 values (first, second, third, other). The total number of bits needed to encode these track-record strings can then be computed using Huffman’s algorithm. This works out to 290 bits for Aurora, 380 for Whirlwind, and 420 for Phantasm (check it!). Aurora has the shortest encoding and is therefore in a strong sense the most predictable.

How did they get 420 for Phantasm? I keep getting 400 bytes, like so:

Combine first and other = 0.4; combine second and third = 0.6. Each of the four outcomes ends up with a 2-bit code, so the total is 200 × 2 = 400.

Is there something I've misunderstood about the Huffman encoding algorithm?

Textbook available here: http://www.cs.berkeley.edu/~vazirani/algorithms.html (page 156).

Answer:

I think you're right: Phantasm's 200 outcomes can be represented using 400 bits (not bytes). 290 for Aurora and 380 for Whirlwind are correct.

The correct Huffman code is generated in the following manner:

  1. Combine the two least probable outcomes: 0.2 and 0.2. Get 0.4.
  2. Combine the next two least probable outcomes: 0.3 and 0.3. Get 0.6.
  3. Combine 0.4 and 0.6. Get 1.0.
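The merge sequence above is easy to check mechanically. Here is a short Python sketch (the helper name `huffman_cost` is mine, not from the textbook) that runs Huffman's merging on each horse's distribution; it uses the standard fact that the expected code length equals the sum of all merged node weights:

```python
import heapq

def huffman_cost(probs, n=200):
    """Total bits to Huffman-encode a string of n symbols whose
    empirical distribution is `probs` (probabilities summing to 1)."""
    heap = list(probs)
    heapq.heapify(heap)
    total = 0.0
    while len(heap) > 1:
        a = heapq.heappop(heap)  # the two least probable nodes
        b = heapq.heappop(heap)
        total += a + b           # every symbol under this merge gains 1 bit
        heapq.heappush(heap, a + b)
    return round(n * total)

horses = {
    "Aurora":    [0.15, 0.10, 0.70, 0.05],
    "Whirlwind": [0.30, 0.05, 0.25, 0.40],
    "Phantasm":  [0.20, 0.30, 0.30, 0.20],
}
for name, p in horses.items():
    print(name, huffman_cost(p))  # 290, 380, 400
```

This reproduces 290 for Aurora and 380 for Whirlwind, and gives 400 (not 420) for Phantasm, matching the reasoning above.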

You would get 420 bits if you did this instead:

  1. Combine the two least probable outcomes: 0.2 and 0.2. Get 0.4.
  2. Combine 0.4 and 0.3. (Wrong!) Get 0.7.
  3. Combine 0.7 and 0.3. Get 1.0.
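The faulty merge order assigns a 1-bit code to the 0.3 merged last, a 2-bit code to the other 0.3, and 3-bit codes to the two 0.2s. A quick check (the pairs list here is just my transcription of those depths) confirms the 420 figure:

```python
# (probability, code length) pairs implied by the faulty merge order:
# the 0.3 merged last gets 1 bit, the other 0.3 gets 2 bits,
# and each 0.2 gets 3 bits.
faulty = [(0.30, 1), (0.30, 2), (0.20, 3), (0.20, 3)]
print(round(200 * sum(p * l for p, l in faulty)))  # 420
```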