Transcript:

Arithmetic coding is an elegant and powerful compression technique, and it represents the current state of the art in lossless compression. The basic idea behind arithmetic coding is this: you have a sequence of source symbols to be encoded, the message, and you associate to this sequence a subinterval of the interval from 0 to 1, say the subinterval from a to b. Then you choose a number in that subinterval, ideally one with a short binary expansion, perhaps 0.01101 (with a subscript 2 for the base-2, or binary, expansion). Finally, you encode the original input sequence as that binary sequence, in this case 01101. That's the basic idea behind arithmetic coding.

In this video we'll talk a little bit about some of the properties of arithmetic coding. In the next couple of videos we'll go through some simple concrete examples to illustrate exactly what this looks like, and then we'll formulate the algorithm more generally and examine some of its properties.

We know that Huffman codes are optimal, but they're only optimal as symbol codes, and any symbol code suffers from certain inefficiencies because it has to have integer-valued codeword lengths. In a symbol code, you encode each source symbol with its own codeword: maybe one symbol is encoded as 01, another as 101, another (the same as the first) as 01 again, and so on. Each of these codewords obviously has to have an integer-valued length, and as a result a symbol code can incur up to 1 bit of inefficiency per source symbol encoded.
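The interval-narrowing idea described above can be sketched in a few lines of Python. Everything here is hypothetical: the three-symbol alphabet, its probabilities, and the helper names are made up for illustration, not taken from the video.

```python
# Sketch of the core idea: narrow [0, 1) to a subinterval according to each
# symbol's probability, then emit a short binary fraction inside that interval.
from fractions import Fraction

# Assumed model (made up for illustration): symbol -> probability.
probs = {"a": Fraction(1, 2), "b": Fraction(1, 4), "c": Fraction(1, 4)}
cum, running = {}, Fraction(0)
for s, p in probs.items():        # cumulative probability of each symbol
    cum[s] = running
    running += p

def interval_for(message):
    """Return (low, width) of the subinterval associated with a message."""
    low, width = Fraction(0), Fraction(1)
    for s in message:
        low += width * cum[s]     # shift into the symbol's slice
        width *= probs[s]         # shrink by the symbol's probability
    return low, width

def short_binary_fraction(low, width):
    """Greedily build bits of a number 0.b1b2... lying in [low, low + width)."""
    bits, value, step = [], Fraction(0), Fraction(1, 2)
    while not (low <= value < low + width):
        if value + step < low + width:
            value += step
            bits.append("1")
        else:
            bits.append("0")
        step /= 2
    return "".join(bits)

low, width = interval_for("aab")
code = short_binary_fraction(low, width)
print(f"interval: [{low}, {low + width}), code bits: {code}")
```

For the message "aab" this narrows the interval to [1/8, 3/16) and the short binary fraction 0.001 (that is, 1/8) falls inside it, so the whole message is encoded by the bits 001.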
So if you're encoding N source symbols, you could have up to N bits of inefficiency: the expected length can be up to 1 bit more than the entropy for each codeword. One way to get around this, which we talked about, is to group the source symbols into blocks, and instead of having a codeword for each source symbol you have a codeword for each block of source symbols. In fact, we saw that as the block size goes to infinity, the expected encoded length per source symbol approaches the entropy, so you can get as close to the entropy as you want. Unfortunately, as a practical method this does not work, because the number of symbols in the block alphabet increases exponentially with the block size. For something like Huffman coding, where you're explicitly constructing the code, that's just not going to work.

Arithmetic coding, on the other hand, handles this in a simple and computationally efficient way. It gets around the integer-valued codeword length problem in a way that allows you to encode the entire sequence, the entire message, and still get within a few bits of the ideal encoded length of the whole thing. Before, we were getting within one bit for each source symbol, or one bit for each block; now we're getting within a few bits for the whole message. This is a big win: arithmetic coding can achieve much greater compression than Huffman coding. So arithmetic coding is near optimal, and the key is that it's also efficient, fast, and computationally practical.
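To make the "up to 1 bit per symbol" overhead and the exponential block-alphabet growth concrete, here is a small numeric sketch. The source probabilities are made-up numbers, chosen only to illustrate the point.

```python
# Hypothetical two-symbol source: a Huffman code for a binary alphabet must
# spend 1 bit per symbol, while the entropy can be far smaller.
import math

p = 0.9  # assumed P("a"); P("b") = 0.1
entropy = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
print(f"entropy per symbol:   {entropy:.3f} bits")  # well below 1
print("Huffman cost/symbol:  1 bit")                # codewords "0" and "1"
print(f"overhead per symbol:  {1 - entropy:.3f} bits")

# Blocking shrinks the per-symbol overhead, but the block alphabet that a
# Huffman construction must enumerate grows exponentially with block size.
for k in (1, 4, 8, 16):
    print(f"block size {k:2d}: alphabet size {2 ** k}")
```

With these numbers the entropy is about 0.469 bits per symbol, so a plain symbol code wastes more than half a bit on every symbol, while blocks of 16 symbols already require a 65,536-entry alphabet.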
It scales up to large messages, so you can get near optimal even on large messages. Another very attractive feature of arithmetic coding is that it can accommodate essentially any customized probabilistic model of the source in a very natural way. So far we've only been talking about i.i.d. sources, but in more realistic situations you're going to have more complicated sources that are not i.i.d., and so you want a realistic probabilistic model for that source. Arithmetic coding allows you to take basically any statistical model you like and combine it with the coder in essentially a plug-and-play manner, because arithmetic coding treats the probabilistic model basically as a black box. In fact, your model can even be adaptive: as you're encoding the message and seeing the source symbols, the model can be adapting on the fly. This is an especially nice feature of arithmetic coding with respect to real sources like language, images, and audio, which have lots of structure and for which you want a more complicated statistical model than just an i.i.d. model. We can summarize this property by saying that arithmetic coding separates modeling from coding, and that's a very nice feature to have. All right, those are some nice features of arithmetic coding, and next we'll work through a couple of concrete examples to see how it works.
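One way to picture the "separates modeling from coding" idea is a coder that only ever asks the model two things: the interval currently assigned to a symbol, and a chance to update its statistics afterwards. The interface below is a hypothetical sketch, not the algorithm from the upcoming examples; the adaptive count model is just one possible plug-in.

```python
# Sketch of the model-as-black-box separation: the interval-narrowing coder
# works with any object exposing interval(symbol) and update(symbol).
from fractions import Fraction

class AdaptiveModel:
    """Adaptive frequency model: probabilities track observed counts."""
    def __init__(self, alphabet):
        self.counts = {s: 1 for s in alphabet}  # start roughly uniform

    def interval(self, symbol):
        """Return (cumulative, probability) of `symbol` under current counts."""
        total = sum(self.counts.values())
        cum = Fraction(0)
        for s in sorted(self.counts):
            p = Fraction(self.counts[s], total)
            if s == symbol:
                return cum, p
            cum += p
        raise KeyError(symbol)

    def update(self, symbol):
        self.counts[symbol] += 1  # adapt on the fly as symbols are seen

def encode_interval(message, model):
    """Narrow [0, 1) using whatever model is plugged in (the black box)."""
    low, width = Fraction(0), Fraction(1)
    for s in message:
        c, p = model.interval(s)
        low += width * c
        width *= p
        model.update(s)  # the coder never looks inside the model
    return low, width

low, width = encode_interval("aaab", AdaptiveModel("ab"))
print(f"interval: [{low}, {low + width})")
```

Note how each repeated "a" raises the model's estimate of P(a), so later occurrences shrink the interval less; swapping in a different model class changes the statistics but leaves `encode_interval` untouched, which is exactly the plug-and-play property described above.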