Research interests: Artificial Intelligence, Machine Learning, Data Compression, Natural Language Modelling
Bio: Did my talk on monitoring bring you here? I am currently looking at the relationship between large-alphabet data compression/modelling/prediction/learning and its binary-sequence counterpart. After several years' education in theoretical computer science, everything to me is naturally binarised. Data compression, too, is often studied for large but theoretically convenient 'universal' classes of binary sequences, and only as a second step, if at all, adapted to larger alphabets. Many asymptotic compression results are derived for the binary alphabet and carry strong theoretical guarantees only for small alphabets. When it comes to practical data compression or modelling, however, exploiting the original structure of the data can crucially improve performance; compression and modelling for concrete tasks is therefore usually done on the original or suitably represented data, rather than on its binary encoding. My interest lies in investigating the relationship between these two paradigms and in experimentally comparing the approaches used in binary data compression with those used in large-alphabet data modelling.

