|| Flavio Teixeira has joined the START-project! He will solve all my dictionary learning problems
and remind me to water the plant. Also I have created a monster uaahahahaha dictionary
learning algorithm that is fast and does not need to know the sparsity level or the number
of atoms. Needless to say that we will never be able to prove
anything. However the really bad part is that it also does sensible things with image patches,
meaning real data, so there is no way to avoid writing a paper anymore.
|| I corrected and resubmitted the ITKrM (aka nightmare) paper,
which now also features pretty pictures.
According to plan, all of my time goes into preparing the lecture - well, ok, secretly
I'm comparing the ITKrM algorithm to its cheap version which means watching
a lot of jumping points.
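For anyone who has not met ITKrM (Iterative Thresholding and K residual Means): one iteration - threshold, then update each atom with signed residual means - can be sketched roughly like this. This is only a toy Python version with variable names and simplifications of my own (e.g. the dead-atom replacement), not the exact formulation from the paper:

```python
import numpy as np

def itkrm_step(Y, D, S):
    """One toy ITKrM iteration.
    Y: d x N matrix of signals, D: d x K dictionary with unit-norm
    columns, S: sparsity level. Returns the updated dictionary."""
    d, K = D.shape
    D_new = np.zeros_like(D)
    for y in Y.T:
        ip = D.T @ y
        # thresholding: pick the S atoms with largest absolute inner product
        I = np.argsort(np.abs(ip))[-S:]
        DI = D[:, I]
        # residual after projecting y onto the span of the selected atoms
        r = y - DI @ (np.linalg.pinv(DI) @ y)
        for k in I:
            # residual mean update: add back atom k's rank-one contribution,
            # with sign correction so contributions do not cancel
            D_new[:, k] += np.sign(ip[k]) * (r + D[:, k] * ip[k])
    # replace atoms that were never selected, then renormalise all columns
    norms = np.linalg.norm(D_new, axis=0)
    dead = norms < 1e-12
    D_new[:, dead] = np.random.randn(d, int(dead.sum()))
    return D_new / np.linalg.norm(D_new, axis=0)
```

The cheap version mentioned above would replace the projection step by something less expensive; the sketch here does the full projection via the pseudoinverse.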
|| Went to the
Mathematics of Signal Processing Workshop in Bonn, nice! Then I tried to decide
which of my nicely working algorithms should be turned into a paper or rather
which aspect should go to which paper to stay below 30 pages, and how much math I really need to add as
opposed to how much math I would like to add. Luckily I then found a suitable excuse
not to decide anything in the preparation of the time-frequency lecture :D.
|| Utter frustration, I have tons of algorithms that work
nicely but I can't prove it. Also I'm in panic because
I have to get 6 papers accepted in 5 years to keep my job.
At the current rate I'm not going to make it.
|| Habemus postdocem! If we manage to actually lift
the administrative burden, Flavio Teixeira will join the
START-project in spring. The advertising talk for masked
dictionary learning in Berlin
sold well - too bad the product is not finished yet.
In hindsight the trip counts as pretty disastrous,
since I paid the hotel for a single room but, as it turned out a
week later, I was definitely not alone in the
room once you also count the bed bugs.
|| I hate writing papers. It is so much easier to watch the points
in my simulations jump around and improve the algorithms
than to actually hammer out all the nasty little details of why
the masked dictionary learning stuff
is locally convergent. Obviously, after the happy, enthusiastic phase,
Michi and I are now experiencing some
set-backs with our analysis operator learning -
time to ask Andi to teach us how to use the supercomputer.
Good news is we potentially bagged a researcher at the
medical university, who will give us real world problems
and free coffee! Finally anybody interested in our AOL or masked DL
stuff has the chance to see it/us in
|| Andi Kofler will join the dictionary learning crowd for 4 months with the
goals of a) making dictionary learning faster and b) finding out whether
he wants to get a real job or do a PhD with me. The codes seminar
is organised, luckily teaching stuff I don't know much about
is getting easier every time I do it. Soon I'll be ready
for PDEs .... hahaha no way.
|| I set a new personal record in the category 'number of means
of transport to arrive at a scientific event', that is, 9 to
cover the distance Innsbruck -
where again I was allowed to entertain people with a talk called
'dictionary learning - fast and dirty'
and on top of that lost my fear of the machine learning community, who
successfully convinced me that I only need to be afraid of the computer vision
community. Then I went on holidays and afterwards into panic about
having to organise the seminar on codes.
|| I managed to write the final report of the Schroedinger project.
Obviously this involved a lot of procrastination. To atone, I thought
I'd turn the LaTeX file into a template. So here is an absolutely not
official LaTeX template for the
FWF final project report (Endbericht),
I cannot disclaim enough, but hope someone will find it useful.
Also I went to a workshop in
where I was allowed to
entertain people with a talk called 'dictionary learning - fast and dirty'
|| If I were a rational person I would start to deal with
the small problems of the project in order to get it prolonged and then go
for the big ones. On the other hand if I were rational I would have gotten
a real job 5 years ago, so I'm going straight for the big bastard problem, i.e.
the global minimum. All help welcome.
As threatened I gave a talk about the
ITKrM (aka nightmare) paper
in Cambridge and I think they plan to put the slides online.
|| Started with the Finnish flu and with the decision to
stay in Innsbruck (sigh there goes TU Munich), meaning that this
is the first but not the last month of the
START-project proper (longer).
It also means that I am again looking for a
postdoc and 1-2
In the meantime, Michi Sandbichler has agreed to temporarily act as my
first minion and we will work on analysis operator learning.
After running the simulations for the nightmare paper I decided
that the corresponding figures would not be enlightening because you
actually can't test the message of the theorems, so here is the
figure-free but didactically
improved submitted version -
code for ITKrM and simulation results on request. Last thing, as
part of our ongoing efforts to save the world
Valeriya Naumova and I
started to learn dictionaries from incomplete data and in
synthetic experiments it even works :D.
|| Last month of the Schroedinger project. Since my brain is
still blocked with the Innsbruck - Munich decision, don't expect
mega scientific progress but there is the chance to see a talk
about the first part of the nightmare paper at the minisymposium
'Learning Subspaces' at
AIP in Helsinki.
If you can't make it to Helsinki, don't despair, you have the chance to see
a similar talk at
|| While Matlab is running simulations to make nice pictures,
I made a more readable version of the nightmare paper.
For all those out there interested in dictionary learning, the
alternative introduction to
dictionary learning, written
for the bulletin of the Austrian Mathematical Society, is now available
and gives an overview of the theory up to 2014.
And finally a note to all associate editors, I consider my review
quota for this year filled, and so until the end of 2015 I will not
feel guilty about declining uninteresting review requests. You might
still get lucky with theory of sparse recovery (compressed-sensing-less)
or theory of dictionary learning.
||Ohoh, I've been lucky again, so now I have to decide between
Innsbruck and TU Munich, that will be a hard one. In any case even thinking
about the decision has to be postponed in order to finish the nightmare paper.
Here is a
first try, but I'm not 100%
convinced by the structure. The inner mathematician
and the inner engineer are fighting over where to put the proofs and whether to
do some work-intensive simulations to include enlightening figures.
||I've been a good girl because I've reviewed 3 conference
and 3 journal papers (don't tell Gitta or I'll immediately
get a new one), because I've written an introduction to dictionary
learning for the journal of the Austrian Mathematical Society
and because I've augmented the web-page with a
Student Area and prepared
part of the seminar. But actually I've been a bad girl because
I only did the good things in order to avoid finishing
the nightmare paper.
||Darkness, depression, makes you miss the cosy wet island.
Luckily there is a light at the end of the constant tunnel,
I decided them all! Finishing the paper will still be postponed
indefinitely, since I have been spammed with reviews.
||Hello world from Innsbruck! I have overcome huge obstacles, i.e.
the instructions of the ZID on how to access my webfolder,
but finally there it is: a homepage at my institution.
I also decided that new institution, new country means new
news section and that the old news will go into an old news section.
Btw, I realise that this is a somewhat lengthy entry but that is to
force the ugly warning flag of the university down.
Good news: the non-tight DL paper was accepted to JMLR,
so soon you can get it for free from there,
If you prefer a mathy style where proofs are not
banished to the appendix go here,
but beware the bugs.
Bad news: Still haven't decided all the constants.