|| Michi hit the arXiv submit button!
If you can't bear to leave my homepage to go to arXiv,
the analysis operator learning nightmare paper is also available
here and we even
have a toolbox to play around with!
took place on the 10th.
Nobody complained, so I'd say success! More good news: both submitted SPARS
abstracts were accepted, so I'm going to Lisbon
to cheerlead Flavio and
Michi with their posters.
Eventful month because Marie
finished her MSc thesis and so I now have a second PhD student.
As happens in such cases, Michi, the older kid, got
jealous and started to actually write the paper; let's see when we finish.
The nightmare lecture is over. In the end I learned a lot. Other than
that, I can't remember what I did in February. Well, it is a short month;
subtract school holidays and it gets even shorter. Probably I was
tinkering around with theory for the nightmare paper to motivate my
|| Things I never expected to happen, happened.
1) I've almost survived the nightmare lecture - doing much better at the end,
talking about PCA and clustering, i.e. stuff I know something about.
2) The project proposal is finished.
3) After wading knee-deep in bugblood and submitting millions of
jobs to the cluster, Valeriya and I finished the nightmare paper on
dictionary learning from incomplete data,
also featuring a toolbox.
To celebrate, the START project page
is going online!
|| To increase my stress level from unbearable to life-threatening, I wisely
decided to contribute to writing another project proposal. Conclusion:
I shall learn to say no, I shall learn to say no, I shall learn to say no.
|| Marie Pali joined the START project for a 3.5 month internship
to see whether she can bear to do a 3.5 year PhD with us.
She is working on extending recovery results for dictionary learning
to non-homogeneous coefficient models and providing me with chocolate.
And Michi and I are organising the next edition of the Donau-Isar-Inn
Workshop (WDI2). Have a look at the
|| Due to the nightmare lecture, research, paper writing and life in general
have been suspended until further notice... well ok the first two
have been reduced to brainless low level
activities like feeding simulations to the cluster to make pretty pictures
and the third to sleeping.
|| Post-holiday disaster: Michi managed to convince me
that our simple analysis operator learning algorithm can't be made to
work on real data - too unstable. Glimpse of hope, he managed to do
something smart but more complicated to make it work. I think he also
just wants to avoid writing the paper, which is completely understandable.
After going to the Dagstuhl Seminar:
Foundations of Unsupervised Learning
my remaining holiday relaxation was efficiently annihilated
by the realisation that I have one week to prepare another lecture
in the famous series "today I learn, tomorrow I teach".
|| The nightmare paper with
the fastest (working) dictionary learning algorithm in the west has been accepted!
The nightmare paper is dead! Long live the nightmare paper(s)! The only way
progress could be slower would be if I started deleting lines - time for holidays.
To ensure a smooth transition I first
went to the iTWIST workshop
where I was allowed to ramble on for 45 minutes.
|| Fascinating how time flies when you are writing up results, or rather not writing them up.
I am behind all schedules, my excuse is that I was forced to also include a
real data simulation in the nightmare paper.
At this point thanks to Deanna Needell
for sending me fantastic Fabio, to be found on page 17!
|| Praise the lord - Michi has finally acceded to writing up our stuff about analysis
operator learning, so there's hope for a paper once I have done some translation
from Shakespearean to modern English. Actually I'm now in triple paper hell
because Valeriya and I solved our last algorithmic issues and have given each other
deadlines for the various chapters.
|| I'm writing up the dictionary learning with replacement etc. stuff; I'm not enjoying myself; I'm grumpy; I won't make
a longer entry.
|| Flavio Teixeira has joined the START-project! He will solve all my dictionary learning problems
and remind me to water the plant. Also I have created a monster uaahahahaha dictionary
learning algorithm that is fast and does not need to know the sparsity level or the number
of atoms. Needless to say, we will never be able to prove
anything. However the really bad part is that it also does sensible things with image patches,
meaning real data, so there is no way to avoid writing a paper anymore.
|| I corrected and resubmitted the ITKrM (aka nightmare) paper,
which now also features pretty pictures.
According to plan all of my time goes into preparing the lecture - well ok secretly
I'm comparing the ITKrM algorithm to its cheap version which means watching
a lot of jumping points.
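For anyone curious what "watching jumping points" involves: below is a rough, unofficial sketch of one ITKrM-style iteration (thresholding plus signed residual means) in numpy. The function name, the least-squares projection, and all implementation details are my own simplification for illustration, not the released toolbox code or the paper's exact algorithm.

```python
import numpy as np

def itkrm_step(Y, D, S):
    """One simplified ITKrM-style iteration.
    Y: d x N matrix of training signals,
    D: d x K dictionary with unit-norm columns,
    S: sparsity level (atoms selected per signal)."""
    d, N = Y.shape
    K = D.shape[1]
    G = D.T @ Y                          # inner products <d_k, y_n>
    D_new = np.zeros_like(D)
    for n in range(N):
        # thresholding: keep the S atoms with largest |<d_k, y_n>|
        I = np.argsort(-np.abs(G[:, n]))[:S]
        coeffs, *_ = np.linalg.lstsq(D[:, I], Y[:, n], rcond=None)
        proj = D[:, I] @ coeffs          # projection onto span of selected atoms
        for k in I:
            s = np.sign(G[k, n]) if G[k, n] != 0 else 1.0
            # signed residual plus the atom's own contribution
            D_new[:, k] += s * (Y[:, n] - proj) + np.abs(G[k, n]) * D[:, k]
    # renormalise; keep the old atom if no signal selected it
    norms = np.linalg.norm(D_new, axis=0)
    for k in range(K):
        D_new[:, k] = D_new[:, k] / norms[k] if norms[k] > 1e-12 else D[:, k]
    return D_new
```

A "cheap" variant in this spirit would skip the least-squares projection and update atoms from thresholded residuals alone, trading accuracy per iteration for speed.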
|| Went to the
Mathematics of Signal Processing Workshop in Bonn, nice! Then I tried to decide
which of my nicely working algorithms should be turned into a paper or rather
which aspect should go to which paper to stay below 30 pages, and how much math I really need to add as
opposed to how much math I would like to add. Luckily I then found a suitable excuse
not to decide anything in the preparation of the time-frequency lecture :D.
|| Utter frustration: I have tons of algorithms that work
nicely but I can't prove it. Also I'm in panic because
I have to get 6 papers accepted in 5 years to keep my job.
At the current rate I'm not going to make it.
|| Habemus postdocem! If we manage to actually lift
the administrative burden, Flavio Teixeira will join the
START-project in spring. The advertising talk for masked
dictionary learning in Berlin
sold well - too bad the product is not finished yet.
In hindsight the trip counts as pretty disastrous:
I paid the hotel for a single room but, as it turned
out a week later, I was definitely not alone in the
room once you also count the bed bugs.
|| I hate writing papers. It is so much easier to watch the points
in my simulations jump around and improve the algorithms
than to actually hammer out all the nasty little details of why
the masked dictionary learning stuff
converges locally. Obviously, after the happy, enthusiastic phase,
Michi and I are now experiencing some
set-backs with our analysis operator learning -
time to ask Andi to teach us how to use the supercomputer.
Good news is we potentially bagged a researcher at the
medical university, who will give us real world problems
and free coffee! Finally anybody interested in our AOL or masked DL
stuff has the chance to see it/us in
|| Andi Kofler will join the dictionary learning crowd for 4 months with the
goals of a) making dictionary learning faster and b) finding out whether
he wants to get a real job or do a PhD with me. The codes seminar
is organised; luckily, teaching stuff I don't know much about
is getting easier every time I do it. Soon I'll be ready
for PDEs .... hahaha no way.
|| I set a new personal record in the category 'number of means
of transport to arrive at a scientific event', that is, 9 to
cover the distance Innsbruck -
where again I was allowed to entertain people with a talk called
'dictionary learning - fast and dirty'
and on top of that lost my fear of the machine learning community, who
successfully convinced me that I only need to be afraid of the computer vision
community. Then I went on holidays and afterwards into panic about
having to organise the seminar on codes.
|| I managed to write the final report of the Schroedinger project.
Obviously this involved a lot of procrastination. To atone, I thought
I'd turn the LaTeX file into a template. So here is an absolutely not
official LaTeX template for the
FWF final project report (Endbericht),
I cannot disclaim enough, but hope someone will find it useful.
Also I went to a workshop in
where I was allowed to
entertain people with a talk called 'dictionary learning - fast and dirty'
|| If I were a rational person I would start to deal with
the small problems of the project in order to get it prolonged and then go
for the big ones. On the other hand if I were rational I would have gotten
a real job 5 years ago, so I'm going straight for the big bastard problem, i.e.
the global minimum. All help welcome.
As threatened I gave a talk about the
ITKrM (aka nightmare) paper
in Cambridge and I think they plan to put the slides online.
|| The year started with the Finnish flu and with the decision to
stay in Innsbruck (sigh, there goes TU Munich), meaning that this
is the first but not the last month of the
START-project, proper (longer)
It also means that I am again looking for a
postdoc and 1-2
In the meantime Michi Sandbichler has agreed to temporarily act as my
first minion and we will work on analysis operator learning.
After running the simulations for the nightmare paper I decided
that the corresponding figures would not be enlightening because you
actually can't test the message of the theorems, so here is the
figure-free but didactically
improved submitted version -
code for ITKrM and simulation results on request. Last thing: as
part of our ongoing efforts to save the world
Valeriya Naumova and I
started to learn dictionaries from incomplete data and in
synthetic experiments it even works :D.
|| Last month of the Schroedinger project. Since my brain is
still blocked with the Innsbruck - Munich decision don't expect
mega scientific progress but there is the chance to see a talk
about the first part of the nightmare paper at the minisymposium
'Learning Subspaces' at
AIP in Helsinki.
If you can't make it to Helsinki, don't despair, you have the chance to see
a similar talk at
|| While Matlab is running simulations to make nice pictures,
I made a more readable version of the nightmare paper.
For all those out there interested in dictionary learning the
alternative introduction to
dictionary learning, written
for the bulletin of the Austrian Mathematical Society, is now available
and gives an overview of the theory up to 2014.
And finally a note to all associate editors: I consider my review
quota for this year filled, and so until the end of 2015 I will not
feel guilty about declining uninteresting review requests. You might
still get lucky with theory of sparse recovery (compressed-sensing-less)
or theory of dictionary learning.
|| Ohoh, I've been lucky again, so now I have to decide between
Innsbruck and TU Munich; that will be a hard one. In any case, even thinking
about the decision has to be postponed in order to finish the nightmare paper.
Here is a
first try, but I'm not 100%
convinced by the structure. The inner mathematician
and the inner engineer are fighting over where to put the proofs and whether to
do some work-intensive simulations to include enlightening figures.
|| I've been a good girl because I've reviewed 3 conference
and 3 journal papers (don't tell Gitta or I'll immediately
get a new one), because I've written an introduction to dictionary
learning for the journal of the Austrian Mathematical Society
and because I've augmented the web-page with a
Student Area and prepared
part of the seminar. But actually I've been a bad girl because
I only did the good things in order to avoid finishing
the nightmare paper.
|| Darkness, depression - it makes you miss the cosy wet island.
Luckily there is light at the end of the constant tunnel:
I decided them all! Finishing the paper will still be postponed
indefinitely, since I have been spammed with reviews.
|| Hello world from Innsbruck! I have overcome huge obstacles, i.e.
the instructions of the ZID on how to access my webfolder,
but finally there it is: a homepage at my institution.
I also decided that new institution, new country means new
news section and that the old news will go into an old news section.
Btw, I realise that this is a somewhat lengthy entry but that is to
force the ugly warning flag of the university down.
Good news: the non-tight DL paper was accepted to JMLR,
so soon you can get it for free from there.
If you prefer a mathy style where proofs are not
banished to the appendix, go here,
but beware the bugs.
Bad news: Still haven't decided all the constants.