Solved: ImageJ / Fiji error: “IllegalArgumentException: adding a container to a container on a different GraphicsDevice”

For my master’s thesis I am experimenting with ImageJ / Fiji. While working with some image registration algorithms I ran into a strange bug I couldn’t explain:

Workaround / Fix:

If you encounter this bug, make sure all ImageJ / Fiji windows are on the same monitor. The bug only occurred for me when the Fiji toolbar window was on my main monitor and the plugin windows (Image Sequence loader and Linear Stack Alignment with SIFT) were on my secondary screen.


If you want to reproduce this issue, use a system with the above-mentioned specs (see stacktrace).

  1. Open Fiji on the main display
  2. Use “File / Import / Image Sequence…” to load at least two images
  3. Execute “Plugins / Registration / Linear Stack Alignment with SIFT” on the images
  4. Once the registration has finished, move all Fiji windows except the Fiji toolbar to a second display
  5. Close all the Fiji windows but keep the Fiji toolbar open
  6. Repeat steps 2 and 3
  7. The exception gets thrown


Related bug in the ImageJ forum

Bug on the Oracle bug tracker


Didaktics of media: Learning Environment – Recap

This will be the last article in this category, as the course nears its end. I will give a short recap of my experience in this course and how it changed my learning behavior. This is the logical follow-up to my post Learning Environment: Status Quo.


Experience in this course

This course differs from all my other courses because it is the first course I took outside my “normal” curriculum. As a result, all my fellow students had a different background from mine, probably already knew the professor, etc.

Yet the course was one of the best I had at university. I think I learned a lot in this course and got in contact with, and gained insight into, many topics I probably would never have deemed worth investigating. Although the professor is strict and the workload is undeniable, I am happy I chose this course.


In the mentioned article I listed five methods I use when learning:

  1. Information consumption
  2. Training
  3. Discussion
  4. Information curation
  5. Reflection

1 – Information consumption

I still rely heavily on my ability to consume and process a huge amount of information. So no changes here.

2 – Training

As I mentioned in the linked post, doing is better than listening. During the course I didn’t do that much training (of any kind, not even my martial art), so I hesitate to write something about this part. The only thing I can think of is the third learning unit (the MOOC), where I had a lot of hands-on work to do (see my result here: Instagram).

3 – Discussion

As in the section before, not that much changed here: I went to some meetups and also had good, high-quality discussions with colleagues. But nothing changed drastically through participating in the course.

4 – Information curation

Not that much changed here either: only 26 upvotes on good questions or answers, for example.

5 – Reflection

Of the five sections, the “Reflection” section is the one with the most changes, I would think. I wrote 17 posts for this course alone, that’s more than for all other topics in that time span. It feels good to recapitulate the learned topics and to put whole sentences on the blog.


The tools section changed as well. For example, I reduced my Feedly and Facebook usage to 30 minutes a day (using StayFocusd for Chrome). The other mentioned tools didn’t change that much. I blogged a little more than usual (which is good) and used Stackoverflow and Evernote a bit less.

Something I did add to my toolbox (based on this course) is Pluralsight, a very good training site. I can really recommend it warmly. Something else I have used since the course is DuoLingo, mostly on my mobile phone. Currently I am learning Swedish with DuoLingo and binge-watching everything actor-model related on Pluralsight.

Didaktics of media: Learning unit four – Report

This blog post is one part of the series “learning diary” for a course at my university.

The goal of this report is to take and rate an online learning unit. For the fourth and final learning unit we were to take a mobile learning unit. I used the app Learning Japanese (Google Play Store) on my mobile phone. As an experiment we were asked to create a video for the review:

Short synopsis of the unit:

Learning Japanese is language learning software for the Android platform. It consists of three main components: Kana learning (Kana being the two basic alphabets of the Japanese language), an expression trainer (dates, times and numbers) and a vocabulary trainer.

The app focuses on Kana learning, as all other components build on the ability to read Kana. Based on that skill, the “daily expression” tab provides lists of numbers, dates and times as well as tests for these sections. Somewhat easy to overlook is a small icon in the top right corner of the numbers section, where you can enter numbers that get translated into the corresponding signs (including a sound file).

The third section is reserved for the vocabulary list of the Japanese-Language Proficiency Test (JLPT). This test is similar to TOEFL or IELTS, but for the Japanese language. The vocabulary needed for the five levels of this test is listed in this app and can be trained and tested.

Reflection on your personal learning experience

All in all, I’m disappointed. I started (and stopped…) learning Japanese several years ago and hoped this app could reinforce my small knowledge of the language. Even better, I hoped it could reignite my desire to finally learn the language again.

Unfortunately, it didn’t ignite anything. The app may be suited to learners of Japanese who have had a more recent encounter with the language, but in my opinion it is not helpful for beginners or people looking to refresh their knowledge.

My learning experience with this app (again, basically as a beginner) was frustrating. There is no learning concept at all, just different components, like the syllables to memorize.

Did you achieve the stated learning goals?

No, I did not achieve the stated learning goal. As stated before, my learning experience was not as good as expected. I think I’m not the right target audience for this app. With a hint on the app’s description page, I might have chosen another one.

Did the given time suffice?

There is no statement about how long it takes to memorize all Kana or the provided vocabulary and expressions. Taking into consideration the 90 minutes a mobile learning unit is supposed to take, I would say that the time is not sufficient.

Even if the learner had some training in the past, 90 minutes isn’t enough to memorize even one of the Kana (not to speak of the vocabulary and the expressions). I managed to remember the first set of Hiragana (the first of the Kana) after roughly four hours of use, with mediocre results on the tests.

Comment on the design – what was good, what could be made better?

Two main points regarding the design are worth mentioning when discussing this app: usability and the learning method used.

Starting with usability, I need to add that I’m not a user interface designer or an expert on this topic. Yet, in my opinion, the app lacks usability at certain points:

  • No introduction. It would be nice to have a brief introduction in the app: “What is the intended order of the lessons?”, “What can I do in the different lessons?”, “What is Kana?”
  • No information about location. When the app is started, the first screen is the Gojûon (“Fifty Sounds”, the Kana training area, see Figure 1). Most of the screen is empty, yet the information that you are in the Gojûon chapter is missing (the top would be a good place). This is true for all menu elements. In the actual training screens this is not an issue, as they have a title.
  • Usage: The hamburger menu (Figure 1, top left) is a more or less well-known menu element in many apps. Yet it sits unremarkably on the menu screen, and I didn’t expect it to contain the whole control flow of the app.
  • Another annoying design issue with this app is the need to download every single sound file manually. A sound file for each syllable in the Kana table (Figure 2) gets downloaded on the first tap on that syllable. This is usually fast (less than one second), yet it breaks the “flow”. It would be better to include these files directly in the installation or (if the initial app size is a concern) provide a “download all sound files” button.



Figure 1: Start screen


Figure 2: Kana screen


The second (and more important) design issue of this app, in my opinion, is the learning method used. As I mentioned earlier, I had some training in Japanese several years ago. The learning method in this app is best described as drill: “Look at these 200 foreign characters and remember them”.

The method I learned the Kana with was invented by James W. Heisig (J. W. Heisig / K. Gresbrand – Die Kana lernen und behalten). It ties every single sign (and the modifiers) to a story, which keeps it memorable for longer. Just by looking at the Kana table in this app I remembered some of the stories from that book.

The app probably can’t incorporate this method for intellectual property reasons (which is understandable), but the learning method offered by this app wasn’t helpful for me at all. If the designer of the app had tried to make the learning process a little less tedious (stories, small lessons etc.), maybe I would be more positive about the app.

Positive and negative aspects of the contents of the unit


Positive:

  • Stroke order: One thing this app shines at is the stroke order. For a “good” Kana sign, the strokes should be drawn in a certain order (done this way, some of the smaller hooks of the signs come out more naturally). Most Kana books show the stroke order with small numbers next to each stroke, but the app actually draws the sign.
  • JLPT vocabulary: For every learner of Japanese who wants a certificate of their language skills, the vocabulary list is very important (grammar is in general easy in Japanese, so vocabulary is slightly more important). The app provides a solid test tool (random tests, reappearance of wrong vocabulary, multiple choice for English to Japanese, a solution to every answer), so for more proficient learners this app is a good choice.


Negative:

  • Usability/design:
    • No introduction
    • General control flow in the menus
    • The need to download every single sound file manually – very annoying
  • Almost unusable with no (or very little) prior knowledge of Japanese


Grade the course on a scale of 0-10.

All in all, I would grade this app with a 5 out of 10. Most of the positive and negative points have already been listed in the previous chapter, but I really want to stress the learning method for Kana (see “Comment on the design”). The method used in this app wasn’t helpful at all for me to learn the Kana. I’m pretty sure one can learn the Kana this way, but it’s harder than necessary.

Other comments

The app got a bad grade from me, mainly because I’m not the right target audience. Someone with more prior knowledge of Japanese might give it a better grade. Also, the fact that this app is free should be taken into consideration, as other sources for learning Japanese, like books (the one mentioned above, for example), aren’t free.

Didaktics of media: Storytelling

This blog post is one part of the series “learning diary” for a course at my university.


Based on the title of this unit, I assume we will learn something about storytelling in the context of didactics.

What have I learned:

People remember stories. If information is wrapped up in a story, the chance increases that one remembers the information as well.

A story consists of a setting, a plot, characters, a narrator with a point of view, (words, pictures, sounds) and, most importantly, listeners or readers.

Scheherazade from 1001 Nights is one of the earliest cliffhanger providers.

Wandering storytellers

  • Minstrels, jesters (like street musicians)
  • Troubadours (for those higher in the hierarchy)
  • Minnesingers (in Germany, especially Walther von der Vogelweide)


  • Secular song books. They not only told stories and songs but wrote them down.

Icelandic Kvöldvaka

Because of the long winters, all family members would gather in one room and one person would be chosen to read a storybook to everyone. Usually a child or a guest did the reading.

There is more:

Songs, Music, Poems, Theater, Dance, Political satire


  • Story or part of story
  • Narrator and points of view
  • Time flow
  • fictional or true story
  • structure
    • Story / plot / figures
    • Most important: reception – what happens in the listener
    • Representation

Digital story telling

  • Point of view
  • a dramatic question
  • emotional content
  • the gift of your voice
  • the power of the soundtrack
  • economy
  • pacing

Green Eggs and Ham by Dr. Seuss only has 50 words!

Non linear structure

  • Nodes and branches
  • attributes for the branches
  • circles
  • multiple stories


Elixir for aspiring Erlang developers

For the “Independent Coursework” of the University of Applied Sciences Berlin I created the following presentation:


The target audience is students of the university’s Computer Science bachelor’s degree programme.

If anything is unclear (or, god forbid, wrong), drop me a mail or a tweet.

Didaktics of media: Learning unit three – Report

This blog post is one part of the series “learning diary” for a course at my university.

The goal of this report is to take and rate an online learning unit. For the third learning unit we were to take either a webinar or a MOOC. I participated in the course Design 1o1 Redux – Part 2: My House on the Iversity platform.

Short synopsis of the unit:

The course “Design 1o1 Redux – Part 2: My House” was the second of three courses on the Iversity platform created by the Design 1o1 community. The target group was everyone interested in learning some basic design attitudes.

For eight weeks, an email with a task was sent to all participants every day (I only attended one week). The mail (and the corresponding page on the Iversity platform) consisted of a short video (~15 seconds) with abstract content. The content (mashups of drawings and images) was accompanied by a short introductory text.

The task for each day was written in a *.pdf file of four to six pages. In addition to the task, the file included background information, learning goals, reasoning (“Why do we do this?”) and questions to spark further interest.

All tasks were related to “My House” and, in the one week I attended, to the topic “My Room”. The overall goal of that week was to create some pictures in a Minecraft-like world and upload these pictures to Instagram (see them here).

After the first week, all participants were asked to grade seven fellow participants based on six categories with three subcategories each. The grading criteria had been predefined. Two weeks after the start of the course, every grading participant received their own grading.

My received grading

Reflection on your personal learning experience

I chose the course to learn something I didn’t have the faintest clue about. I had had no contact with any design-related topics besides software design. Therefore, I was curious what I would have to do as well as what I would learn.

The first task was trivial (installing a Minecraft-like game, finding a good place for our room and making three screenshots to upload). Yet even in this trivial task I learned to transport ideas to other environments. It took me a while to find a “good” place for my room, mostly because I didn’t know what I was looking for until I found it: my perfect place needed to be at the shore, with a village in sight.

I learned to transport my room into another world. This world didn’t have everything I needed, but I found the abstract concept I was looking for.

The second task was to measure our own room and transport it into the Minecraft-like world. The participants were asked to take one of the blocks from the world and define a measurement for all dimensions. We were also asked to make an image with our room’s overall proportions. I created that one in Excel, using different border styles to create a folding instruction of my room.

I learned about myself that I’m missing some key skills in image manipulation. And that I’m very used to Excel. Also, choosing the default measurement of Minecraft (1m x 1m x 1m) wasn’t a good idea.

For the third task we created a “colour palette” of all blocks we wanted to use in the construction of our room. This palette was then to be used to create the boundaries or walls of the room.

I learned that I had no clue about colours and how they work together.

The fourth task was an abstraction task. The participants were tasked to take an object from their room (a lamp, a chair, etc.) and abstract this object in the Minecraft-like world, honouring the measurements from the second task and using no material other than the palette from the third task.

I learned to abstract my room… Given my colour palette and the measurements, my abstractions needed to be… very abstract. Two blocks were my desk; another block my PC…
My room, with abstract desks, a couch and my PC on the desk

The final task was to “step into the Minecraft-like room”. With some kind of image editing we were tasked to integrate ourselves into our newly created room. I replaced the Minecraft arm one can always see in the game with a picture of my own arm.

I learned how to make my Minecraft world more “real” by replacing the hand. That way the real world and the Minecraft world were connected in some way.

After one week of this course I was asked to rate seven other participants. After seeing the other contributions, I found out I really am an uncreative guy. There were extremely creative images among the submissions, both from a technical view (how they created the images, the quality of the implementation) and from a conceptual view (interesting ideas, very abstract topics).

I learned that there are many different ways to interpret a task. I saw some good contributions that were fundamentally different from mine, although we all had the same task.

Did you achieve the stated learning goals?

The learning goal was to “Develop some basic design attitudes for better understanding the mechanics of today’s world”. I think this goal (or rather, the achievement of this goal) is very hard to measure. I definitely learned some design ideas. Yet I wouldn’t dare to claim that I now understand the mechanics of today’s world better.

On the other hand, this is the stated learning goal for the whole eight-week course, so maybe I would have achieved it in the next seven weeks. But honestly, I doubt that. The tasks are too abstract for me to connect them to any real-world mechanics.

Nonetheless I learned some things about myself and my abilities. At least that was a good thing.

Did the given time suffice?

The time frame given by the “Didaktics of Media” course for our e-learning assessments is around 90 minutes. I did not manage to finish this course in that time frame.
The assumed workload of the Iversity course “Design 1o1 Redux – Part 2: My House” itself is between four and seven hours. I needed 5 hours and 30 minutes to complete all tasks, plus the time needed to evaluate my peers (an additional 30 minutes). The time assumed by the Iversity course is sufficient to finish it (albeit other participants may have taken longer, judging by their high-quality contributions).

Comment on the design – what was good, what could be made better?

The design of the course was satisfying. The many small tasks and the intermediate feedback (other people in the course “like” your Instagram images) were a nice touch. The tasks build on one another, which motivates you not to skip a task.

The theme was also nicely integrated into all tasks. All tasks revolved around “My Room”, and with the next weeks’ tasks the overall course takes the “My House” topic into consideration.

One of the drawbacks of the design was the *.pdf files. They were too big (or the plugin used to display them was too slow). The site often froze for several seconds when a *.pdf was used.

Positive and negative aspects of the contents of the unit


Positive:

  • The tasks were described in detail (what to do, where to upload, how to tag)
  • The allocated time was sufficient
  • The tasks were very diverse (building, abstraction, image manipulation, drawing…)


Negative:

  • The *.pdf viewer on the platform was slow (maybe the files were too big)
  • The *.pdf files were very cluttered: lots of images, different colours, and the background wasn’t ideal (see next image)
  • The grading was not ideal. The given criteria discriminated against the contributions of the really creative participants
  • The introductory videos had no meaning to me

Cluttered text

Grade the course on a scale of 0-10.

I would grade this course with a seven out of ten. Overall I was satisfied with the course, but I did not reach the stated learning goal (not even 1/8 of it, as I only did one of the eight weeks). Maybe my rating is too harsh in this case, but I expected a bit more.
In the end I used Minecraft as a tool to transport my room into another world. That was not what I expected when I read “Develop some basic design attitudes”.

Other comments

It was the first time I uploaded something to Instagram, but it was a nice way to show everyone your work and to see the work of the other participants.

Update: This was my first post written with Open Live Writer, which I heard about from Scott Hanselman. I needed to fix some small formatting bugs.

Powershell cmdlets with dynamic param AND $args don’t work

Over the weekend I tried to implement auto completion for Elixir’s mix (on Windows). Unfortunately, I didn’t manage it without introducing some problems, so I haven’t committed my changes upstream. Currently I am trying to reach some of the more renowned Elixir/Windows contributors to discuss the changes.


Under normal circumstances I don’t use more mix tasks than test, phoenix.server and release, but sometimes you need that one weird command you just can’t remember. The command mix help is your friend here, as it shows you all available commands (project aware!). Yet I don’t like looking up the documentation if all I need is the spelling. For example, in the beginning I often tried to start the phoenix project with mix phoenix.start (hint: that does not work). I am used to auto completion in my development environments, so I tried to extend mix as well.


As I use PowerShell for all my command-line related tasks, and the default file extension of PowerShell scripts is ps1, my command mix executes the mix.ps1 in the Elixir bin folder.


PowerShell scripts can offer auto completion of parameter values with a so-called [ValidateSet("Param1","Param2",...)] attribute, which enumerates all valid values. Sadly this is of no help if we would have to hard-code the possible values. A possible solution to this problem is the use of a DynamicParam with a dynamic ValidateSet (good resource here). To test my various iterations I wrote down all test cases (sorry, no automated testing yet).
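As a sketch (following the pattern from that resource; the hard-coded task list and all variable names are placeholders, the real script generates the list at runtime), the skeleton of such a dynamic parameter looks roughly like this:

```powershell
# Sketch of a script-level dynamic parameter with a runtime ValidateSet.
[CmdletBinding()]
param()

DynamicParam {
    $ParameterName = 'Task'

    # Collect the attributes for the runtime-defined parameter.
    $AttributeCollection = New-Object System.Collections.ObjectModel.Collection[System.Attribute]

    $ParameterAttribute = New-Object System.Management.Automation.ParameterAttribute
    $ParameterAttribute.Position = 0
    $AttributeCollection.Add($ParameterAttribute)

    # In the real script this set is generated, not hard-coded.
    $arrSet = @('help', 'test', 'deps.get')
    $ValidateSetAttribute = New-Object System.Management.Automation.ValidateSetAttribute($arrSet)
    $AttributeCollection.Add($ValidateSetAttribute)

    $RuntimeParameter = New-Object System.Management.Automation.RuntimeDefinedParameter(
        $ParameterName, [string], $AttributeCollection)
    $RuntimeParameterDictionary = New-Object System.Management.Automation.RuntimeDefinedParameterDictionary
    $RuntimeParameterDictionary.Add($ParameterName, $RuntimeParameter)
    return $RuntimeParameterDictionary
}

process {
    # Dynamic parameters are only available via $PSBoundParameters.
    $Task = $PSBoundParameters[$ParameterName]
    # ... call mix.bat with $Task here ...
}
```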

Iteration 1


If you have a look at the original mix script (here), you can see that it locates the mix.bat, flattens possible array arguments (is this still needed?) and then executes the mix.bat with the newly flattened arguments.

The first problem we see here is the use of the $args array. As Keith Hill points out in this SO comment, the $args array “… contain[s] any argument that doesn’t map to a defined function parameter…”. And that is exactly the problem: DynamicParam ONLY works for defined function parameters.

I copied the linked resource (again, here) and moved the old script into the process block. Because we are creating a script and not a function, the signature function Test-DynamicValidateSet {...} needs to be removed. To generate the ValidateSet I replaced the line $arrSet = ... with
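something along these lines (a hedged reconstruction, assuming mix help --names prints one task name per line; $mixBat is a hypothetical variable holding the path to the mix.bat the script located):

```powershell
# Fill the ValidateSet with all valid mix tasks for the current project.
$arrSet = & $mixBat help --names
```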

This populates $arrSet with all valid tasks. I also changed the value of the variable $ParameterName to 'Task' and renamed the variable $Path to $Task.


A short test shows that the command mix works but the command mix help does not. The reason for that is that we assign the first value to the parameter $Task.

Iteration 2


The call to mix.bat in the last row now gets the $Task parameter as well:
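Sketched with hypothetical names ($mixBat for the located batch file, $newArgs for the flattened argument list from the original script):

```powershell
# Pass the bound task explicitly, followed by the remaining arguments.
& $mixBat $Task $newArgs
```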


mix works, mix help works. Awesome! Let’s try auto completion: mix [tab] …

This is weird. The auto completion takes its time (this is actually the time mix help --names takes to return all valid tasks), yet the auto completion fills in file names from the current folder… To fix that, we need to make clear that our dynamic parameter is actually the first parameter. So after setting $ParameterAttribute.Position = 0 (it was 1), we repeat our tests.

mix works, mix help works, mix [tab] works, mix he[tab] works as well. What about arguments to parameters, like mix help --names?


Iteration 3


OK, we need positional arguments. Let’s add some.
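A sketch of such a param block with six extra positional parameters (the names $p1–$p6 are arbitrary; the dynamic $Task parameter keeps position 0):

```powershell
# Six additional positional parameters after the dynamic task parameter.
param(
    [Parameter(Position = 1)] $p1,
    [Parameter(Position = 2)] $p2,
    [Parameter(Position = 3)] $p3,
    [Parameter(Position = 4)] $p4,
    [Parameter(Position = 5)] $p5,
    [Parameter(Position = 6)] $p6
)
```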

I don’t like that approach, because the script will fail with the aforementioned error message when given more than seven parameters (our dynamic parameter plus $p1 – $p6). This is neither elegant nor good practice.

We also have to forward our new parameters to the mix.bat:
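Roughly like this (again with hypothetical names; unset parameters are $null and are effectively dropped when calling the native mix.bat):

```powershell
# Forward the task plus all positional parameters to mix.bat.
& $mixBat $Task $p1 $p2 $p3 $p4 $p5 $p6
```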

OK, besides the now unused “flatten possible array parameters” logic and our “it will fail with eight or more parameters” problem, how good are we?


All tests in the test cases pass. Yet some problems remain.

Problems with this solution

  1. We can have only a fixed number of parameters. This is not a big problem (as we can add more parameters to the signature), but it is neither elegant nor good practice.
  2. We now completely omit the “flatten array” logic. I have to admit I’m not sure whether this is still needed, so I asked the original contributor of this logic but am still waiting for a response.
  3. Most of the code was copied from our resource. We clearly added some logic of our own, yet we probably shouldn’t use this code without asking for permission. I asked the author if I could use this snippet and am waiting for a response.
  4. Even if I omit the “flatten array” logic, I tripled the lines of code. I don’t know if the auto completion feature is worth this much code (read about code as a liability here).

As soon as problem 3 is clarified, I will upload the file here. As soon as the other problems are clarified (and/or fixed), I will create a pull request on GitHub to upstream the changes.

Didaktics of media: Designing MOOCs

This blog post is one part of the series “learning diary” for a course at my university.


Based on the title of this unit, I assume we will learn something about designing MOOCs. The linked article, on the other hand, features a paper about the Tech Ed Revolution, which “will be the Learner Revolution”. It goes on and on about investors needing to put money into education (either traditional education or start-ups) and how these start-ups could fuel the next learning revolution.

What have I learned:

A MOOC is a massive open online course with thousands of participants.

The two types of MOOCs are

  • cMOOC – cooperative MOOC: MOOCs where all participants have to cooperate and work together.
  • xMOOC – extended MOOC: university classes where non-students can watch the videos as well.

My university is one of the first universities that can self-accredit a course!

Designing MOOC

The main phases of designing a MOOC are conception, production, execution and evaluation.


Why do I want to offer a MOOC? It’s cool, it makes money, I want to transfer knowledge. You need to define goals.

For whom do I create the MOOC? For everyone? For students? For users of a technology? For employees of a company? For a community around a special topic?

What content does my MOOC have? The language is important and influences the target group. The content is important as well. A length of 6–8 weeks is good. What is the structure of the content?

What format does my MOOC have? cMOOC or xMOOC? Is it self-paced or synchronous? Which tools are you going to use for communication? Will there be gamification elements? Is a certificate offered?

Attention: Do you have the rights to ALL the content you use? Images in slides? Videos? The tools needed for doing the tasks?

Under which conditions is my content published? See the CC matrix. Make it clear whether, and in which ways, one may use it.

Costs: The Hasso Plattner Institut plans $50,000 for a MOOC. Simple MOOCs are possible starting at $10,000.


Think about marketing: offer the course on suitable platforms and advertise it early on.

Promo video: declare the content and organisation of the course

Webcam vs. real camera

Postproduction is needed

Should the quizzes be peer graded or automatically graded? You need to create unique questions. Watch out for cultural particularities.



Use an internal or external tool?

Does the MOOC need to be archived?