Although this is not recommended (premature optimization is almost always evil), you could create a static HashMap. The whole Mendix framework runs in a single JVM, so if you create a static HashMap it will stay in memory between requests.
If your Java action takes this long just to build the HashMap, you should ask yourself whether retrieving all tasks up front is the right approach for your situation.
Especially with static HashMaps, this can create a lot of new and complex problems. What happens when one user creates a new task? The id of that task has to be added to the map as well. And what happens when two users add a task at the same time? A HashMap isn't synchronized, so when two threads modify the map at the same time, changes can be lost.
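If you did go down that road anyway, the map would at least need to be thread-safe. A minimal sketch, assuming you key the tasks by some task number (the class and member names here are just placeholders, not part of any Mendix API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import com.mendix.systemwideinterfaces.core.IMendixIdentifier;

public class TaskCache {
    // ConcurrentHashMap instead of HashMap, so concurrent reads/writes
    // from different user requests don't lose updates.
    private static final Map<String, IMendixIdentifier> TASKS_BY_NUMBER =
            new ConcurrentHashMap<>();

    public static void put(String taskNumber, IMendixIdentifier taskId) {
        TASKS_BY_NUMBER.put(taskNumber, taskId);
    }

    public static IMendixIdentifier get(String taskNumber) {
        return TASKS_BY_NUMBER.get(taskNumber);
    }
}
```

Even with a thread-safe map you still have to keep it in sync with the database yourself, which is exactly the kind of complexity you probably want to avoid.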
What I suggest is to either accept that it takes this long, or make some drastic changes in your Java: while iterating over your dataset, retrieve the required tasks in batches when you need them instead of loading them all into memory.
If you choose to change your code, the structure will probably look something like this:
Retrieve dataset
foreach Row in dataset
    put Row information in cache
    if there are about 100 Rows in cache //the amount depends on the complexity of your objects
        retrieve all required tasks for the cached data
        find or create the objects
        set the references to the tasks
        commit the objects
        clear the cache
    continue with the next row
close the dataset
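In a Mendix Java action that could translate roughly to the sketch below. It is only a sketch: it assumes a 'Module.Task' entity with a 'TaskNumber' attribute, a 'Module.Row' entity with a 'Name' attribute and a reference 'Module.Row_Task', and it builds the XPath by simple string concatenation without escaping. Adapt the entity, member and XPath names to your own domain model and the Core API version you are on.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import com.mendix.core.Core;
import com.mendix.systemwideinterfaces.core.IContext;
import com.mendix.systemwideinterfaces.core.IMendixObject;

public class BatchImporter {
    private static final int BATCH_SIZE = 100; // depends on the complexity of your objects

    public void importRows(IContext context, List<Map<String, Object>> dataset) throws Exception {
        List<Map<String, Object>> cache = new ArrayList<>();
        for (Map<String, Object> row : dataset) {
            cache.add(row);
            if (cache.size() >= BATCH_SIZE) {
                processBatch(context, cache);
                cache.clear();
            }
        }
        if (!cache.isEmpty()) {
            processBatch(context, cache); // remaining rows
        }
    }

    private void processBatch(IContext context, List<Map<String, Object>> cache) throws Exception {
        // Build one XPath that retrieves only the tasks needed for this batch.
        StringBuilder xpath = new StringBuilder("//Module.Task[");
        for (int i = 0; i < cache.size(); i++) {
            if (i > 0) xpath.append(" or ");
            xpath.append("TaskNumber = '").append(cache.get(i).get("TaskNumber")).append("'");
        }
        xpath.append("]");

        // Index the retrieved tasks by task number for quick lookup.
        Map<String, IMendixObject> tasksByNumber = new HashMap<>();
        for (IMendixObject task : Core.retrieveXPathQuery(context, xpath.toString())) {
            tasksByNumber.put((String) task.getValue(context, "TaskNumber"), task);
        }

        // Find or create the target objects, set the reference and commit per batch.
        List<IMendixObject> toCommit = new ArrayList<>();
        for (Map<String, Object> row : cache) {
            IMendixObject obj = Core.instantiate(context, "Module.Row");
            obj.setValue(context, "Name", row.get("Name"));
            IMendixObject task = tasksByNumber.get(row.get("TaskNumber"));
            if (task != null) {
                obj.setValue(context, "Module.Row_Task", task.getId());
            }
            toCommit.add(obj);
        }
        Core.commit(context, toCommit);
    }
}
```

This way only one batch of rows and tasks is in memory at a time, and each batch is committed before the next one is read.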