loads the MOJO pipeline from the raw bytes; however, this happens for each thread running in the executor (i.e., each thread representing an executor core). This introduces significant memory and time overhead for larger MOJO models.
Load the MOJO model only once per JVM and share it across multiple executor threads.
If we decide to cache the MOJO, we have to make sure we do not keep it in memory for too long,
and we should also expect that a single Spark job can use multiple MOJOs.
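A minimal sketch of such a per-JVM cache, assuming a hypothetical `loadMojo` loader standing in for the real MOJO runtime call (the actual H2O MOJO API may differ): models are keyed by path, loaded at most once via `computeIfAbsent`, and shared across all executor threads.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class MojoCache {
    // Per-JVM cache: one loaded pipeline per model path, shared by all executor threads.
    private static final Map<String, Object> CACHE = new ConcurrentHashMap<>();
    static final AtomicInteger loads = new AtomicInteger();

    // Hypothetical loader; a real implementation would deserialize the MOJO bytes here.
    static Object loadMojo(String path) {
        loads.incrementAndGet();   // count how many real loads actually happen
        return new Object();       // placeholder for a loaded pipeline
    }

    // computeIfAbsent invokes the loader at most once per key, even when
    // many executor threads request the same model concurrently.
    public static Object getOrLoad(String path) {
        return CACHE.computeIfAbsent(path, MojoCache::loadMojo);
    }

    public static void main(String[] args) throws InterruptedException {
        int threads = 8;
        CountDownLatch start = new CountDownLatch(1);
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                try { start.await(); } catch (InterruptedException e) { return; }
                getOrLoad("model-a.mojo");   // all threads share one instance
                getOrLoad("model-b.mojo");   // a job may use multiple MOJOs
            });
            workers[i].start();
        }
        start.countDown();
        for (Thread t : workers) t.join();
        System.out.println("loads=" + loads.get());   // one load per distinct model
    }
}
```

This sketch does not address the "do not keep it in memory for too long" concern; in practice that would mean adding eviction, e.g. a time-based expiry (such as Guava's `CacheBuilder.expireAfterAccess`) or soft/weak references, so unused MOJOs can be released.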