Saving images at high speed


Saving images at high speed

Nico Stuurman-2
Lukas Hille is looking into the issue of saving images fast enough to keep
up with current sCMOS cameras and wrote me the following:

> Until now it is not clear to me where to start debugging/boosting the system.
> On our pco.edge setup the system sometimes struggles with 1 camera at full resolution and speed (2048x2048, 100fps), even without storing the data into any Datastore.
> The return times of the steps from the burstAcquisition are:
> mmc.popNextTaggedImage(): 2-4ms
> mm.data().convertTaggedImage(): 4-16ms
> pipeline.insertImage(): 0ms (into RAMDatastore), 4-8ms (into MultipageTIFFDatastore)
> I tried with fixed metadata (created by builder) and reduced devices,
> without any change.
> The timings seem to scale with different chip sizes (512², 1024², 2048², 4096²). Some things are confusing, for instance the timings also scale with the exposure time (with shorter exposure times, the functions return more quickly). This effect is small and has a limit.
>
> time for popNextTaggedImage()+convertTaggedImage to return:
>
> chip size : return time at min. exposure time (into RAM)
> 512² : 1ms
> 1024²: 4ms
> 2048²: 11ms
> 4096²: 38ms
>
> I tried 8-bit, 16-bit, and 32-bit; the bit depth doesn't influence the return time. I would expect the metadata to be the same for all of these trials.

Great work!  This suggests that the function call:

rawPixels_ = DirectBuffers.bufferFromArray(tagged.pix);

in org.micromanager.data.internal.DefaultImage is the main culprit (but
it would be very good to confirm this).  This call allocates the
ByteBuffer memory and transfers the data into it.  It is unclear to me
why the 8-, 16-, and 32-bit images behave about the same.  In any case, it
will help tremendously to figure out the real culprit here.  If
allocation is an issue, MM could pre-allocate multiple buffers (painful,
but doable).
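
To illustrate the pre-allocation idea, here is a minimal (Beanshell/Java) sketch of a small pool of reusable direct buffers; the pool size, the byte[] stand-in for tagged.pix, and the recycling scheme are all made up for the example and are not existing Micro-Manager code:

import java.nio.ByteBuffer;
import java.util.ArrayDeque;

// Hypothetical pool: pre-allocate a handful of direct buffers once, up front.
int poolSize = 8;
int imageBytes = 2048 * 2048 * 2;            // one 16-bit 2k x 2k frame
ArrayDeque pool = new ArrayDeque();
for (int i = 0; i < poolSize; i++) {
     pool.add(ByteBuffer.allocateDirect(imageBytes));
}

// Per frame: borrow a buffer, copy the pixels in, return it when the writer is done.
byte[] pixels = new byte[imageBytes];        // stand-in for tagged.pix
ByteBuffer buf = (ByteBuffer) pool.poll();   // null here would mean all buffers are in use
buf.clear();
buf.put(pixels);
// ... hand buf to the datastore / file writer ...
pool.add(buf);                               // recycle instead of re-allocating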

I played a bit with the following (Beanshell/Java) script:

import java.nio.ByteBuffer;

ByteBuffer bb = ByteBuffer.allocateDirect(2048 * 2048);
byte[] bytes = new byte[bb.capacity()];

int runs = 1000;
long start = System.nanoTime();
for (int i = 0; i < runs; i++) {
     bb.clear();
     bb.put(bytes);
}
long time = System.nanoTime() - start;
mm.scripter().message("Average time to copy 4 MB was " + (time / runs / 1000) + "us");

On my system, copying the data takes less than 1 ms; however, when I add
allocation into the loop, the average time goes up to about 4 ms.
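
For comparison, the allocate-inside-the-loop variant would look roughly like this (a sketch of the idea, not the exact script behind the 4 ms number):

import java.nio.ByteBuffer;

byte[] bytes = new byte[2048 * 2048];
int runs = 1000;
long start = System.nanoTime();
for (int i = 0; i < runs; i++) {
     // allocate a fresh direct buffer every iteration, then copy into it
     ByteBuffer bb = ByteBuffer.allocateDirect(bytes.length);
     bb.put(bytes);
}
long time = System.nanoTime() - start;
mm.scripter().message("Average time to allocate and copy 4 MB was " + (time / runs / 1000) + "us");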

I hope that you will be able to pinpoint the time-consuming operations
even better, and hopefully we will find ways to work around them.

Best,


Nico



Re: Saving images at high speed

Nico Stuurman-2
Hi all,

Saving data at high speed has been an issue for a while in
Micro-Manager.  Specifically, saving data from now-standard sCMOS cameras
at ~ 2k x 2k x 2 bytes at 100 fps used to work a few years ago in MM
1.4, but has not worked in MM 2.0 for quite some time. While sheltering
in place, I finally had time to look into the bottlenecks, remove them,
and can now save such a data stream to the very fast internal hard drive
of my laptop.

For those interested in the technical changes, look here:
https://github.com/micro-manager/micro-manager/pull/735

Updated code is in the latest MM 2.0-gamma nightly builds:
https://micro-manager.org/wiki/Download_Micro-Manager_Latest_Release


Best,


Nico



Re: Saving images at high speed

PAVAK SHAH-4
Hi Nico,

This is great news! I'm hoping to take advantage of this once the diSPIM plugin in 2.0g is a bit further along. 

I've noticed some behavior in 1.4 that I know is probably not worth resolving with 2.0 as the future, but I was wondering whether you have insight into it, and whether the underlying bottleneck in 2.0 may have been the same (and potentially resolved by these new optimizations).

When acquiring relatively small frames (say 400x400 px) at high effective framerates (280 fps in my case: 4 volumes/s, 35 z positions x 2 cameras per volume), 1.4 struggles to empty the sequence buffer, whether acquiring to a fast NVMe drive, to a 5x SATA SSD RAID0, or to memory.

Initially the buffer is kept empty, but over time performance blips (maybe background system interrupts causing hiccups?) result in the accumulation of 100 or so frames in the buffer. These blips are quickly cleared out, but after a few hundred volumes are acquired the acquisition is no longer able to empty the buffer fast enough, and frames start steadily accumulating in the sequence buffer. If we use a large buffer (say 32 GB), we can finish the acquisition, but it then takes tens of minutes for the buffer to empty out. This behavior seems to be tied to FPS rather than to the average data throughput, as increasing the per-frame size to 4x as many pixels and decreasing the effective FPS by the same ratio resolves it.

In terms of system details, this is a new diSPIM running 2x Hamamatsu Fusions on the 3/9 nightly build of 1.4. The workstation itself is fairly powerful, with 2x 8-core Xeons running at 3.4 GHz.

Do you have any suggestions or should I wait to test more with 2.0g?

Best,
Pavak


Re: Saving images at high speed

Nico Stuurman-2
Hi Pavak,

> This is great news! I'm hoping to take advantage of this once the
> diSPIM plugin in 2.0g is a bit further along.

Good point, and you may want to bring this up with Jon Daniels.  I
ported a then-current version of the diSPIM plugin to 2.0 a while ago,
but developments on both sides (diSPIM 1.4 plugin and MM 2.0) probably
make that port useless, so this will need to be done again.  I'll be
happy to help, but am kind of waiting for Jon to take the lead on that.

> I've noticed some behavior in 1.4 that, I know is probably not worth
> resolving with 2.0 as the future, but was wondering if you have
> insight into and whether the underlying bottleneck in 2.0 may have
> been the same (and potentially resolved with these new optimizations).
>
> When acquiring relatively small frames (say 400x400 px) at high
> effective framerates (280 fps in my case, 4 volumes / s, 35 z
> positions x 2 cameras per volume), 1.4 struggles to empty the sequence
> buffer when acquiring to a fast NVMe drive, to a 5x SATA SSD RAID0, or
> to memory.
>
> Initially the buffer is kept empty, but over time performance blips
> (maybe background system interrupts causing hiccups?) result in the
> accumulation of 100 or so frames in the buffer. These blips are
> quickly cleared out, but after a few hundred volumes are acquired the
> acquisition is not able to keep the buffer emptying fast enough and
> frames start steadily accumulating in the sequence buffer. If we use a
> large buffer (say 32 GB), we can finish the acquisition but it takes
> tens of minutes for the buffer to then empty out. This seems to be
> behavior tied to FPS rather than the average data throughput, as
> increasing the per frame size by 4x as many pixels and decreasing the
> effective FPS by the same ratio resolves this behavior. In terms of
> system details, this is a new diSPIM running 2x Hamamatsu Fusions on
> the 3/9 nightly build of 1.4. The workstation itself is fairly
> powerful with 2x 8-core xeon's running at 3.4 GHz.

I am afraid that the behavior on 2.0 will be similar.  The blips you are
seeing are most likely caused by the Java garbage collector, which has a
lot of work to do, because lots of objects are created and destroyed.
This is in large part due to the image metadata.  It will be a useful
new project to verify that image metadata processing and saving are
indeed a bottleneck for the type of datasets you describe, and to study
how to reduce that bottleneck.  My first inclination, to discard per-image
metadata after the first image, will not work for your situation, since
you will at least need the metadata tag indicating which camera the image
came from. I bought the O'Reilly book on "Java Performance", so that is
a start ;)
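
One quick way to check how much work the collector is doing during an acquisition is to poll the GC management beans from the Script Panel before and after a run and compare the numbers (a small sketch; print() is the Beanshell print command, and the bean names depend on which collector the JVM picked):

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Report, per collector, how often it ran and how much total time it spent.
for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
     print(gc.getName() + ": " + gc.getCollectionCount() + " collections, "
           + gc.getCollectionTime() + " ms total");
}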


Best,

Nico



Re: Saving images at high speed

JonD
Administrator
Nico Stuurman-2 wrote
>> This is great news! I'm hoping to take advantage of this once the
>> diSPIM plugin in 2.0g is a bit further along.
>
> Good point, and you may want to bring this up with Jon Daniels.  I
> ported a then current version of the diSPIM plugin to 2.0 a while ago,
> but developments on both sides (diSPIM 1.4 plugin and MM 2.0) probably
> make that port useless, so this will need to be done again.  I'll be
> happy to help, but kind of waiting for Jon to take the lead on that.

Yes it's on our to-do list but not at the top.  Once 2.0 is "officialized"
there will be more of a push.  The good news is that my new-ish colleague
Brandon likely will be the main person working on this, and he's a better
programmer than I am ;-)



Nico Stuurman-2 wrote

>
> <snip>
>
> I am afraid that the behavior on 2.0 will be similar.  The blips you are
> seeing are most likely caused by the Java garbage collector, which has a
> lot of work to do, because lots of objects are created and destroyed.
> This is in large part due to the image metadata.  It will be a useful
> new project to verify that image metadata processing and saving are
> indeed a bottleneck for the type of datasets you describe, and to study
> how to reduce that bottleneck.  My first inclination, to discard per-image
> metadata after the first image, will not work for your situation, since
> you will at least need the metadata tag indicating which camera the image
> came from. I bought the O'Reilly book on "Java Performance", so that is
> a start ;)

I'm not very familiar with the inner workings of metadata, but here is a
brainstorm from an "armchair quarterback": maybe there could be a few
different categories of metadata, so that some metadata (e.g. properties of
all the devices) doesn't need to be attached to every single image, while other
metadata tags do get attached.  Basically, any metadata that hasn't changed
only needs to be written once.  If you want to go crazy, there could be
metadata that lives at different levels: one set for the entire acquisition,
another set for the timepoint, another set for the position, and so on, down
to a set of tags that is present (and generally unique) for each individual
image.
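
As a rough illustration of that layered idea (nothing like this exists in Micro-Manager today; the tag names are only illustrative and the lookup helper is invented for the example), a per-image lookup could simply fall back to the enclosing levels:

import java.util.HashMap;
import java.util.Map;

// One map per level; higher levels would be written to disk only once.
Map acquisitionTags = new HashMap();
acquisitionTags.put("PixelSizeUm", "0.108");     // constant for the whole acquisition

Map timepointTags = new HashMap();
timepointTags.put("ElapsedTime-ms", "1250.0");   // constant within one timepoint

Map imageTags = new HashMap();
imageTags.put("Camera", "HamamatsuCam-1");       // unique to this image

// Look a tag up per-image first, then fall back to the outer levels.
String lookup(String key, Map image, Map timepoint, Map acq) {
     if (image.containsKey(key)) return (String) image.get(key);
     if (timepoint.containsKey(key)) return (String) timepoint.get(key);
     return (String) acq.get(key);
}

print(lookup("Camera", imageTags, timepointTags, acquisitionTags));       // per-image value
print(lookup("PixelSizeUm", imageTags, timepointTags, acquisitionTags));  // inherited value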

Jon




Re: Saving images at high speed

PAVAK SHAH-3
Thanks guys!

Is this metadata management handled in the low level acquisition code or at the MDA / diSPIM plugin level?

Pavak


Re: Saving images at high speed

JonD
Administrator
PAVAK SHAH-3 wrote
> Is this metadata management handled in the low level acquisition code or
> at
> the MDA / diSPIM plugin level?

The diSPIM plugin reads the camera label metadata "tag" on incoming images
from MMCore.  The plugin adds "tags" like the channel index, frame index,
etc., based on how many images it has seen from that camera, then hands the
tagged images off to Micro-Manager's built-in file saving code.  There
are also some acquisition-wide metadata tags that the diSPIM plugin adds.
I'm not familiar with how the metadata comes to be in the MMCore layer, nor
with how it is written to disk.
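
In Beanshell terms, that tagging step looks roughly like the sketch below (not the actual plugin code; the index variables are placeholders, and the exact tag key spellings should be checked against MDUtils):

import mmcorej.TaggedImage;

// Pull the next image off the circular buffer and read the camera label the Core attached.
TaggedImage tagged = mmc.popNextTaggedImage();
String camera = tagged.tags.getString("Camera");

// Indices the plugin keeps track of per camera (placeholders here).
int frame = 0;
int slice = 0;
tagged.tags.put("ChannelIndex", 0);
tagged.tags.put("FrameIndex", frame);
tagged.tags.put("SliceIndex", slice);
// ... then hand `tagged` off to the file saving code ...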

Jon




Re: Saving images at high speed

Nico Stuurman-2
In reply to this post by PAVAK SHAH-3
On 4/8/2020 2:54 PM, PAVAK SHAH wrote:
> Is this metadata management handled in the low level acquisition code
> or at the MDA / diSPIM plugin level?

Both.  At the lower level, tags are added by the camera adapter and the
circular buffer, and are supplemented by everything the Core knows about the
system (i.e., the complete system state cache is added). The metadata are
then transferred to the Java layer, where some normalization happens
and a few things are added.

For fast acquisitions, it would clearly be beneficial for the lower
layers to send the full system state only with the first image, and from
then on only the things that changed since the previous image.  This is not
a trivial undertaking, as the upper layers also need ways to
understand this alternate organization of metadata, and we want to stay
backward compatible.  Definitely something to look at, but there is
quite a bit to deal with already:

https://github.com/micro-manager/micro-manager/issues
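
Just to make the "delta" idea above concrete, the sketch below diffs two property maps and keeps only the changed entries (the maps are stand-ins for the system state cache; none of this is existing Core or Java-layer API):

import java.util.HashMap;
import java.util.Map;

// State as it was sent with the previous image.
Map previous = new HashMap();
previous.put("Camera-Exposure", "10.0");
previous.put("Camera-Binning", "1");

// State at the time of the current image.
Map current = new HashMap();
current.put("Camera-Exposure", "5.0");    // changed
current.put("Camera-Binning", "1");       // unchanged

// Only the changed entries would travel with the image.
Map delta = new HashMap();
for (Object key : current.keySet()) {
     Object value = current.get(key);
     if (!value.equals(previous.get(key))) {
          delta.put(key, value);
     }
}
print(delta);   // {Camera-Exposure=5.0}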

Best,

Nico






Re: Saving images at high speed

PAVAK SHAH-3
Thanks Nico, 

I understand completely and appreciate the explanation. It's helpful to have an idea of the scope of the endeavor and what it may take to make it possible.

Pavak


Re: Saving images at high speed

JonD
Administrator
PAVAK SHAH-3 wrote
> I understand completely and appreciate the explanation. It's helpful to
> have an idea of the scope of the endeavor and what it may take to make it
> possible.

We "just" have to clone Mark and Nico and secure funding for said clones ;-)
Or maybe suitable clones can be found somewhere if funding became available.




Re: Saving images at high speed

PAVAK SHAH-3
This conversation led me to do a little bit of low-stakes testing. It turns out that by default, the JRE on my system selects the parallel GC (it seems this is an automated choice based on core count and OS). The parallel GC supposedly supports the maximum throughput in memory turnover, but can cause long-ish stop-the-world pauses while it runs. The "blips" I observed during high-framerate acquisitions were probably these events.

Switching explicitly to a low-latency GC when launching the JVM, specifically the Concurrent Mark Sweep collector, completely eliminated this behavior for me with small frames. With large frames (2100 columns x 400 rows), acquiring at the same effective 280 fps does cause some frames to pile up in the sequence buffer, but far more slowly than with the parallel GC, and a 3,600-volume acquisition at 4 volumes per second (x 2 cameras x 35 planes per volume) can be acquired with ~5 GB of sequence buffer without completely filling it up.

It seems it should be possible to further tune the parallel GC, for example by increasing the number of threads it's allowed to use (normally this is a fixed fraction of the total available) and by setting a maximum pause time. Based on my ~3-hour-old understanding, the main advantage of parallel collection is more efficient heap utilization, but with memory as cheap as it is these days, maybe that's not so essential for folks who are really trying to push the limits of acquisition rates. Time permitting, I'll play around a bit and report back if any findings become more definitive.
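
For reference, the switch can be made with standard HotSpot flags along these lines (the -Xmx value and the ij.jar launch line are placeholders; on a stock MM 1.4 Windows install the equivalent options live in ImageJ.cfg, so adjust for your launcher):

java -Xmx4000m -XX:+UseConcMarkSweepGC -cp ij.jar ij.ImageJ

and, to instead stay with the parallel GC but tune it:

java -Xmx4000m -XX:+UseParallelGC -XX:ParallelGCThreads=16 -XX:MaxGCPauseMillis=50 -cp ij.jar ij.ImageJ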

I did start getting a few of these errors during the acquisition, but I'm not positive yet that they're completely related to the choice of GC:

2020-04-09T10:09:34.628451 tid10392 [IFO,App] Error: Error reading image metadata from file
2020-04-09T10:09:34.628451 tid10392 [IFO,App]
                                    [       ] org.micromanager.utils.MMScriptException: MMScript error: Can't figure out pixel type in Thread[AWT-EventQueue-0,6,main]
                                    [       ]   at org.micromanager.utils.MDUtils.getPixelType(MDUtils.java:240)
                                    [       ]   at org.micromanager.utils.MDUtils.isGRAY8(MDUtils.java:349)
                                    [       ]   at org.micromanager.utils.MDUtils.isGRAY(MDUtils.java:389)
                                    [       ]   at org.micromanager.utils.MDUtils.isGRAY(MDUtils.java:397)
                                    [       ]   at org.micromanager.imagedisplay.AcquisitionVirtualStack.getPixels(AcquisitionVirtualStack.java:146)
                                    [       ]   at org.micromanager.imagedisplay.AcquisitionVirtualStack.getProcessor(AcquisitionVirtualStack.java:164)
                                    [       ]   at ij.CompositeImage.updateImage(CompositeImage.java:252)
                                    [       ]   at org.micromanager.imagedisplay.MMCompositeImage.superUpdateImage(MMCompositeImage.java:136)
                                    [       ]   at org.micromanager.imagedisplay.MMCompositeImage.updateAndDraw(MMCompositeImage.java:172)
                                    [       ]   at ij.ImagePlus.updateAndRepaintWindow(ImagePlus.java:316)
                                    [       ]   at ij.ImagePlus.setSlice(ImagePlus.java:1510)
                                    [       ]   at ij.gui.StackWindow.setPosition(StackWindow.java:277)
                                    [       ]   at ij.ImagePlus.setPosition(ImagePlus.java:1377)
                                    [       ]   at org.micromanager.imagedisplay.HyperstackControls.onSetImage(HyperstackControls.java:436)
                                    [       ]   at sun.reflect.GeneratedMethodAccessor31.invoke(Unknown Source)
                                    [       ]   at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
                                    [       ]   at java.lang.reflect.Method.invoke(Unknown Source)
                                    [       ]   at com.google.common.eventbus.EventSubscriber.handleEvent(EventSubscriber.java:74)
                                    [       ]   at com.google.common.eventbus.SynchronizedEventSubscriber.handleEvent(SynchronizedEventSubscriber.java:47)
                                    [       ]   at com.google.common.eventbus.EventBus.dispatch(EventBus.java:322)
                                    [       ]   at com.google.common.eventbus.EventBus.dispatchQueuedEvents(EventBus.java:304)
                                    [       ]   at com.google.common.eventbus.EventBus.post(EventBus.java:275)
                                    [       ]   at org.micromanager.imagedisplay.VirtualAcquisitionDisplay.doShowImage(VirtualAcquisitionDisplay.java:724)
                                    [       ]   at org.micromanager.imagedisplay.VirtualAcquisitionDisplay.access$700(VirtualAcquisitionDisplay.java:87)
                                    [       ]   at org.micromanager.imagedisplay.VirtualAcquisitionDisplay$2.run(VirtualAcquisitionDisplay.java:695)
                                    [       ]   at java.awt.event.InvocationEvent.dispatch(Unknown Source)
                                    [       ]   at java.awt.EventQueue.dispatchEventImpl(Unknown Source)
                                    [       ]   at java.awt.EventQueue.access$500(Unknown Source)
                                    [       ]   at java.awt.EventQueue$3.run(Unknown Source)
                                    [       ]   at java.awt.EventQueue$3.run(Unknown Source)
                                    [       ]   at java.security.AccessController.doPrivileged(Native Method)
                                    [       ]   at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(Unknown Source)
                                    [       ]   at java.awt.EventQueue.dispatchEvent(Unknown Source)
                                    [       ]   at java.awt.EventDispatchThread.pumpOneEventForFilters(Unknown Source)
                                    [       ]   at java.awt.EventDispatchThread.pumpEventsForFilter(Unknown Source)
                                    [       ]   at java.awt.EventDispatchThread.pumpEventsForHierarchy(Unknown Source)
                                    [       ]   at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
                                    [       ]   at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
                                    [       ]   at java.awt.EventDispatchThread.run(Unknown Source)  

Best,
Pavak



--
Pavak K Shah
Assistant Professor
Molecular, Cell and Developmental Biology
Institute for Quantitative and Computational Biosciences
5000C Terasaki Life Sciences Building
University of California, Los Angeles




Re: Saving images at high speed

Nico Stuurman-2
Hi Pavak,

> This conversation led me to do a little bit of low stakes testing.
> Turns out that by default, the JRE on my system selects the parallel
> GC (it seems that this is an automated choice based on core counts and
> OS). The parallel GC supposedly supports the maximum throughput in
> memory turnover, but can cause long-ish stop-the-world pauses during
> this. The "blips" I observed during high framerate acquisitions were
> probably these events.

Great detective work!  I have not yet gotten to the GC chapter in my book, but
clearly experimenting is always the best approach; please keep us
updated with your findings.

The errors you see are likely related to timing issues between different
threads.  One of the main reasons for me to switch to 2.0 was the
display and the horrible issues in the display code. Mark refactored things
very nicely and managed to isolate the ImageJ drawing code (interfacing
with it well is difficult). Those issues will persist in 1.4,
but if you see them in 2.0, please let us know how to reproduce them.

It also looks like you are testing with real hardware (rare for people to
still have access!).  I am testing using the DemoCamera.  First
configure the camera how you want it and snap an image.  Then set the
"FastImage" property to 1.  The camera will now re-use the image it has
in an internal buffer, hence no CPU cycles are spent generating the image.
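
A minimal Script Panel sketch of that benchmark setup (assuming the demo configuration's device label "Camera"; the sequence length and the drain loop are just examples):

// Configure the camera in the GUI, then:
mmc.snapImage();                                // generate the image the camera will re-use
mmc.setProperty("Camera", "FastImage", "1");    // demo camera now re-serves its internal buffer
mmc.startSequenceAcquisition(1000, 0, true);    // stream 1000 frames without image-generation cost
while (mmc.isSequenceRunning() || mmc.getRemainingImageCount() > 0) {
     if (mmc.getRemainingImageCount() > 0) {
          mmc.popNextTaggedImage();             // drain the circular buffer as fast as possible
     }
}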


Best,


Nico





Re: Saving images at high speed

PAVAK SHAH-3
Hi Nico,

Once Jon and Brandon manage to work down their queue to the 2.0 port I'll definitely see if this issue persists. It'll be particularly important since some of the updates they're working on for diSPIM control will allow us to almost double volumetric acquisition rates.

I run a test or two whenever I have to pop into the lab to check on our LN2 supply and freezers, plus I managed to get remote access to our scope workstations set up (although I usually keep most of the hardware powered down), so I can continue some testing. Thanks for the DemoCamera tip; that'll be handy for testing more GC parameters from home.

Best,
Pavak



--
Pavak K Shah
Assistant Professor
Molecular, Cell and Developmental Biology
Institute for Quantitative and Computational Biosciences
5000C Terasaki Life Sciences Building
University of California, Los Angeles


