Questions on Hamamatsu Flash4


Questions on Hamamatsu Flash4

julou
Hello,

I’ve two questions related to our Hamamatsu Flash4 camera:

- the device adapter page states "ScanMode handles the readout speed. 1 = slow scan mode. 2 = fast scan mode". However, I can't see a ScanMode property displayed. Is it relevant for this camera? What is the default scan mode for the Flash4 (standard or slow)?
- trying to acquire fast stacks, I realised that I can't even acquire fast time series… Compared to the live-mode rate (very fast, >10 Hz for full frames), running an MDA with 30 ms exposure and 35 ms delay results in much longer illumination (almost half a second, by eye) and hence longer intervals. Where does this difference come from? Is there a simple workaround?

Thanks for your help. Best,
Thomas

--
Thomas Julou  |  Computational & Systems Biology  |  Biozentrum – University of Basel  |  Klingelbergstrasse 50/70 CH-4056 Basel  |  +41 (0)61 267 16 21



Re: Questions on Hamamatsu Flash4

Nico Stuurman-2
Hi Thomas,

> I've two questions related to our Hamamatsu Flash4 camera:
>
> - the device adapter page states "ScanMode handles the readout speed. 1 = slow scan mode. 2 = fast scan mode". However, I can't see a ScanMode property displayed. Is it relevant for this camera?

That property is present with the ORCA 2ER camera (and possibly other CCDs).  I don't think that it has any bearing on the Flash 4.

> - trying to acquire fast stacks, I realised that I can't even acquire fast time series… Compared to the live-mode rate (very fast, >10 Hz for full frames), running an MDA with 30 ms exposure and 35 ms delay results in much longer illumination (almost half a second, by eye) and hence longer intervals. Where does this difference come from? Is there a simple workaround?

That sounds bad (and like a bug in the device adapter/driver). If short exposures are important to you, it is always good to use hardware synchronization (see http://www.jbmethods.org/jbm/article/view/36/29 and https://micro-manager.org/wiki/Hardware-based_synchronization).

Best,

Nico





Re: Questions on Hamamatsu Flash4

Mark Tsuchida-3
In reply to this post by julou
Hi Thomas,

On Sat, Dec 19, 2015 at 1:06 AM, Thomas Julou
<[hidden email]> wrote:
> - trying to acquire fast stacks I realised that I can’t even acquire fast time series… Compared to the live mode rate (very fast >10hz for full frames), running a MDA with 30ms exposure and 35ms delay results in longer illumination (almost 1/2 s by eyes) and hence longer delays. Where does this difference come from? is there a simple workaround?

The difference comes from the fact that with 35 ms interval (30 ms
exposure), µManager is (1) instructing the camera to take a single
snapshot for each time point, which incurs setup/teardown overhead in
the driver and camera, and (2) switching the shutter (illumination)
for each time point (if in autoshutter mode), which has at least some
overhead. Also, aside from the overhead, timing events by software is
generally not accurate enough for interval adjustment at millisecond
resolution.

Performing acquisitions at precise intervals that are not dictated by
the camera (based on exposure and other settings) requires either (a)
a camera that supports adjustable intervals when in streaming mode,
with µManager support for such functionality, or (b) external
triggering of the camera using an appropriate device. Unfortunately
(a) is not available for Hamamatsu cameras in µManager, and neither
(a) nor (b) is supported in the µManager GUI ((b) is on our eventual
roadmap) and therefore would require scripting.

The best workaround I can suggest (if the experiment allows it) is to
set the interval to 0 and use the exposure to adjust the actual
interval (which typically includes readout time, unless using a frame
transfer CCD). The hope is that you can adjust illumination intensity
to a level appropriate for the exposure.
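
For reference, a minimal Beanshell sketch of such a camera-timed burst at the Core level; this is essentially what an MDA burst does, and the frame count and exposure below are placeholders:

   // Camera-timed burst: the camera streams frames back-to-back and we just collect them.
   int nFrames = 100;
   mmc.setExposure(30.0);                        // actual interval ~ exposure + readout
   mmc.startSequenceAcquisition(nFrames, 0, true);
   while (mmc.getRemainingImageCount() > 0 || mmc.isSequenceRunning(mmc.getCameraDevice())) {
      if (mmc.getRemainingImageCount() > 0) {
         img = mmc.popNextImage();               // raw pixel array; store or process it
      } else {
         mmc.sleep(5);                           // wait for the next frame
      }
   }
   mmc.stopSequenceAcquisition();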

Best,
Mark


Re: Questions on Hamamatsu Flash4

julou
Thanks Nico and Mark!

Regarding the scan mode, I got an answer from Hamamatsu: slow scan is available only with firmware ≥ 2.5.

Regarding the fast acquisition of stacks (time- or z-), I'm not yet clear on what the alternative to taking a single snapshot for each point is… I guess it's a sort of streaming mode (probably what is used in live mode as well). Do I understand correctly that it is used transparently in any MDA as soon as the exposure is longer than the delay?
Does this mean that the only way for us to speed up z-stack acquisition would be to use a triggerable z-stage?

Also, a new member of our group used to be able to acquire stacks faster (using another Flash4, maybe with more recent firmware…). He's using MM from Matlab (with snapImage() and getImage()) and is currently experiencing the same delays described in my previous email (whereas he used to be able to acquire 30 ms images every 100 ms, and move the stage between two images).

Is there any setting that we could have messed up in our config file and that would produce the delay?

Best, Thomas

Questions on Hamamatsu Flash4: How to read .dcimg in matlab

Pei Sabrina Xu
Hi Orca Flash4 users, I have these .dcimg files and have checked the images with the LabVIEW package, using the VI "tm3samp_08_dcimgreader".

Can anyone read .dcimg files directly into Matlab? I have seen someone ask this question online, yet no answer. Does anyone have a solution? Thanks!
.....................................................................
Pei “Sabrina” Xu
Postdoctoral Fellow 
Scanziani Lab
The Center for Neural Circuits and Behavior 
University of California, San Diego (UCSD)
Email:[hidden email]
.....................................................................



Re: Questions on Hamamatsu Flash4: How to read .dcimg in matlab

Pei Sabrina Xu
Since most people have come back from the holidays: does anyone have experience dealing with .dcimg files from the Hamamatsu Flash4 camera?

On Dec 30, 2015, at 6:21 AM, Pei Sabrina Xu <[hidden email]> wrote:

> Hi Orca Flash4 users, I have these .dcimg files and have checked the images with the LabVIEW package, using the VI "tm3samp_08_dcimgreader".
>
> Can anyone read .dcimg files directly into Matlab? I have seen someone ask this question online, yet no answer. Does anyone have a solution? Thanks!


Re: Questions on Hamamatsu Flash4

Mark Tsuchida-3
In reply to this post by julou
Hi Thomas,

On Tue, Dec 29, 2015 at 5:57 AM, julou <[hidden email]> wrote:
> Regarding the fast acquisition of stacks (time- or z-), I'm not clear yet on
> what is the alternative to taking a single snapshot for each point… I guess
> it's a sort of streaming mode (probably what is used in live mode as well).

Yes, it is a mode in which the frame timings are determined
autonomously by the camera (microsecond or better accuracy). Variously
called streaming, sequence acquisition, burst acquisition, etc.
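
To see those camera-determined timings for yourself, something like the following Beanshell sketch should work. The "ElapsedTime-ms" metadata key is the one µManager normally attaches to frames from a sequence acquisition; treat the key name as an assumption and inspect the tags on your system:

   // Run a short burst and print the interval between consecutive frames,
   // taken from the image metadata rather than from software timestamps.
   mmc.setExposure(30.0);
   mmc.startSequenceAcquisition(20, 0, true);
   double prev = -1;
   int n = 0;
   while (n < 20) {
      if (mmc.getRemainingImageCount() > 0) {
         tagged = mmc.popNextTaggedImage();
         double t = tagged.tags.getDouble("ElapsedTime-ms");   // assumed key; check tagged.tags
         if (prev >= 0) print("frame interval: " + (t - prev) + " ms");
         prev = t;
         n++;
      } else {
         mmc.sleep(5);
      }
   }
   mmc.stopSequenceAcquisition();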

> Do I understand correctly that it is used transparently in any MDA as soon
> as the exposure is longer than the delay?

Yes, that's correct.

> Does this mean that the only way for us to speed up z-stacks acquisition
> would be to use a triggerable z-stage?

Yes -- for the foreseeable future.

> Also, a new member of our group used to be able to acquire stacks faster
> (using another Flash4, maybe with more recent firmware…). He's using MM from
> Matlab (with snapImage() and getImage()) and is currently experiencing the
> same delays described in my previous email (while he used to be able to
> acquire 30ms images every 100ms, and move the stage between 2 images).
>
> Is there any setting that we could have messed up in our  config file

I can't think of anything, other than a difference in the camera version/firmware/settings.

Best,
Mark


Re: Questions on Hamamatsu Flash4

julou
Hello,

Thanks Mark for your answer.
Sabrina, I unfortunately have no experience with .dcimg files… By the way, it would have been nicer to start a dedicated thread, since this is an independent question.

> Performing acquisitions at precise intervals that are not dictated by the camera (based on exposure and other settings) requires either (a) a camera that supports adjustable intervals when in streaming mode, with µManager support for such functionality, or (b) external triggering of the camera using an appropriate device. Unfortunately (a) is not available for Hamamatsu cameras in µManager, and neither (a) nor (b) is supported in the µManager GUI ((b) is on our eventual roadmap) and therefore would require scripting.

> Does this mean that the only way for us to speed up z-stacks acquisition would be to use a triggerable z-stage?

> Yes -- for the foreseeable future.

Do I understand correctly that when a camera supports adjustable intervals, an MDA (with sequenceable channels and/or piezo) drives the camera in streaming mode, with the exposure defined by the channel's exposure and the interval defined by the MDA delay?
Which cameras support this feature? Would it in principle be possible to implement it for the Flash4 (by enhancing the device adapter and/or the driver), or is it beyond the camera's specs?

> Also, a new member of our group used to be able to acquire stacks faster (using another Flash4, maybe with more recent firmware…). He's using MM from Matlab (with snapImage() and getImage()) and is currently experiencing the same delays described in my previous email (while he used to be able to acquire 30ms images every 100ms, and move the stage between 2 images).
> Is there any setting that we could have messed up in our  config file ?

> I can't think of anything, other than a difference in the camera version/firmware/settings.

We've done further testing with this… When we run the following bsh script with 30 ms exposure, the average time per frame is ≈330 ms.

   for (int i = 0; i < 10; i++) {
      start = System.currentTimeMillis();
      mmc.snapImage();                              // acquire one frame
      mmc.getImage();                               // retrieve it from the camera
      print(System.currentTimeMillis() - start);    // per-snap time in ms
   }
This is very puzzling because when we click as fast as possible on the "Add to Album" button of the GUI, we reach almost 10 Hz!! So somehow MM and the camera can talk faster to each other…
Also, we were able to test that this doesn't come from our old firmware (2.03A), since Andreas Durandi (from Hamamatsu Switzerland) was kind enough to configure another Flash4 (firmware 2.50A) with MM 1.4 and run the same script: he obtains similar delays (≈350 ms).

So I don't understand what's happening and what prevents us from operating the camera faster in capture mode. Any explanation / hint would be very much appreciated!
Best,
Thomas

--
Thomas Julou  |  Computational & Systems Biology  |  Biozentrum – University of Basel  |  Klingelbergstrasse 50/70 CH-4056 Basel  |  +41 (0)61 267 16 21


Re: Questions on Hamamatsu Flash4

Mark Tsuchida-3
Hi Thomas,

On Thu, Jan 14, 2016 at 9:27 AM, julou <[hidden email]> wrote:
> Do I understand correctly that when a camera supports adjustable intervals,
> a MDA (with sequencable channels and/or piezo) drives the camera in
> streaming mode with the exposure defined by the channel’s exposure and the
> interval defined in the MDA delay?

No, this is not supported by the MDA engine yet (it certainly will be at some future time).
MDA always uses software timing when the interval is greater than the exposure.

> What are the camera that support this feature? Would it be in principle
> possible to implement it with a Flash4 (by enhancing the device adapter
> and/or the driver) or is it beyond the camera specs?

The only camera adapter that implements it, that I'm aware of, is
Andor (not AndorSDK3).
In any case, it can only be accessed from scripts/code.
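
At the Core level, the scripted route is presumably the intervalMs argument of startSequenceAcquisition(); most adapters ignore it, but an adapter that supports adjustable intervals should honor it. A minimal sketch with placeholder numbers:

   // Request a camera-timed series with a fixed interval between frames.
   // Only adapters that implement adjustable intervals honor the 100 ms request;
   // the others simply run frames back-to-back.
   mmc.setExposure(30.0);
   mmc.startSequenceAcquisition(50, 100.0, true);   // 50 frames, 100 ms requested interval
   while (mmc.isSequenceRunning(mmc.getCameraDevice()) || mmc.getRemainingImageCount() > 0) {
      if (mmc.getRemainingImageCount() > 0) {
         img = mmc.popNextImage();
      } else {
         mmc.sleep(5);
      }
   }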

>> Also, a new member of our group used to be able to acquire stacks faster
>> (using another Flash4, maybe with more recent firmware…). He's using MM from
>> Matlab (with snapImage() and getImage()) and is currently experiencing the
>> same delays described in my previous email (while he used to be able to
>> acquire 30ms images every 100ms, and move the stage between 2 images).
[...]

> We’ve done further testing with this… When we run the following bsh script
> with 30ms exposure, the average time is ≈330ms.
>
>  for( int i =0; i<10; ++i){
>       start = System.currentTimeMillis();
>       mmc.snapImage();
>       mmc.getImage();
>       print(System.currentTimeMillis() - start);
>    }
>
> This is very puzzling because when we click as fast as possible on the “Add
> to Album” button of the GUI, we reach almost 10Hz!! So somehow MM And the
> camera can talk faster to each other…

Note the difference between "Snap to Album" (in the main window) and
"Add to Album" (in the Snap/Live window). The latter simply copies the
existing image to the album.

> Also we were able to test that this doesn’t come from our old firmware
> (2.03A) since Andreas Durandi (from Hamamatsu Switzerland) was kind enough
> to configure another flash4 (firmware 2.50A) with MM1.4 and run the same
> script: he obtains similar delays (≈350ms).

These delays don't strike me as out of the ordinary.

The only way to (appear to) capture individual frames faster that I
can think of is to have a sequence acquisition running in the
background. In this case, snapImage()/getImage() will return the
latest frame from the sequence acquisition (at least for some
cameras). I wouldn't depend on this behavior without testing with your
particular camera that it returns an image acquired at the timing you
expect, though, since this is an area where µManager's device
interface is not as strictly defined as it ideally would be.
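
For what it's worth, a small Beanshell sketch of that background-sequence idea, using the explicit getLastImage() call rather than relying on snapImage() returning the latest buffered frame (exposure and loop timing are placeholders; do verify the frame timestamps on your camera):

   // Keep a sequence acquisition running in the background and grab the most
   // recent frame on demand. The frame you get was timed by the camera, not
   // by the moment of your call.
   mmc.setExposure(30.0);
   mmc.startContinuousSequenceAcquisition(0);   // 0 ms requested interval
   mmc.sleep(100);                              // let the first frames arrive
   for (int i = 0; i < 10; i++) {
      start = System.currentTimeMillis();
      img = mmc.getLastImage();                 // newest frame in the circular buffer
      print(System.currentTimeMillis() - start);
      mmc.sleep(100);                           // stand-in for other work (e.g. a stage move)
   }
   mmc.stopSequenceAcquisition();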

This is not to say that µManager cannot be improved to better support
accurate intervals. It's definitely on our mid-term to-do list.

Best,
Mark


Re: Questions on Hamamatsu Flash4

julou
Hi Mark,

>> This is very puzzling because when we click as fast as possible on the “Add
>> to Album” button of the GUI, we reach almost 10Hz!! So somehow MM And the
>> camera can talk faster to each other…
>
> Note the difference between "Snap to Album" (in the main window) and
> "Add to Album" (in the Snap/Live window). The latter simply copies the
> existing image to the album.

Thanks for pointing this out; I feel really stupid now…

>> Also we were able to test that this doesn’t come from our old firmware
>> (2.03A) since Andreas Durandi (from Hamamatsu Switzerland) was kind enough
>> to configure another flash4 (firmware 2.50A) with MM1.4 and run the same
>> script: he obtains similar delays (≈350ms).
>
> These delays don't strike me as out of the ordinary.

OK… Guillaume Witz from our lab will try to give more details on this very soon.

> The only way to (appear to) capture individual frames faster that I
> can think of is to have a sequence acquisition running in the
> background. In this case, snapImage()/getImage() will return the
> latest frame from the sequence acquisition (at least for some
> cameras). I wouldn't depend on this behavior without testing with your
> particular camera that it returns an image acquired at the timing you
> expect, though, since this is an area where µManager's device
> interface is not as strictly defined as it ideally would be.

Interesting…
Here is a variation on the same topic: provided that we manage to speed up the single-image capture to e.g. <100 ms, do I understand correctly that if delay > exposure, the MDA will use single-image capture and hardware triggering for sequenceable properties? Then we could use the camera/MM communication time to move the stage (e.g. by triggering it with the falling edge of the camera trigger)… Does this make sense?

Best,
Thomas

Re: Questions on Hamamatsu Flash4

Chris Weisiger
In reply to this post by julou
Hi Thomas,

On Thu, Jan 14, 2016 at 11:53 PM, julou <[hidden email]> wrote:
Hi Mark,

> The only way to (appear to) capture individual frames faster that I
> can think of is to have a sequence acquisition running in the
> background. In this case, snapImage()/getImage() will return the
> latest frame from the sequence acquisition (at least for some
> cameras). I wouldn't depend on this behavior without testing with your
> particular camera that it returns an image acquired at the timing you
> expect, though, since this is an area where µManager's device
> interface is not as strictly defined as it ideally would be.

> Interesting…
> Here is a variation on the same topic: provided that we manage to speed up the single-image capture to e.g. <100 ms, do I understand correctly that if delay > exposure, the MDA will use single-image capture and hardware triggering for sequenceable properties? Then we could use the camera/MM communication time to move the stage (e.g. by triggering it with the falling edge of the camera trigger)… Does this make sense?

For clarity's sake, let me recap some basics. If the exposure time is greater than the delay, then µManager will perform a "sequence acquisition", in which the camera is responsible for all timing. If the exposure time is less than the delay, then µManager will perform "snap acquisitions", where µManager itself is responsible for timings. In order to take advantage of µManager's hardware triggering logic, you must be using a sequence acquisition. If you are using snap acquisitions, then no attempt is made to arrange hardware triggering of devices, e.g. telling a triggerable stage what positions to move to on receipt of each trigger. Instead, µManager snaps an image, then moves the stage, then snaps another image, etc. -- all done in software.

Now, in principle you could set up hardware triggering yourself, which could run in both sequence and snap acquisitions. But it would exist "outside" of the MDA system (and indeed outside of µManager's device control in general) and would probably be very messy to try to set up. For example, while you might be able to tell your Z stage to change position every time it gets a trigger from the camera, µManager would not realize that the stage has moved, so its Z-stack acquisition logic would become inaccurate. You would need to disable µManager's Z-stacks so that it does not try to move the stage itself, or fool it using a virtual Z stage that doesn't actually move your sample.

It's not an impossible task (I believe), but it would be significant work, as compared to µManager's built-in hardware sequencing which just works, as long as you have supported hardware.
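
In case it helps, here is a rough Beanshell sketch of what that built-in hardware Z sequencing looks like at the Core level; the MDA engine does something along these lines when the Z stage reports itself as sequenceable (the positions below are placeholders):

   // Check whether the current focus device supports hardware sequencing and,
   // if so, load a list of Z positions to be stepped through on camera triggers.
   import mmcorej.DoubleVector;

   zStage = mmc.getFocusDevice();
   if (mmc.isStageSequenceable(zStage)) {
      DoubleVector zs = new DoubleVector();
      for (double z = 0.0; z <= 5.0; z += 0.5)
         zs.add(z);                        // absolute Z positions in microns (placeholders)
      mmc.loadStageSequence(zStage, zs);   // stage advances to the next Z on each trigger
      mmc.startStageSequence(zStage);
      // ... run a triggered camera burst here ...
      mmc.stopStageSequence(zStage);
   } else {
      print(zStage + " does not support hardware sequencing.");
   }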

-Chris



Re: Questions on Hamamatsu Flash4

julou
Hi Chris,

> For clarity's sake, let me recap some basics. If the exposure time is greater than the delay, then µManager will perform a "sequence acquisition", in which the camera is responsible for all timing. If the exposure time is less than the delay, then µManager will perform "snap acquisitions", where µManager itself is responsible for timings. In order to take advantage of µManager's hardware triggering logic, you must be using a sequence acquisition. If you are using snap acquisitions, then no attempt is made to arrange hardware triggering of devices, e.g. telling a triggerable stage what positions to move to on receipt of each trigger. Instead, µManager snaps an image, then moves the stage, then snaps another image, etc. — all done in software.

Got it. This clarification is indeed useful… (sorry, I'm getting a bit confused by the different sources of information on hardware triggering).
For future readers stumbling on this post, I want to clarify Chris' statement that "If the exposure time is less than the delay, then µManager will perform 'snap acquisitions'…". In an MDA with a sequenceable device, several channels/z-positions will be acquired in streaming mode even when the delay/interval is longer than the exposure; this is of course the expected behaviour, but Chris' paragraph made me wonder at some point whether it was really happening.

> It’s not an impossible task (I believe), but it would be significant work, as compared to µManager's built-in hardware sequencing which just works, as long as you have supported hardware.

Yes, we’re doing our best to stay in this situation :)
In fact, we realize that, using an MDA with hardware triggering, we can use the Flash4 "global exposure" output to trigger the illumination and the "programmable" output (on Vsync) to trigger stage motion (while the global exposure is not yet active). We've done tests with sequenceable channels using an Arduino: the "global exposure" trigger works great (we are still waiting for the stage). However, we realised that there is no property to set the "reference signal" (to Vsync or to ReadEnd) when the trigger mode is set to "programmable". Could this again be due to our old camera firmware (2.03A), or is it not available in Micro-Manager? Do you know which signal is the default one? (We'll check it with an oscilloscope anyway and keep our fingers crossed in the meantime.)

By the way, this new approach circumvents our issue described earlier in this thread with “slow” acquisition in snap mode…
Best,

Thomas

Re: Questions on Hamamatsu Flash4

Mark Tsuchida-3
Hi Thomas,

On Fri, Jan 22, 2016 at 10:21 AM, julou <[hidden email]> wrote:
> However, we realised that there is no property to set the “reference signal”
> (to Vsync or to ReadEnd) when the trigger mode is set to “programmable”. It
> might be due again to our old camera firmware (2.03A), or is it not
> available in micromanager?

My understanding is that that option is not yet supported by
Micro-Manager (HamamatsuHam). If I understand correctly from the
manual, there are 3 programmable timing output lines, and quite
complex behavior can be programmed (e.g. a given line can be made to
fire every 3 frames). I'll forward your question to Hamamatsu.

Best,
Mark


Re: Questions on Hamamatsu Flash4

Mark Tsuchida-3
Hi Thomas,

On Fri, Jan 22, 2016 at 12:16 PM, Mark Tsuchida <[hidden email]> wrote:

> On Fri, Jan 22, 2016 at 10:21 AM, julou <[hidden email]> wrote:
>> However, we realised that there is no property to set the “reference signal”
>> (to Vsync or to ReadEnd) when the trigger mode is set to “programmable”. It
>> might be due again to our old camera firmware (2.03A), or is it not
>> available in micromanager?
>
> My understanding is that that option is not yet supported by
> Micro-Manager (HamamatsuHam). If I understand correctly from the
> manual, there are 3 programmable timing output lines, and quite
> complex behavior can be programmed (e.g. a given line can be made to
> fire every 3 frames). I'll forward your question to Hamamatsu.

I was mistaken. Here is the answer from Hamamatsu:

> If I am not mistaken, the Programmable output lines you refer to are the Programmable output trigger ports.  And yes, they are supported.  To setup an output trigger port to programmable, simply set any of the OUTPUT TRIGGER KIND properties to "PROGRAMMABLE".  At that point, OUTPUT TRIGGER DELAY, POLARITY, PERIOD, and SOURCE will control how the output trigger will look.
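
For reference, these properties can also be inspected and set from a script. The exact names (including any per-port suffix such as "[0]") depend on the HamamatsuHam adapter version, so it is safest to list them first; a minimal Beanshell sketch:

   // List the camera's output-trigger properties and their current values,
   // then set one of them. The "[0]" suffix below is an assumption, not a
   // verified name; adjust the filter if your adapter names them differently.
   cam = mmc.getCameraDevice();
   props = mmc.getDevicePropertyNames(cam);
   for (int i = 0; i < props.size(); i++) {
      name = props.get(i);
      if (name.startsWith("OUTPUT TRIGGER"))
         print(name + " = " + mmc.getProperty(cam, name));
   }
   // e.g., once the exact name is known:
   // mmc.setProperty(cam, "OUTPUT TRIGGER KIND[0]", "PROGRAMMABLE");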

Best,
Mark


Re: Questions on Hamamatsu Flash4

julou
Hi Mark,

> I was mistaken. Here is the answer from Hamamatsu:

> If I am not mistaken, the Programmable output lines you refer to are the Programmable output trigger ports.  And yes, they are supported.  To setup an output trigger port to programmable, simply set any of the OUTPUT TRIGGER KIND properties to "PROGRAMMABLE".  At that point, OUTPUT TRIGGER DELAY, POLARITY, PERIOD, and SOURCE will control how the output trigger will look. 

Yes, we can see these properties and have used them for our tests. However, although we see KIND, DELAY, POLARITY and PERIOD for triggers 0, 1 and 2, we have no SOURCE property displayed. I suspect this one is used to set the "reference signal" described in the user manual. Hence my previous email asking whether this could come from our camera's old firmware (2.03A), and what the default value is (ReadEnd or Vsync?) if it cannot be set…

Maybe your Hamamatsu contact would know this?
Best,

Thomas

HCS plugin

PEARSON Matthew
In reply to this post by julou
Hi all,

I was wondering how many of you use the High Content Screening plugin within Micro-Manager, and whether you can auto-focus reliably? I have a plastic 96-well plate and am struggling to achieve consistent focus using either the 3-point auto-focus or JAF (H & P) methods. There seems to be considerable Z offset between the wells, and the multi-well plate insert and stage have no grub screws for adjusting any tilt. I am imaging at 10x, so perhaps at the limit of what the 3-point interpolation focus can deal with. I have tested JAF (H&P) by defocusing the live image and trying different Z steps, and I have been able to get it to work sometimes like this, but as soon as I set it to capture all the imaging sites, many are out of focus. We may try using a glass-bottomed plate, which should at least decrease the error inherent in the plate itself. I just wondered how much this is being used by the community for HCS.

Thanks,

Matt


--
Matt Pearson
Microscopy Facility
MRC Human Genetics Unit
IGMM
University of Edinburgh
Crewe Road
EH4 2XU








Re: HCS plugin

PhilippeP
Hi Matt,
We are happily using the HCS plugin with Micro-Manager.
We at first had a similar problem when changing wells. The best solution
we found is the following combination:

- Stable temperature during the whole process (with microscope
temperature already stabilized at start);
- Bottom "glass" plate;
- Definite Focus activated. Don't forget to set it on "Last position"
after "Measure" has been set. We use Zeiss. Any similar system should do;
- Manually verify the first position of the chosen grid in EACH well
(yes, it's a pain...), and do "Replace" for each first position. This
way, the z is memorized, and acts as a good starting point for the
Definite Focus.
- Don't forget to save the position list, in case of MM crash...

By the way, it's too bad the HCS plugin cannot set grids in a circular
pattern, to avoid "wasting" cells that cannot be included in a square
grid (our wells are round).
Philippe



--
Philippe Pognonec, Ph.D.
Directeur de Recherche CNRS
Transporteurs en Imagerie et Radiotherapie en Oncologie, CEA
Faculté de Médecine, Université de Nice
28, Avenue de Valombrose
06107 Nice Cedex 2
France

Tel/fax 33 493 37 77 14




Re: HCS plugin

Kurt Thorn
In reply to this post by PEARSON Matthew
We have found that many plates are very far from flat - we have seen
plates where the well bottom differs by 200 um over the surface of the
plate.  This company makes plates that claim to be +/- 15 um over the
plate:
http://www.porvair-sciences.com/en/services-menu/life-sciences/glass-bottom-assay-plates/

We haven't tried them, but that might be a good starting point. Other
companies may have similar products; I don't know.

Kurt



--
Kurt Thorn
Associate Professor
Director, Nikon Imaging Center
http://thornlab.ucsf.edu/
http://nic.ucsf.edu/blog/




Re: HCS plugin

PEARSON Matthew
Hi Philippe and Kurt,

Thanks for the info; we will at least try a glass-bottomed plate. Not sure how flat they are, but better than plastic. Mark Tsuchida has also mentioned that some plate inserts may warp the plate through lateral forces; I thought I'd mention it for anyone else reading this.

Philippe, we are also using Zeiss and have Definite Focus, although we have had mixed success with it and didn't even try it for HCS. Could you describe a little more how you use DF for HCS? Where on the plate do you travel to "measure", and how many offsets do you use? When you travel to each first position in a grid and replace it in the XY list, do you "measure" DF at the same time? I noticed that when the XY positions are built into the list it does not take Z into account, but I assume it does if you click Replace as you say.

Thanks for the help,

Matt



 
--
Matt Pearson
Microscopy Facility
MRC Human Genetics Unit
IGMM
University of Edinburgh
Crewe Road
EH4 2XU






