
Add base class for cameras #131

Draft

ehpor wants to merge 56 commits into develop from bmc_dm_refactor

Conversation

@ehpor
Collaborator

@ehpor ehpor commented Oct 24, 2023

WIP but I might want some feedback soon.

These base services generalize much of the functionality that is currently duplicated between cameras and DMs. This simplifies their implementation and makes fixing bugs in ROI flips and rotations, DM channel updates, etc. much easier in the future.

The DM base class has been moved out of this and merged a while ago.

This PR introduces a camera base class that generalizes much of the functionality that is currently duplicated between all the different camera services. This simplifies their implementation and makes finding bugs in ROI flips and rotations in particular much easier. This work is entirely based on what's in #124.

Note how successfully merging this PR should result in:

@ehpor ehpor added the refactor Refactor existing code. label Oct 24, 2023
@ehpor
Collaborator Author

ehpor commented Oct 24, 2023

@ivalaginja This might be interesting for you. I mainly want to know if the Andor camera is compatible with this base class.

@ehpor ehpor marked this pull request as draft October 24, 2023 05:20
@ivalaginja
Collaborator

@ehpor this looks totally compatible with our Andor Neo. I don't think we'd have some of the parameters to read from the config, like the gain, but we can keep those properties unimplemented and leave the value in the config file as None or empty, so that it just sets the default value, considering the camera would never use it anyway.

@ehpor ehpor mentioned this pull request Nov 12, 2023
@raphaelpclt raphaelpclt changed the title Add base classes for cameras and DMs Add base classes for cameras Oct 2, 2024
@ivalaginja
Collaborator

Just a note: the DM base classes have now been merged in #156, so anything DM in this PR can simply be removed.

@ivalaginja
Collaborator

ivalaginja commented Jun 26, 2025

I will be taking this over from @alexisyslau.

Considering the discussion in #124, the three parameters rot90, flip_x and flip_y are sufficient to transform images from camera coordinates to system coordinates. I am now trying to figure out how to choose these three parameters based on measured images on the THD2 testbed, so I can determine our system-level coordinate system for images.

Choosing rot90 should be easy because that amounts to figuring out what is up-down and what is left-right. For the flips, I understand they should be used to "flip back" an image that has been flipped by an upstream lens, but I wonder how to determine my initial reference. Is it just that if I have an even number of equal lenses upstream I don't need a flip, and if I have an odd number of equal lenses upstream I do need the respective flip? Thoughts @steigersg @ehpor?

Edit: Could I also assume that the goal of the flips in x and y is simply to match the optical model?
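For illustration, the three parameters could compose into a single image transform roughly like the numpy sketch below. The order of operations (rotate, then flip) and the rot90-as-boolean convention are assumptions for this sketch, not the base class's actual definition:

```python
import numpy as np

def camera_to_system(img, rot90=False, flip_x=False, flip_y=False):
    # Transform an image from camera coordinates to system coordinates.
    # The composition order here (rotate first, then flip) is an
    # assumption; the actual convention lives in the camera base class.
    out = np.rot90(img) if rot90 else img
    if flip_x:
        out = np.flip(out, axis=1)  # left-right flip
    if flip_y:
        out = np.flip(out, axis=0)  # up-down flip
    return out
```

With this kind of helper, determining the three parameters on the testbed reduces to comparing one measured image against the optical model under all eight combinations.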

@ivalaginja ivalaginja changed the title Add base classes for cameras Add base class for cameras Jul 2, 2025
@ivalaginja
Collaborator

ivalaginja commented Jul 2, 2025

Spent a good chunk of time on this with @steigersg today and made good progress. What's left to do:

  • Rewrite the offset transformation matrix to accommodate an inverse transform; also check the forward logic.
  • Test the transformation function on arrays outside of the service.
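A hedged sketch of what the first task might involve: building a 2x2 offset transform from the three parameters and obtaining the inverse directly from the forward matrix. The matrix construction and composition order here are purely illustrative assumptions:

```python
import numpy as np

def offset_transform_matrix(rot90=False, flip_x=False, flip_y=False):
    # Hypothetical 2x2 matrix taking (offset_x, offset_y) from system
    # to camera coordinates; the composition order is an assumption.
    m = np.eye(2)
    if flip_x:
        m = np.diag([-1.0, 1.0]) @ m
    if flip_y:
        m = np.diag([1.0, -1.0]) @ m
    if rot90:
        m = np.array([[0.0, -1.0], [1.0, 0.0]]) @ m  # 90 degrees CCW
    return m

# The inverse transform then follows directly from the forward matrix.
forward = offset_transform_matrix(rot90=True, flip_x=True)
inverse = np.linalg.inv(forward)
```

Writing the forward transform as an explicit matrix makes the inverse a one-liner and keeps the forward logic checkable in isolation, outside of the service.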

@ivalaginja
Collaborator

ivalaginja commented Jul 11, 2025

@ehpor do you think you could look over this for an initial review? I would like to get some feedback on the base class and on the ZWO v2, as I want to have something robust here first before moving on to the other cameras. We also have none of the ZWO or FLIR cameras over here, so before I can test anything, I will need to also do the transfer for the AVT and/or Hamamatsu cameras first.

@ivalaginja
Collaborator

@ehpor @steigersg @alexisyslau I'd like to ask for your advice because this PR is slowly breaking my brain. @steigersg and I figured out the different transformations and property logic (we think), which is the first big part of this PR. The second big part is to actually use the camera base class with one of the cameras and check the aforementioned transformations and logic.

Since it looks like I am the one moving forward with this, I need a new class of a camera type that I have access to on THD2, which is now the Hamamatsu ORCA.

Could I ask you guys for feedback specifically on the new file hamamatsu_camera_v2.py and whether this looks the way it should? I am struggling in particular with the following points:

  • For the Hamamatsu, we need to read the sensor width and height before changing the ROI (I forgot why we can't read this from the config, but there was a reason; I can go dig for it if this becomes a pain). In the old Hamamatsu service this is simply assigned to a class attribute in open(), but with the base class this has to be done more cleverly, and right now I don't know how.
  • Inside the Hamamatsu service, we override the monitor_temperature() method because we added a safety check inside. I don't know if there's a way to duplicate less code than I am doing here, since it uses a while loop.
  • The hardest part for me right now is figuring out how to work in the StoppedAcquisition context manager correctly. Almost all Hamamatsu properties need it, with only 2-3 exceptions.

Any help is appreciated.
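For reference, a StoppedAcquisition-style context manager can be sketched generically like this. The method names is_acquiring, end_acquisition, and start_acquisition are hypothetical stand-ins for the actual service API:

```python
class StoppedAcquisition:
    # Sketch of a context manager that pauses acquisition around a
    # property access and restores the previous state on exit.
    # is_acquiring / end_acquisition / start_acquisition are
    # hypothetical names standing in for the real service methods.
    def __init__(self, camera):
        self.camera = camera
        self.was_running = False

    def __enter__(self):
        self.was_running = self.camera.is_acquiring
        if self.was_running:
            self.camera.end_acquisition()
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Restart only if we were the ones who stopped it, so nested
        # uses and already-stopped cameras behave correctly.
        if self.was_running:
            self.camera.start_acquisition()
```

Because the manager records whether acquisition was actually running, it is safe to wrap every property access unconditionally; a camera that was already stopped is left untouched.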

@ivalaginja
Collaborator

@alexisyslau and I solved the first two of the above points. The last one still gives me headaches.

The basic camera properties are defined in the camera base class. It is also in the camera base class that they are made into catkit2 properties (using make_property_helper, which calls self.make_property). The problem I am facing is that a bunch of these basic camera properties, i.e. width, height, offset_x and offset_y, need to use the StoppedAcquisition context manager. This context manager is not used inside the camera base class, though, so I don't know how to implement it in the Hamamatsu child class so that it applies to these properties.

It seems that by defining the basic properties like width and height in the base class, I lose the option to wrap them in an optional StoppedAcquisition.

@ehpor please help?

@ivalaginja ivalaginja force-pushed the bmc_dm_refactor branch 2 times, most recently from 480de45 to 7617cce Compare September 23, 2025 14:39
@ivalaginja
Collaborator

The above is now solved, @RemiSoummer can give more details on that.

I have been able to run the Hamamatsu v2 service on hardware: I can stream images on the viewer and I can change the exposure time successfully. However, changing the offsets gives me errors so I will stop here for now and try to come back to it with a fresh head another time.

@ivalaginja
Collaborator

@steigersg @RemiSoummer I have now written the ZWO v2 service in exactly the same way as the Hamamatsu service that I started testing on hardware. Depending on your availability, it is ready to be tested on hardware too. Note that I removed the HiCAT hack in the ZWO service, which should be fixed through this PR and the correct config parameters for the camera.

@RemiSoummer
Collaborator

After discussion with @ivalaginja (sending this message after the fact), documenting here a few thoughts.
The issue is that the Hamamatsu has some properties that require stopping the acquisition. First, this sounds to me like a reasonable feature that may exist on other types of cameras, so I'm interested in solving this in a general way, even if it means that for ZWO users we have to change something on our end.

  • My first question was whether we could override make_property_helper at the child-class level to add the logic and the context manager that pauses the acquisition for the Hamamatsu (or other brands in their own future child classes). We agree this would be a bad solution because it defeats the purpose of the base class: we need the acquisition pause for properties that involve the flip/rotation logic, so we would be overriding the functions that contain that logic and therefore duplicating critical code. The point of the base class is that we never touch the code that handles the rotations/flips between the detector frame of reference and the testbed frame of reference.

  • Another solution would be to implement some kind of decorator with the acquisition-pause context manager and logic around the setters/getters that call the API in the child service. I don't like this solution because 1) I'm not sure how to implement it properly, and 2) the code would look different depending on the camera, so by looking at the ZWO service one would not know how to implement another brand that requires the acquisition pause without looking at the Hamamatsu example. I prefer a solution where the code is the same for all cameras for basic functionality and we use some kind of config entries to turn standard features on or off (I consider the acquisition pause probably standard for a number of brands, as opposed to features very specific to the Hamamatsu, e.g. the fan).

  • The chosen solution is to have class attributes that are hard-coded in the child service class, for example something like self.offset_x_requires_acquistion_pause = True, and the same for all other properties. make_property_helper would then have an optional parameter, e.g. requires_acquistion_pause, that defaults to False and is set by the child service. An alternative would be to put additional parameters in the config file, but I prefer keeping these at the child-service level because they depend on the hardware, and they are something we hard-code at the same time we have the camera manual and API in front of us. Putting offset_x_requires_acquistion_pause = True/False in the config file would give the user the chance to mess with it, and since it is strictly limited to the hardware we don't need to expose it to the user. With the chosen solution, the base class is untouched, the code is identical for all subclass implementations, nothing unnecessary is exposed in the config file, and it just requires setting these flags for each property in the child service. Moreover, we can do that easily when we migrate the ZWO and FLIR services to the base class, so this does not seem too invasive to me.
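The chosen design could be sketched roughly like this. make_property, stopped_acquisition, and the helper signature are illustrative stand-ins, not the actual catkit2 API:

```python
from contextlib import nullcontext

def make_property_helper(service, name, requires_acquisition_pause=False):
    # Sketch of the chosen design: only the child service decides
    # whether a property needs the pause; the getter/setter bodies
    # (including any flip/rotation logic) stay untouched in the base
    # class. make_property and stopped_acquisition are hypothetical
    # stand-ins for the real catkit2 calls.
    def pause():
        if requires_acquisition_pause:
            return service.stopped_acquisition()
        return nullcontext()  # no-op when no pause is needed

    def getter():
        with pause():
            return getattr(service, '_' + name)

    def setter(value):
        with pause():
            setattr(service, '_' + name, value)

    service.make_property(name, getter, setter)
```

A child service would then only differ by which flags it passes, e.g. requires_acquisition_pause=True for the Hamamatsu ROI properties and False for the ZWO ones.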

@RemiSoummer
Collaborator

RemiSoummer commented Sep 23, 2025

@steigersg @RemiSoummer I have now written the ZWO v2 service in exactly the same way as the Hamamatsu service that I started testing on hardware. Depending on your availability, it is ready to be tested on hardware too. Note that I removed the HiCAT hack in the ZWO service, which should be fixed through this PR and the correct config parameters for the camera.

@ivalaginja I think we could also do this in two PRs, maybe? Since you have time pressure to merge the base class and the Hamamatsu service, @steigersg and I can do the ZWO transition later in a separate PR and we can still use the current service in the meantime.

@steigersg
Contributor

@ivalaginja I found some bugs in the offset transform in particular when the 90 degree rotation is enabled. A summary of what was added:

  • Added hidden variables for offset_x and offset_y so that those values can be internally retrieved without calling the getter. This is important for the 90 degree rotation case which requires knowledge of both the x and y offsets even if the user is only requesting one.
  • In each of the offset_x and offset_y setters, the call to the set_roi_offset function is made for both x and y even if only one of those values is updated. This is again for the 90 degree rotation case, since moving in one dimension in user coordinates will move the ROI in the opposite dimension in camera coordinates, necessitating that both be updated.

I tested this on the new HiCAT ZWO camera but would appreciate you also testing it out if possible / double-checking the logic (especially when rot90 = True). As you know, it can be tough to keep all this straight with the different transformations and coordinate systems, so a second set of eyes would be useful.
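The hidden-offset pattern described above might look roughly like this. CameraOffsets, set_roi_offset, and the axis-swap convention are illustrative stand-ins; the real transform logic lives in the camera base class:

```python
class CameraOffsets:
    # Sketch of the hidden-variable pattern: _offset_x and _offset_y
    # hold the user-coordinate values so setters can read both without
    # going through the getters, and every update writes both camera
    # offsets because rot90 swaps the axes.
    def __init__(self, rot90=False):
        self.rot90 = rot90
        self._offset_x = 0  # hidden copies in user coordinates
        self._offset_y = 0

    def set_roi_offset(self, cam_x, cam_y):
        # Stand-in for the call into the camera API.
        self.cam_offset = (cam_x, cam_y)

    def _apply(self):
        # With a 90-degree rotation, user x maps to camera y and vice
        # versa, so both camera offsets are written on every update.
        if self.rot90:
            self.set_roi_offset(self._offset_y, self._offset_x)
        else:
            self.set_roi_offset(self._offset_x, self._offset_y)

    @property
    def offset_x(self):
        return self._offset_x

    @offset_x.setter
    def offset_x(self, value):
        self._offset_x = value
        self._apply()

    @property
    def offset_y(self):
        return self._offset_y

    @offset_y.setter
    def offset_y(self, value):
        self._offset_y = value
        self._apply()
```

Setting offset_x alone on a rotated camera then still lands in the correct camera dimension, because _apply always has both hidden values available.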

@ivalaginja
Collaborator

Looking at this asap.

@ehpor might also be useful to get your comments on this.

