Add workflow and notes for FGS mask implementation #88
srosagomez wants to merge 1 commit into spacetelescope:main from
Conversation
rgcosentino
left a comment
I'm a little hesitant to try to optimize this workflow and incorporate it with products that are in different coordinate systems. That seems like a messy thing to keep track of and prone to mistakes down the line.
Thoughts from others?
The script to generate the FGS mask (both bitmask and boolean mask) is `full_workflow.py`. It takes multiple existing components of the RFP to generate intermediate products (such as a dark rate image, CDS noise image, and super slope image) that are used to identify different bad pixel classes. The thresholds applied (defined at the top of the file, all in DN) are directly from GSFC. Note that the FGSMask() implementation in the RFP will be a bit different from the other existing modules. See S. R. Gomez STScI-000863 for more details on the file format that PSS expects for this mask.
1) The mask is in detector coordinates, NOT SCIENCE!
Sierra, oh my, this is interesting. OK, we should chat about this. Please include more language here and be extremely verbose and overly explicit.
Are you saying that all of the analysis for the FGSMask() is in detector coordinates and not the science coordinate system? That means to me that we can't just import the old mask and look at it, but need to do a transformation. And if we do parallel computations, we need to keep track of which modules and functions are operating in detector vs. science coordinate systems. Is that correct to your understanding?
@BradleySappington when it comes to the science coordinate system vs. the detector coordinate system, every single detector has a flip in either the X or Y direction. It's not trivial, and it's a headache to keep track of. If this is true, I'm now almost inclined to keep it isolated. Thoughts and questions for next Tuesday?
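To make the flip issue concrete, here is a minimal sketch of what a per-detector coordinate flip could look like. The detector names and flip axes below are placeholders for discussion, not the real FGS mapping; the actual mapping would have to come from the project documentation.

```python
import numpy as np

# Hypothetical mapping from detector name to the axis flipped between
# DETECTOR and SCIENCE coordinates. These entries are placeholders.
FLIP_AXIS = {
    "DET_A": 0,  # flipped in Y
    "DET_B": 1,  # flipped in X
}

def change_coord(arr, det):
    """Flip `arr` along the detector-specific axis.

    A flip is its own inverse, so running this twice undoes it,
    matching the 'run again to undo' behavior described in the diff.
    """
    return np.flip(arr, axis=FLIP_AXIS[det])
```

Because each flip is self-inverse, any module can convert in and out without tracking a direction flag, but every parallel computation still needs to know which frame its inputs are in.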
def change_coord_to_det(arr, det):
    '''
    Change the coordinates from DETECTOR to SCIENCE (run again to undo). Dependent on detector.
Ah look at this here. Ok, I see now. We might want to discuss this as a group.
datacube = FlatDataCube(data.shape[0], degree=1)

# Extract the linear coefficient
slope = datacube.fit(data)[1]
Is this a slope image? A 2D array that is 4096 x 4096?
"""
Data cube to create the slope map
--- COPIED FROM RFP MASK MODULE ---
Lightweight polynomial fitter for ramp cubes. From Bernie R. and Sarah Betti.
@BradleySappington what do you think about the code below? It's getting pretty Python-expert level, and it adds more dependencies for capabilities we already have. Do we adopt this as the base data cube class, or keep what we have and use ours instead?
I'm also wondering how the two approaches compare in computational cost.
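For comparison, the per-pixel slope fit can be done with nothing beyond NumPy. This is a hedged sketch of a lightweight ramp-cube fitter, not the actual FlatDataCube implementation (whose internals aren't shown in the diff); the function name and shape conventions here are assumptions.

```python
import numpy as np

def fit_ramp_cube(cube, degree=1):
    """Fit a degree-`degree` polynomial to each pixel of a ramp cube.

    `cube` has shape (n_reads, ny, nx); the independent variable is the
    read index. Returns an array of shape (degree + 1, ny, nx) with
    coefficients ordered constant-first, so index 1 is the per-pixel
    slope in DN per read.
    """
    n_reads, ny, nx = cube.shape
    x = np.arange(n_reads)
    # np.polyfit fits all pixels at once if the spatial axes are flattened
    coeffs = np.polyfit(x, cube.reshape(n_reads, -1), deg=degree)
    # polyfit returns highest-order coefficient first; reverse to
    # constant-first ordering, then restore the spatial shape
    return coeffs[::-1].reshape(degree + 1, ny, nx)
```

A vectorized `np.polyfit` over the flattened spatial axes is a single LAPACK least-squares solve, so the computational-cost question largely comes down to whether the alternative fitter avoids materializing the full design matrix.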
BradleySappington
left a comment
I have started moving this code to a pipeline-based approach, as if this were another reference type. Let's schedule a meeting and discuss what is happening while finishing this transfer together, so everyone is on the same page. The comments I made in the code review can be good discussion points.
    module="asdf"
)

os.environ["ROMAN_VALIDATE"] = "false"
Wondering why you added this here?
Let's keep all needed environment changes configurable outside of the code, or as part of the initial setup as outlined in the README.
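One way to keep this configurable from outside the code is to read the toggle instead of setting it. A minimal sketch, assuming the `ROMAN_VALIDATE` variable name from the diff; the helper name and the default of validation-on are assumptions:

```python
import os

def validation_enabled(default=True):
    """Return whether validation should be on, based on the
    ROMAN_VALIDATE environment variable ("false" disables it).

    The default used when the variable is unset is an assumption here;
    it should match whatever the README documents.
    """
    value = os.environ.get("ROMAN_VALIDATE")
    if value is None:
        return default
    return value.strip().lower() != "false"
```

The workflow would then do `asdf.get_config().validate_on_read = validation_enabled()` rather than forcing the value in code.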
)

os.environ["ROMAN_VALIDATE"] = "false"
asdf.get_config().validate_on_read = False
Make sure you have a TODO here to delete this when it's no longer needed.
os.environ["ROMAN_VALIDATE"] = "false"
asdf.get_config().validate_on_read = False

log_file = f"fgs_mask_log_{datetime.now().strftime('%Y%m%d_%H%M%S')}.txt"
We have logging implemented already; let's use that.
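Routing the same timestamped-file behavior through the standard `logging` module might look like the sketch below. The handler setup and logger name are assumptions; only the `fgs_mask_log_YYYYMMDD_HHMMSS.txt` filename pattern comes from the diff.

```python
import logging
from datetime import datetime

def setup_logger(name="fgs_mask"):
    """Configure a named logger writing to a timestamped file,
    mirroring the fgs_mask_log_*.txt pattern in the diff."""
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    log_file = f"fgs_mask_log_{datetime.now().strftime('%Y%m%d_%H%M%S')}.txt"
    handler = logging.FileHandler(log_file)
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(levelname)s %(message)s")
    )
    logger.addHandler(handler)
    return logger
```

Using a named logger means other modules can emit to the same destination with `logging.getLogger("fgs_mask")` instead of writing to the file directly.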
NPROCESSES = 4

# Current FGS BPM flags
class flags(np.uint32, Enum):
These should be imported from the Roman data models.
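Until the flags are imported from a shared definition, the stdlib `IntFlag` gives the same bitmask semantics without subclassing `np.uint32`. A sketch: the `HOT_FROM_GW` bit value is from the diff, but the class name and the other member are placeholders invented here.

```python
from enum import IntFlag

class FGSFlags(IntFlag):
    """Sketch of a bad-pixel bitmask enum.

    HOT_FROM_GW matches the value in the diff; GOOD and DEAD are
    placeholders. In practice these should come from the shared Roman
    data model definitions rather than being redefined locally.
    """
    GOOD = 0
    DEAD = 2**0          # placeholder bit
    HOT_FROM_GW = 2**29  # from the diff

# IntFlag members combine with bitwise OR and test with `in`
combined = FGSFlags.DEAD | FGSFlags.HOT_FROM_GW
```

`IntFlag` members behave as plain integers, so they can be OR-ed directly into a `uint32` bitmask array without casting.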
HOT_FROM_GW = 2**29


class DarkDataCube():
logging.info(e)


class FlatDataCube:
Code from Sarah Betti
'''
# Detector coordinate positions; GSFC uses detector, SOC uses science
detector_pos = {
Put these in constants.
return super_slope / np.nanmean(super_slope)


def compute_cds_noise_from_datacube(rn_cube, cds_noise_path):
Import this from readnoise.
# Thresholds and constants
E_PER_DN = 2

DEAD_SIGMA_THR = 5.0
Make this configurable.
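One way to make the thresholds configurable is a small dataclass that can be overridden at construction time (or populated from a config file) instead of editing module-level constants. The default values below mirror `E_PER_DN` and `DEAD_SIGMA_THR` from the diff; the class and field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class MaskConfig:
    """Thresholds and constants for mask generation.

    Defaults mirror the values in the diff; overriding them at
    construction time avoids editing the module to retune a run.
    """
    e_per_dn: float = 2.0
    dead_sigma_thr: float = 5.0

default_cfg = MaskConfig()
tuned_cfg = MaskConfig(dead_sigma_thr=4.0)  # override without code edits
```

Functions that currently read the module-level constants would take a `MaskConfig` argument instead, which also makes threshold choices visible in logs and tests.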
This PR contains a script with the general workflow to generate the FGS bad pixel mask. It includes a README that contains notes about the differences between the science mask and the FGS mask.