GMOS FAQ

We have collected some of the more frequently asked questions from the Gemini Help Desk archive and reproduced the solutions here. The list is arranged by GMOS mode, but within each mode the questions span several aspects of working with it, from using the ITC to data reduction. These are meant to be quick answers to a few general questions, so in some cases you may still need to use the Gemini HelpDesk to have your questions answered fully. Click on the headings below to be taken directly to the relevant subsections.



Content:

General Questions about the GMOS

Data Reduction

Questions about the GMOS Integral Field Unit (IFU)

ITC

Data Reduction

Questions about the GMOS Nod and Shuffle (N&S) Mode

Observational Setup

Data Reduction



General Questions about the GMOS


Data Reduction


Question 1
When I try to run gbias/gfreduce/gsprepare/etc. I get the error message: ERROR: parameter `either do not exist, are not MEF files, or' not found. What is the problem?
Solution 1
This message often occurs when the user has started IRAF in a directory without a login.cl file; double check from where IRAF was started. Alternatively, there may be a problem with the uparm directory. One can try unlearning the gemini package or, if that does not work, deleting the uparm files and starting over.
Question 2
In the GMOS Cookbook, the script fails because it cannot locate the bias MasterCals "MCbiasCenSp" and "MCbiasFull" that it created in the previous steps, leading to a failure when GSFLAT performs the flat normalization. Examples of errors are:

 

-- Creating GCAL Spectral Flat-Field MasterCals --
-- CenterSpec GCAL-flat normalization, non-interactive --
GPREPARE: Using MDF defined in the header 1.0arcsec
ERROR - GIREDUCE: Can not find bias frame: MCbiasCenSp
WARNING - GIREDUCE: Bad Pixel Mask filename is an empty string
                    Only saturated pixels will be flagged
ERROR - GSREDUCE: There was an apparent fatal error with GIREDUCE
ERROR - GSREDUCE: Program execution failed with 1 errors
ERROR - there was a problem running GSREDUCE. Stopping now.
ERROR - GSFLAT: Program execution failed with 1 errors.
GPREPARE: Using MDF defined in the header 1.0arcsec
ERROR - GIREDUCE: Can not find bias frame: MCbiasCenSp
WARNING - GIREDUCE: Bad Pixel Mask filename is an empty string
                    Only saturated pixels will be flagged
ERROR - GSREDUCE: There was an apparent fatal error with GIREDUCE

 

Solution 2
Possibly the problem is that the downloaded data include pre-prepared _BIAS.fits and _stack_bias.fits files that have been trimmed. This causes the CookBook scripts to execute IRAF tasks incorrectly. The GMOS CookBook assumes that only raw data are used in the scripts and takes the user through the process of reducing the raw data (not the pre-prepared files).
Question 3
I am experiencing a bug related to gbias when I attempt to apply the command to my own data and make a MasterCal bias file:

 

stsci.tools.irafglobals.IrafError: Error running IRAF task files
IRAF task terminated abnormally
ERROR (603, "Parameter not a legal boolean (try 'yes' or 'no') (sort)")

 

Solution 3
This appears to be due to a line-length limit in IRAF. The error occurs if the input to a task is 1023 or more characters (a limit not normally encountered). A possible workaround is to write the list of files to a text file and use IRAF's "@" notation.
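A minimal sketch of the workaround, in plain Python: write one file name per line to a list file, then pass "@listfile" to the IRAF task instead of a long comma-separated string. The function and file names here are illustrative, not part of any Gemini tool.

```python
# Sidestep IRAF's ~1023-character input limit by writing the file names
# to a list file and passing the task "@inlist" instead of a long string.
def write_at_list(filenames, listfile="inlist"):
    with open(listfile, "w") as f:
        f.write("\n".join(filenames) + "\n")
    return "@" + listfile  # argument to give the IRAF task, e.g. gbias
```

For example, `gbias(write_at_list(my_300_biases), "MCbias.fits", ...)` would then stay well under the limit regardless of how many input frames there are.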

Questions about the GMOS Integral Field Unit (IFU)


ITC


Question 1
I am having a big disagreement with the GMOS ITC in IFU mode and I was wondering if you can help. I am calculating the S/N in a 1"x1" aperture for a 22.7 mag/arcsec^2 extended source in the V band (9x1200s). The ITC gives me S/N = 1 total and a count rate of 4 electrons/exposure, which is a lot less than I was expecting; S/N = 22 would make more sense to me at this magnitude.
Solution 1
Several users of the IFU have confirmed that the ITC results give realistic values, although the output needs to be better explained as it is easily misinterpreted. In addition to misinterpreting the output, the user had overestimated the efficiency of the IFU, had not accounted for the IFU magnification (the IFU fore-optics project a 0.2" fibre onto 4-5 pixels, rather than the 2.7 unbinned pixels you would get without the IFU in the beam, and since the light is spread over more pixels the read noise has a bigger impact), and had wrongly assumed it is fine to bin IFU spectra 4x4. Binning IFU data spatially is not recommended, because the fibres blend together and cannot be reliably extracted; furthermore, binning spectrally by 4 results in very under-sampled data. The recommended binning for this configuration is 2x1.
Question 2
Using the ITC, I've noticed that switching from longslit to IFU on GMOS gives a drop in counts of approximately half, while the background noise increases by approximately 30%. Is this a real effect of using the IFU compared to longslit observations, or is it an artifact of the ITC S/N formula? In both cases all observing conditions/specifications were kept the same, and only the mask was changed.
Solution 2
The user had originally wished to compare the S/N in a single spatial row of pixels from a 0.5" longslit with the IFU, but the read noise from the additional rows of pixels summed by the ITC for a single IFU element was causing confusion. When using the GMOS ITC for IFU calculations, there is an option to "select multiple IFU elements along a radius with offsets of 0.00 to XX arcsec". This produces a set of plots, one plot per IFU element: one for the element at 0.0", one at 0.2", one at 0.4", etc. along the defined radius. Most users then understand that each plot refers to only one IFU element and is NOT the sum of all the IFU elements. However, this is not explained in the help files under "calculation method" or "analysis method".

The problem arises when the user selects a "uniform surface brightness" source, since in this case the option to "select multiple IFU elements along a radius with offsets of 0.00 to XX arcsec" produces only one plot. This makes sense, since for a uniform surface brightness the plots for all IFU elements would be identical. However, it leads to confusion: the user may conclude that the fluxes of all the IFU elements along the defined radius have been SUMMED together (when they have not), and hence that the GMOS IFU is worthless. Please would you place a warning in the help file under "analysis method" to point out that the plot is NOT a sum but is for only one IFU element.

Data reduction


Question 1
I found some nights of data where the reduction was crashing when trying to run gfcube. The error that is given is:

 

ERROR: gfcube: num image rows != num good fibres in MDF

 

On inspection of the MDF, when I run fxhead on the reduced flat frame (created using gfreduce) this is what I get:

 

EXT#  EXTTYPE   EXTNAME                 EXTVE  DIMENS    BITPI  INH  OBJECT
0               ergS20100210S00xx.fits         16                   GCALflat
1     BINTABLE  MDF                     1      33x750    8
2     IMAGE     SCI                     1      6218x742  -32    F   GCALflat

 

This suggests that 8 of the fibres have been ignored during the reduction (742 kept out of 750). However, when I look at the apids in the MDF, only 7 of them are set to 0, i.e. there are 743 rows. Therefore, the number of image rows does not equal the number of good fibres in the MDF.
Solution 1
Set BEAM=-1 in the MDF to ignore missing fibres in the last fibre block during extraction.
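As a sanity check, the count that gfcube compares can be reproduced by tallying the MDF rows still flagged as good, i.e. those with BEAM not equal to -1. This is a minimal plain-Python sketch (the MDF is really a FITS binary table, and the function name here is hypothetical):

```python
# gfcube requires: number of SCI image rows == number of good fibres.
# A fibre is "good" unless its MDF BEAM value has been set to -1, so
# setting BEAM = -1 for the missing fibres makes the two counts agree.
def good_fibres(mdf_beam_column):
    return sum(1 for beam in mdf_beam_column if beam != -1)
```

In the case above, flagging all 8 missing fibres with BEAM = -1 brings the good-fibre count down to 742, matching the 742 rows in the extracted SCI extension.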
Question 2
I’m reducing GMOS-IFU data obtained with the one-slit mode (red slit) and the following error occurs when I try to run gswavelength on the extracted arc lamp file.

 

Calibrating extension: 1
ERROR: Attempt to access undefined local variable `mdfrow'.
    "printf ("%03d\n", mdfrow) | scan(snum)"
    line 246: gmos$gswavelength.cl

 

Solution 2
The MASKTYP keyword was set incorrectly. Editing the MASKTYP keyword in the raw data to be equal to -1 (which is what it should be for IFU data) will solve the problem. The gprepare task has been edited to include more thorough checks and to exit with a more appropriate error if the MASKTYP keyword is accidentally set incorrectly in the future.
Question 3
I’m trying to WCS-calibrate my reduced GMOS-N IFU datacube to overplot the reconstructed velocity field on some HST data. The data reduction was completed successfully using the gemini tasks under IRAF, and everything worked more or less fine, including the final gfcube command. However, the resulting cube contains no WCS, but rather gives coordinates in arcsec relative to some centre outside the field (I’m only using one slit, although the data were actually taken in 2-slit mode). My question therefore is: how can I obtain the centre coordinate of the cube, or what are the reference pixels and coordinates? I have played around quite a bit by now (e.g. using the WCS headers in extension 0, while the actual cube is in extension 1, or including the RA/Dec offsets from the cube header), but without success so far.
Solution 3
The WCS produced by gfcube in the science extension is just a relative WCS, in arcseconds relative to the bottom-right corner of the IFU field (as defined in the GMOS "MDF" table, which specifies the fibre mapping). The WCS axes are parallel to the edges of the IFU field, rather than RA and Dec (unless PA=0). If your target was centred accurately, the RA/Dec you requested will be at the centre of the datacube (plus or minus any offsets). The difference in pixels between the centre of the cube and some other position, times CD1_1 or CD2_2, is therefore the distance in arcseconds from your target position along the relevant axis. If you want to calibrate the datacube directly in RA/Dec, here's what I think you can do (in the SCI header, using hedit):
  • Set CRPIX1 and CRPIX2 to the centre of the cube in x and y respectively; so if the cube is 110x163x2732 pixels, that would be CRPIX1=55.5, CRPIX2=82.0. If you can see your target isn't perfectly centred, you can adjust the pixel numbers accordingly.
  • Set CRVAL1 and CRVAL2 to the central RA/Dec of the IFU, i.e. your target co-ordinates plus any offsets (both in decimal degrees).
  • Set the CDi_j keywords to represent the tangent-plane increment in decimal degrees parallel to RA and Dec along each datacube axis:
    • CD1_1: degrees parallel to RA per x pixel; CD1_2: degrees parallel to RA per y pixel
    • CD2_1: degrees parallel to Dec per x pixel; CD2_2: degrees parallel to Dec per y pixel
    • If all your PAs are multiples of 90 deg, this is relatively easy; e.g. for PA=0/180, CD1_2 and CD2_1 are ~zero and CD1_1/CD2_2 are just plus or minus the pixel size. I believe the reconstructed datacube has the same orientation as the GMOS field (so you can copy the signs from the CD keywords in extension 0), but the size of the increment needs adjusting to your datacube pixel size rather than the GMOS detector pixel size. Otherwise, for intermediate PAs, you would have to multiply by sin(PA) and cos(PA) to project the pixel increment onto each of your rotated co-ordinate axes.
  • Make sure you have CTYPE1=RA---TAN and CTYPE2=DEC--TAN in the header (note the correct number of dashes!), so that your FITS reader knows to project the above tangent-plane increment parallel to RA onto the equator. If you don't have these keywords, I believe your RA increment will be wrong by a factor of 15*cos(dec); conversely, you don't need to include the latter factor explicitly if you do have the keywords.
Thus in your example with PA=90 and 0.03” pixels, I think this is what you need:
  • CRPIX1 = 55.5, CRPIX2 = 82.0
  • CRVAL1 = 49.938125, CRVAL2 = 41.51680556
  • CD1_1 = 0.0, CD1_2 = -8.33333333333333E-06
  • CD2_1 = 8.33333333333333E-06, CD2_2 = 0.0
  • CTYPE1 = RA---TAN, CTYPE2 = DEC--TAN
  • (Please double check the signs etc. & make sure the result looks sensible.)
I’m not a WCS expert, but this is my understanding after some head scratching to figure out where the projection of RA to the equator comes in. In any case, it reproduces the same conventions used for GMOS imaging. If you need further information on what the WCS keywords mean, you can find them in the FITS standard.
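The sin/cos projection for an arbitrary PA can be written down explicitly. This is an illustrative sketch, not a Gemini tool: the sign convention is chosen to reproduce the PA=90 worked example above, and as the answer itself warns, you should always double check the signs against your own data.

```python
import math

def cd_matrix(pix_arcsec, pa_deg):
    """CD keywords for a datacube pixel scale (arcsec) and PA (degrees).

    Sign convention is an assumption chosen to match the PA=90 example
    above (CD1_2 negative, CD2_1 positive); verify against your cube.
    """
    s = pix_arcsec / 3600.0  # pixel size in decimal degrees
    c = math.cos(math.radians(pa_deg))
    si = math.sin(math.radians(pa_deg))
    return {"CD1_1": s * c, "CD1_2": -s * si,
            "CD2_1": s * si, "CD2_2": s * c}
```

For PA=90 and 0.03" pixels this returns CD1_1 ≈ 0, CD1_2 = -8.33e-06, CD2_1 = +8.33e-06, CD2_2 ≈ 0, i.e. the values quoted in the example.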
Question 4
I am reducing Gemini-N GMOS IFU observations, 2-slit mode. I am using the IRAF tools in the gemini package. I have successfully done bias, flat-fielding, cosmic rays, wavelength calibration, and rectification, and I am ready to sky subtract now. Part of my 2D spectrum shows significantly higher noise than the rest; it looks like it is in the B slit (not sure about this). Is this increased noise a known feature of the array, or do you think something in my reductions introduced it? It appears in all objects, which use different GCALs but share one twilight flat and one bias.
Solution 4
Changing the gfreduce.weights parameter from the default value (“variance”) to “none” solves the issue. (Thanks to Mark Swinbank for this tip).
Question 5
It seems that the ITC uses a value of 1 for (source aperture area / sky aperture area) in calculating the S/N for the IFU. I would have thought that a ratio closer to 1/500 (500 being the number of sky IFU elements) should be used. If this is correct, then in the background-limited case the ITC would underestimate the S/N by a factor approaching SQRT(2). Is this correct? Changing the sky aperture in the analysis section doesn't seem to affect the S/N at all when in IFU mode (it produces an identical S/N ASCII file), e.g. for part of the S/N ASCII file for a 1 hr integration on a USB R=21 mag/arcsec^2 spiral.
Solution 5
Let me try to explain how the resulting S/N is presented and how the size of the sky aperture affects this output. The red curve gives you the S/N for a single exposure - ignoring the sky subtraction. So this is what you would get if you could do completely noiseless sky subtraction for a single exposure. The yellow curve gives you the final S/N, including the noise contribution from the sky subtraction. So if you have only a single exposure and the default sky aperture of 5xobject_aperture, then the final S/N curve (yellow) is below the curve for S/N for a single exposure (red). If you now increase the sky aperture to say 500xobject_aperture, then the red curve will be the same as before (single exposure ignoring the sky subtraction), but the yellow curve will show a higher S/N than before. This reflects that the sky subtraction contributes less noise than in the previous case. Another example would be if you specified multiple exposures. In this case the final S/N curve will show you the total S/N from combining all the exposures _and_ sky subtracting them with the aperture that you specify. I hope this clears up the confusion.
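The effect of the sky-aperture ratio on the final S/N can be illustrated with the standard noise budget (this is a textbook sketch, not the actual ITC formula): estimating the sky from an aperture k times larger than the object aperture adds an extra variance term B/k to the subtraction.

```python
import math

def snr(source_counts, sky_counts, k):
    """Illustrative background-limited S/N after sky subtraction.

    k = sky_aperture_area / object_aperture_area. The sky estimate from a
    k-times-larger aperture contributes variance sky_counts / k, so large
    k recovers nearly noiseless sky subtraction (the "red curve" case).
    """
    noise = math.sqrt(source_counts + sky_counts + sky_counts / k)
    return source_counts / noise
```

In the sky-limited regime, going from k=1 to k=500 improves the final S/N by a factor approaching sqrt(2), exactly the discrepancy the question asks about.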
Question 6
Fringing: the GMOS website states that fringing is a problem redward of 750nm. How serious a problem is this at 850nm? Is there a standard way to deal with this issue (by dithering, etc)?
Solution 6
Regarding the fringing in the red: we have only limited experience so far with reducing IFU data taken in the far red. I recommend that you dither the observations using at least 4 dither positions, with dithers large enough to move the object by at least 2 fibres, e.g. 0.4" dithers. This should allow you to determine the fringing and subtract it out. Make sure you specify that all 4 dither positions should be observed together: put them in one observation with an offset iterator, and if the total time required for the observation is more than 2-2.5 hours, also put a note in the observation stating that we are not allowed to split it.
Question 7
I have had a question from a UK PI about how to offset the IFU by one fibre spacing, and I just want to check that I understand the layout of the fibre array on the sky. It looks from the diagrams on the web pages that an offset parallel to the short axis of the IFU (in 2-slit mode) would move along a column of fibres. So an offset of 0.2" in q would move the target along by one fibre. Is that correct?
Solution 7
Yes, you are correct. If you offset parallel to the short axis of the IFU (in 2-slit mode) by 0.2", you will offset by exactly one fibre spacing.

Questions about the GMOS Nod and Shuffle (N&S) Mode


Observational Setup


Question 1
Can one do N&S MOS microshuffling with a few tilted slits in the mask as well as normal slits? I think this should be possible as long as the projected y length of the tilted slit is the same as the shuffle distance.
Solution 1
For the shuffling part you are correct in your assessment: it is the length of the slit in the projected y pixel direction that matters, and this has to be the same for all of the slits when micro-shuffling. GMMPS already assumes that the slit length is in projected y pixels, so tilted slits are longer than non-tilted slits in actual length. This is as it should be for micro-shuffling; I believe there is no issue there, and it is automatically supported in GMMPS. However, users usually wish the object to be in the slit in both nod positions. But perhaps you in fact want to do nod and shuffle where the object is at (0,0) for the first nod, the second nod position is large, the objects are out of the slit, and the second nod/shuffle position is used only for sky subtraction. This will work fine! There are not many requests for Nod & Shuffle in this mode because of the high overhead. Note there is no difference between micro- and band-shuffling for this issue; the only question is whether or not you want the object to nod along the slitlet.
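The geometry can be sketched as follows. This is my assumption of how the tilt is defined (an angle away from the spatial y axis), not taken from the GMMPS source: under that assumption a slit of physical length L projects to L*cos(tilt) in y, so a tilted slit must be physically longer to keep the projected y length equal to the shuffle distance.

```python
import math

def physical_length(shuffle_distance, tilt_deg):
    """Physical slit length needed so that the projected y length equals
    the shuffle distance, assuming tilt is measured from the y axis.
    (Hypothetical geometry sketch; check against GMMPS conventions.)"""
    return shuffle_distance / math.cos(math.radians(tilt_deg))
```

An untilted slit needs exactly the shuffle distance; a 60-degree tilt would double the required physical length under this convention.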
Question 2
I just noticed that the offset iterator is disabled if you are using Nod and Shuffle. I can see why this is sensible for N&S with MOS, but how about N&S with longslit? I thought that a good way to avoid charge trap problems would be to have small offsets along the slit between each N&S observation. However I can’t define this in the OT. Is there some reason why this is a bad idea? (I realise they can use the DTAX offset, I just want to know if there is a problem with the other way).
Solution 2
You raise an excellent point, but as you noted this mode is just not supported at the present time. The reason has to do with the way the offsets in the Nod & Shuffle component are implemented in the telescope control software: they are absolute, not relative, offsets. So even if you could put an offset iterator into the Nod & Shuffle observation, when the exposure started the offset from your iterator would simply be cleared when the telescope moved to the first nod position given in the Nod & Shuffle component. For now your PI will just have to use the DTA-X iterator to achieve the same thing. If that does not give enough positions (and it might not, depending on the length of the program and number of exposures), you might suggest that the PI break their N&S observation into two observations, each with its own N&S component, because you can define different nods for each of those. Couple that with a DTA-X iterator and you should have enough steps. A tip for the DTA-X iterator: bigger steps are better, since the charge traps are more than 1 pixel high. For GMOS-N I recommend steps of +6, +3, 0, -3 and -6. If you are binning 2x2 then you will want even steps, so +6, +2, -2, -6. If you are using GMOS-S the steps will be different because the range there is not quite as big (I am not sure about the details; a GMOS-S person would have to help you there).
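The GMOS-N step patterns quoted above can be captured in a small helper; a minimal sketch, with the patterns hard-coded from the answer (the function name is hypothetical, and GMOS-S would need different values):

```python
def dtax_steps(binning):
    """DTA-X offsets (detector pixels) for GMOS-N, per the advice above.

    For 2x2 binning every step must be an even number of unbinned pixels
    so that each offset lands on a whole binned pixel.
    """
    patterns = {1: [6, 3, 0, -3, -6],   # unbinned: 5 positions
                2: [6, 2, -2, -6]}      # 2x2 binning: even steps only
    steps = patterns[binning]
    assert all(s % binning == 0 for s in steps)
    return steps
```

The divisibility check makes the constraint explicit: with 2x2 binning, a step of +3 unbinned pixels would shift the traps by 1.5 binned pixels, defeating the purpose.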

Data Reduction


Question 1
Are there reduction examples for GMOS N&S? I have the GMOS examples given in the Gemini IRAF package, but I am trying to figure out how to combine each N&S exposure using 'gnscombine'.
Solution 1
In addition to the material included in the gemini IRAF help pages, you might want to read the appendix of the first Gemini Deep Deep Survey paper, Abraham et al. 2004, AJ, 127, 2455. They describe the steps they took in doing the reductions, including running the gnscombine task (which I believe they wrote). One step which they do not implement, but which you might need depending on your data, is the application of a flat field. This would be done after the sky subtraction step (gnsskysub), but before the offsetting and combining (gnscombine).
Question 2
I could not find any mention in the GMOS MOS cookbooks of how to subtract dark frames, which is indeed relevant for Nod & Shuffle observations.
Solution 2
The best place to subtract the dark is just after the bias correction. In fact you can get good results if you simply use the combined N&S dark instead of a bias, but the proper order should probably be:

1) Overscan correct both the science data and the N&S darks. (This is itself a departure from normal procedures, since we used to not recommend overscan correction, but we have found the bias level does float around a bit; see the GMOS Hot News for details.) Be careful to check for contamination of the overscan region from bright targets, but for most N&S data it should be possible to get a good overscan correction using a constant fit to uncontaminated overscan regions.

1a) Bias correct both the science data and the N&S darks using an overscan-corrected combined bias image. This step can probably be skipped altogether, since when you subtract the combined N&S dark from the individual science exposures you are effectively also subtracting off the bias.

2) Produce a combined N&S dark. (The task gdark does not yet exist, and at this point I do not know of any plans to create/release a gdark task.)

3) Subtract the combined N&S dark from the individual overscan-corrected (and bias-corrected, if the N&S dark is bias corrected) science data.

I do recommend you follow this sequence instead of subtracting the N&S dark after the sky subtraction has been done by gnsskysub. If the charge traps are still present when you do the sky subtraction, you will get negative charge-trap signatures in your sky-subtracted data (from the charge traps in the nod B image), and these will not subtract out if you then do the N&S dark correction.
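The ordering above can be sketched schematically in plain Python, with flat lists standing in for image frames (all function and variable names here are hypothetical; real reductions use the Gemini IRAF tasks on FITS files):

```python
def overscan_correct(frame, overscan_level):
    # Step 1: subtract a constant fit to the uncontaminated overscan.
    return [pix - overscan_level for pix in frame]

def combine(frames):
    # Step 2: average the overscan-corrected N&S darks.
    n = len(frames)
    return [sum(pix) / n for pix in zip(*frames)]

def reduce_ns(science, ns_darks, overscan_level):
    # Overscan-correct science and darks, then subtract the combined
    # N&S dark (step 3) -- before, not after, gnsskysub.
    science = overscan_correct(science, overscan_level)
    darks = [overscan_correct(d, overscan_level) for d in ns_darks]
    combined_dark = combine(darks)
    return [s - d for s, d in zip(science, combined_dark)]
```

Note that step 1a (explicit bias subtraction) is omitted here, matching the remark that subtracting the combined N&S dark effectively removes the bias as well.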
Question 3
I have 25 Nod & Shuffle darks (60 sec exposure time each) that were taken on different days. For each of the days I have found the original bias frames by searching the archive. However, when I bias-subtract the darks I do not get exactly the same level of counts in the resultant images: they seem to vary from -0.6 to 13.3 mean counts. Can you tell me what the normal mean level and variation between different exposures should be, and what may be the problem?
Solution 3
I can only conclude that these darks show that the dark count is not quite as stable with GMOS as we had thought. I will investigate further to see if this is a one-time occurrence during these months or a heretofore unrecognized "feature" of GMOS-N. In any case the darks you have now do not show this effect, and I believe even the darks that did have this offset would still be useful, as long as you subtract this constant term from each dark before average- or median-combining it with the other darks to produce your final high signal-to-noise dark for correcting the charge traps in your science data. A somewhat variable dark level will not affect your science data, because the dark level, whatever it is, will subtract off when you do the sky subtraction. So even though I don't have a satisfactory explanation, you have, I hope, a way to deal with the old darks, and you can request that new darks be taken.
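The suggestion above can be sketched in a few lines: remove each dark's constant offset before median-combining, so a drifting mean level does not bias the final charge-trap template. Frames are plain lists of pixel values here, and the function name is hypothetical:

```python
from statistics import mean, median

def combine_darks(darks):
    """Median-combine N&S darks after zeroing each frame's mean level,
    so a variable dark level between exposures drops out."""
    zeroed = [[pix - mean(d) for pix in d] for d in darks]
    return [median(col) for col in zip(*zeroed)]
```

Two darks whose levels differ by a constant 10 counts then combine into the same structure as if their levels had matched.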

 

Updated on December 13, 2021, 7:24 am