&Viz.NoteHead "The PMM USNO-A1.0 Catalogue">
The details listed below are extracted from the set of 10 CD-ROMs kindly supplied to CDS by Dave Monet at the ftp://ftp.nofs.navy.mil/usnoa site. Please refer to http://ftp.nofs.navy.mil/projects/pmm/ for the most recent details about the PMM products.
A compression technique adapted to the PMM USNO-A1.0 was used for the CDS installation; it preserves direct-access capability while shrinking the catalog to 3.6 Gbytes.
This file is the first level of the on-board documentation. Please read this file first. Near the end of this file is a list of other files that might be helpful.

Questions and comments should be directed to
    Dave Monet
    US Naval Observatory Flagstaff Station
    PO Box 1149 (US Mail Only)
    West Highway 66 (FedEx, UPS, etc.)
    Flagstaff AZ 86002
    Voice: 520-779-5132
    FAX: 520-774-3626
    e-mail:
Please understand that the level of support provided will be commensurate with the level of effort expended. I am too busy to do your homework for you. E-mail works better than the phone.

============ Title ====================

USNO-A V1.0   A Catalog of Astrometric Standards

David Monet a), Alan Bird a), Blaise Canzian b), Hugh Harris a), Neill Reid c), Albert Rhodes a), Stephen Sell a), Harold Ables d), Conard Dahn a), Harry Guetter a), Arne Henden b), Sandra Leggett e), Harold Levison f), Christian Luginbuhl a), Joan Martini a), Alice Monet a), Jeffrey Pier a), Betty Riepe a), Ronald Stone a), Frederick Vrba a), Richard Walker a)

a) U.S. Naval Observatory Flagstaff Station (USNOFS)
b) Universities Space Research Association (USRA) stationed at USNOFS
c) Palomar Observatory, California Institute of Technology
d) USNOFS, now retired
e) USRA, now at University of Hawaii
f) USRA, now at Planetary Science Institute, Boulder CO

============== Abstract =======================

USNO-A is a catalog of 488,006,860 sources whose positions can be used for astrometric references. These sources were detected by the Precision Measuring Machine (PMM) built and operated by the U. S. Naval Observatory Flagstaff Station during the scanning and processing of the Palomar Observatory Sky Survey I (POSS-I) O and E plates, the UK Science Research Council SRC-J survey plates, and the European Southern Observatory ESO-R survey plates.
The PMM detects and processes at and beyond the nominal limiting magnitude of these surveys, but the large number of spurious detections requires that a filter be used to eliminate as many as possible. USNO-A's sole inclusion requirement was that there be spatially coincident detections (within a 2 arcsecond radius aperture) on the blue and red survey plate. For field centers of -30 degrees and above, data come from POSS-I plates, while data from field centers of -35 and below come from SRC-J and ESO-R plates. USNO-A presents right ascension and south polar distance in the system of J2000 at the epoch of the survey blue plate for each object, and lists an estimate of the blue and red magnitude. For POSS-I sources, the photometric system is the photographic system defined by the O and E emulsions and filters, while southern sources are measured in the photometric system defined by the IIIa-J and IIIa-F emulsions. It is believed that the typical astrometric error is about 0.25 arcseconds and that the typical photometric error is about 0.25 magnitudes. However, these error estimates are dominated by the systematic errors incorporated in the calibration procedure, and some fields may be significantly worse. Should users be willing to locally recalibrate the astrometry and photometry, the errors arising from the PMM are believed to be in the range of 0.15 arcsecond and 0.15 magnitude. To avoid the necessity of consulting many catalogs, objects brighter than 11th magnitude that appear in the Guide Star Catalog that were not detected by the PMM were inserted. USNO-A covers the entire sky, and goes as deep as O=21, E=20, J=22, and F=21 for objects with appropriate colors. The limiting magnitude is brighter for objects with extreme colors, and follows from the requirement for a detection on both the blue and red survey plate. Although it covers the entire sky, there are holes in the catalog in the vicinity of bright stars, regions of nebulosity, crowded fields, etc. 
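Since the catalog tabulates south polar distance (SPD) rather than declination, a trivial conversion (dec = SPD - 90 degrees) is needed when comparing positions with other catalogs. A minimal sketch:

```python
def spd_to_dec(spd_deg):
    """Convert south polar distance (degrees measured from the south
    celestial pole) to the usual J2000 declination: dec = SPD - 90."""
    if not 0.0 <= spd_deg <= 180.0:
        raise ValueError("SPD must lie between 0 and 180 degrees")
    return spd_deg - 90.0

print(spd_to_dec(0.0))    # -90.0 (south celestial pole)
print(spd_to_dec(90.0))   # 0.0 (celestial equator)
print(spd_to_dec(180.0))  # 90.0 (north celestial pole)
```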
============== Statement of Intellectual Property Rights ====================

This catalog contains data from a diverse collection of photographs, reductions, and catalogs. A large number of different organizations claim copyright and/or intellectual property rights on the various components. Although the details differ, all permissions for usage of data are contingent on unrestricted access. Distribution and/or other direct costs can be recovered, but re-packaging, re-formatting, or similar activities, especially for commercial purposes, are not permitted except as authorized by the U. S. Naval Observatory in consultation with the other institutions listed below and as appropriate.

1) Palomar Observatory, National Geographic Society, and California Institute of Technology own Palomar Observatory Sky Surveys I and II.
2) European Southern Observatory owns the ESO-R survey.
3) The UK Particle Physics and Astronomy Research Council (formerly Science and Engineering Research Council and before that Science Research Council) owns the SRC-J survey.
4) Space Telescope Science Institute (and AURA and NASA) own the Guide Star Catalog.
5) US Naval Observatory owns the digitization of the plates and the object parameters and catalogs made from them.
6) The Anglo-Australian Telescope Board retains the copyright to plates taken with the U.K. Schmidt Telescope after 15 June 1988.

In particular, we reserve the rights to compile and distribute zone catalogs, summary catalogs, or other significant pieces of USNO-A beyond that which is needed to support personal or institutional scientific or educational projects. Included in this is the preparation and distribution of images generated from the USNO-A catalog beyond those needed for finding charts and similar purposes. In summary, you are welcome to use the catalog, but significant redistribution, extraction, and image rights are reserved, and permission needs to be obtained before using this catalog for such purposes.
Please don't make us wake up the lawyers. Please treat these data and catalogs in the spirit in which they were created. They are for non-profit educational and scientific pursuits, and not for third parties to remarket for a profit.

======================= Request for Citations =================================

It is an unfortunate aspect of modern funding that impersonal and statistical measures are used to assess the productivity and usefulness of programs. If you benefited from using USNO-A, we ask that you give the catalog a citation. By doing so, we may be able to justify the expense of continued production of catalogs. Whenever possible, it would be appropriate to note which survey (POSS-I, ESO, and/or SRC) supplied the relevant data, since these surveys are measured, in part, by the extent to which they serve the community.

======================= How To Proceed ====================================

There is no paper copy of this catalog. All documentation that exists has been put somewhere on the CD-ROM set. A reasonable strategy for learning about and using this catalog is the following.

read.me     - Contains a brief description and the necessary statement of intellectual property rights.
read.use    - Contains a description of the format of the various files and what they contain. A companion file, demo.tar, contains the source code for a simple program that uses this catalog.
catalog.tar - Contains the ASCII text files that describe each of the plates taken as part of the various surveys. Of particular interest is the epoch of each plate, since proper motions have not been computed and applied to the position of each source.
read.ast    - Contains a description of the astrometric reduction procedure.
read.pht    - Contains a description of the photometric reduction procedure.
read.pmm    - Contains a description of the PMM hardware and software.
sg1.tar     - Contains the source code for all software needed to run the PMM and do the real time processing.
Included in this file are the bias and flat field frames as well as the coefficients used in the geometric calibration procedure.
binary.tar  - Contains the source code for about half of the routines used to reduce the raw PMM data and produce this catalog.
newbin.tar  - Contains the source code for the rest of the routines used to reduce the raw PMM data and produce this catalog.

================ Table of Contents =====================

File                      CD-ROM
----------------------------------------------
zone0000.acc/.cat            1
zone0075.acc/.cat            1
zone0150.acc/.cat            6
zone0225.acc/.cat            5
zone0300.acc/.cat            3
zone0375.acc/.cat            2
zone0450.acc/.cat            1
zone0525.acc/.cat            4
zone0600.acc/.cat            6
zone0675.acc/.cat            5
zone0750.acc/.cat            7
zone0825.acc/.cat           10
zone0900.acc/.cat            8
zone0975.acc/.cat            7
zone1050.acc/.cat            8
zone1125.acc/.cat            9
zone1200.acc/.cat            9
zone1275.acc/.cat            4
zone1350.acc/.cat           10
zone1425.acc/.cat            3
zone1500.acc/.cat            2
zone1575.acc/.cat            6
zone1650.acc/.cat            2
zone1725.acc/.cat            3
.lut files for all zones     7
pmmgsc.len                   7
read.me                      1
read.ast                     1
read.pht                     1
read.pmm                     1
read.use                     1
demo.tar                     1
catalog.tar                  2
newbin.tar                   6
binary.tar                   8
sg1.tar                      8

================ Other Notices =====================

The source codes are the intellectual property of the U. S. Naval Observatory and are provided so that the expert user can answer detailed questions about how the catalog was constructed. Casual users should avoid them because they contain no instructions as to where various useful things are hidden. Release of the source code is made to support such investigations only, and is not intended for commercial or other non-professional applications. The source code is a protected property, and illegal usage is prohibited. If in doubt, please contact Dave Monet for clarifications and protections.

The PMM program has been supported through internal funding by USNO, and by funding provided by the U. S. Air Force through the Space Surveillance Network Improvement Program.
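For readers writing access software: the zone file names in the Table of Contents appear to encode the south polar distance (SPD = dec + 90) of the lower edge of each 7.5-degree declination band, in tenths of a degree, so that 24 zones (zone0000 through zone1725) cover the whole sky. This mapping is an inference from the table; read.use should be consulted for the authoritative description. A minimal sketch:

```python
def zone_file(dec_deg):
    """Return the zone file base name covering a given J2000 declination.

    Assumes (consistent with the Table of Contents) that each zone spans
    7.5 degrees of south polar distance and that the four digits in the
    name are the zone's lower SPD edge in tenths of a degree.
    """
    if not -90.0 <= dec_deg <= 90.0:
        raise ValueError("declination out of range")
    spd = dec_deg + 90.0                 # south polar distance in degrees
    zone = min(int(spd // 7.5), 23)      # 24 zones: 0000, 0075, ..., 1725
    return "zone%04d" % (zone * 75)

print(zone_file(-90.0))  # zone0000
print(zone_file(0.0))    # zone0900
print(zone_file(89.9))   # zone1725
```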
This work is based partly on photographic plates obtained at the Palomar Observatory 48-inch Oschin Telescope for the First and Second Palomar Observatory Sky Surveys, which were funded by the Eastman Kodak Company, the National Geographic Society, the Samuel Oschin Foundation, the Alfred Sloan Foundation, National Science Foundation grants AST84-08225, AST87-19465, AST90-23115 and AST93-18984, and National Aeronautics and Space Administration grants NGL 05002140 and NAGW 1710.

This catalog is based, in part, upon original material from the UK Schmidt Telescope, copyright in which is owned by the UK Particle Physics and Astronomy Research Council. No charge beyond recovery of costs has been made by USNO or PPARC for the provision of these data. These data are provided to the recipient for its purpose without restriction, except on the condition that the data should not be replicated, in whole or in part, and passed on for profit. The plates for the SRC-J survey were taken with the UK Schmidt Telescope (UKST). The copyright inherent in plate material obtained with the UK Schmidt Telescope after 15 June 1988 rests with the Anglo-Australian Telescope Board.

The European Southern Observatory holds the copyright to the ESO-R Survey plate material, and USNO scanned these plates under an agreement with ESO. This agreement states, in part, that the data derived from these scans shall be available to the public without restriction and without charge beyond recovery of the cost of distribution.

USNO would like to thank NOAO/KPNO for permission to borrow the glass copies of the SRC and ESO surveys for scanning, and Bill Schoening and Richard Green for their assistance.
The following text is copied from the UJ1.0 CD-ROM and gives an overview of the PMM and its programs. In an attempt to satisfy the serious user of this catalog, the source code for the PMM is found in the sg1.tar file somewhere in this CD-ROM set. This file contains the source code for all pieces of the executable image as well as the key data files used to calibrate various pieces of the PMM. References to code in this section point to files in sg1.tar.

The Precision Measuring Machine (PMM) was designed to digitize and reduce large quantities of photographic data. It differs from previous designs in the manner by which the plates are digitized and in that it reduces the pixel data to produce a catalog in real time. This section gives an introduction to the design, hardware, and software of the PMM. For those wishing to pursue issues in greater detail, the software used to control the PMM may be found in the directory exec/c24, and all software used to acquire and process the image data is found in the other directories under exec/ (processing begins with exec/misc/f_parse).

High-speed photographic plate digitization has been accomplished using three different approaches. Many machines (APS, APM, PDS, etc.) have a single illumination beam and a single-channel detector. This approach can offer extremely accurate microdensitometry at slow scanning speeds (PDS) and has been used by intermediate-speed machines (APS, APM, etc.) that have produced many useful scientific results. The second approach is to use a 1-dimensional array sensor, such as the SuperCOSMOS design. These offer much higher scanning rates but suffer from more scattered light than true microphotometers. The third approach (PMM) is to use a 2-dimensional array sensor, such as a CCD. This offers yet higher throughput at the expense of more scattered light. The 1-D and 2-D designs are new enough that detailed comparisons with single-pixel designs have yet to be done.
Of the three designs, only the 2-dimensional array design separates image acquisition from mechanical motion. In this approach, the platen is stepped and stopped, its position is accurately measured, and then a CCD camera takes a picture of a region of the photographic material. In this manner, the transmitted light (plates and films) or reflected light (prints) is digitized and sent to a computer for processing and analysis. The mechanical system is not required to move the platen in a precise direction or at a precise speed while image data are being taken. Therefore, the mechanical system is much easier to build and keep operational, and platen sizes can be much larger (a feature needed to minimize the thermal and mechanical impact of replacing the photographic materials).

A. Hardware

The PMM design is conceptually simple. The mechanical system executes a step-and-stop cycle, and then reports its position to the host computer. A CCD camera takes an exposure of the "footprint" in its field of view, and the signal is then read, digitized, and passed to the host computer. Once the image is in the computer, the mechanical motion may be started, and image processing and mechanical motion can occur simultaneously. In practice, the PMM design is a bit more complicated because it has two parallel channels for yet higher throughput. The various subsystems are the following.

a) The mechanical system was manufactured by the Anorad Corporation of Hauppauge NY to specifications drawn up by USNOFS astronomers and their consultant William van Altena. Its features include the following:
   i) 30x40-inch useful measuring area.
   ii) granite components for stability.
   iii) air bearings for removal of friction.
   iv) XY stage position sensed by laser interferometers.
   v) Z and A platforms for above/below stage instruments.
   vi) ball screw motion in X at 4 inch/second maximum speed.
   vii) brushless DC motors in Y at 2 inch/second maximum speed.
   viii) computer control of all motions.
   ix) two laser micrometers mounted on the Z stage to measure distance to photographic materials.
   x) two CCD cameras (discussed below).
In addition, it has a single-channel microphotometer system built by Perkin Elmer, but that system is not used for POSS plates. It is controlled by a dedicated PC that communicates with the outside world over an RS-232 interface. The PMM is housed in a Class 100,000 (nominal) clean room, and the thermal control is a nominal plus or minus one degree Fahrenheit. Actual performance is much better over the 80 minutes needed to scan a pair of plates. The temperature is usually stable to +/-0.2 degrees, and short-term tests show a repeatability of 0.2 microns over areas the size of POSS plates. Thermal information is recorded during the scan and is part of the archive.

b) The images are acquired and digitized by two CCD cameras made by the Kodak Remote Sensing Division (formerly Videk). Each has a format of 1394x1037 and a useful area of 1312x1033 pixels. The pixels are squares 6.8 microns on a side, with no dead space between pixels and fewer than 100 bad pixels in the array (Class 0). A flash analog-to-digital converter is part of the camera, and the image is read and digitized with 8-bit resolution at a rate of 100 nanoseconds per pixel. Printing-Nikkor lenses of 95 millimeter focal length are used to focus the sensor on the photographic plate at a magnification of 2:1. The resolution of these lenses exceeds 250 lines per millimeter, and they have essentially zero geometric and chromatic distortion when used at 2:1. The illuminator consists of a photometrically stabilized light source, a circular neutral density filter to compensate for the diffuse density of the plate, a fiber bundle, and a Köhler illuminator to minimize the diffuse component of illumination. Each camera's light path is separate except for the single light source.

c) Each camera has its own dedicated computer and related peripherals.
The digital output of the camera is fed to a 100 megabit per second optical fiber for transmission to the computer room, where a matching receiver converts it back into an 8-bit wide, 10 megabyte per second parallel digital signal. This signal is interfaced to a Silicon Graphics 4D/440S computer using an Ironics 3272 Data Transporter attached to the VME bus. This system supports the synchronous transfer of 1.4 megabytes in 0.14 seconds with an undetectably small error rate. The 4D/440S supports DMA from the VME bus into its main memory without an additional buffer. Once in the computer, the PMM software (discussed below) does whatever is appropriate, and, if the user desires, the pixel data can be transmitted across a fiber-optically linked SCSI bus to disk or tape drives located in the PMM room. This is particularly convenient for the operator.

d) A DEC MicroVAX-II computer acts as system synchronizer, and does little more than coordinate all steps in the motion and processing. This operation is not as trivial as it sounds.

e) The user interacts with the PMM using any X-window terminal by logging into the MicroVAX and starting the control program. The control program logs into the Anorad PC and each of the processing computers across RS-232 (it is too old to have X). These computers open X-windows on the user's terminal, and all interaction with them (including image display) avoids the MicroVAX. A simple interpretive language was written for the MicroVAX, and plates are measured by executing sequences of commands. Sequences may be found in exec/c24/seqNNN.pmm. The sequence for measuring 4 UJ plates is seq485.pmm.

B. Plate Measuring Sequence

The sequence for measuring plates is designed to minimize human intervention. Each of the two platens holds four POSS plates. While one is being measured, the other is loaded so that the plates can come to thermal equilibrium. The measurement sequence consists of the following phases.
a) The camera is positioned over the middle of the plate and the neutral density filter is set to maximum (D=3.0). A sequence of fixed-length exposures is made as the density is reduced, and the optimum value for the exposure is found. Due to limitations of the camera interface, the exposure time has a granularity of one millisecond and must be in the range of 2 to 127 milliseconds. Once the optimum neutral density is found, it is kept at that value for the entire plate. Changes in diffuse density are followed by changing the exposure time.

b) The Z-stage is fixed at a nominal value, and the plate pair (1/2 or 3/4) is scanned to obtain the distance between the Z laser micrometers and the surface of the plate. The XY stage is positioned at a Y value that will later be used for the digitization footprints, and then driven at high speed in X. As the stage moves, the micrometer and the Anorad PC are sampled to determine the Z distance as a function of X position. This procedure is repeated for the sequence of Y positions, and a 2-dimensional map of Z distance is obtained.

c) The camera is positioned over the middle of the plate, and a sequence of exposures is taken at different values of the Z coordinate. For each exposure, a measure of the sky granularity is computed, and interpolation is used to find the Z coordinate that maximizes the granularity. This establishes the "best" focus in an impersonal manner, and it appears to be stable to plus or minus 50 microns in Z.

d) A sequence of frames is taken of the central area of the plate with increments of 1.0 millimeter between each, and the standard star finding and centering algorithm is run on each frame. After all frames have been taken, the nominal value of the plate scale is used to identify unique stars seen on each of several frames. Once the set of measures is isolated, software computes a revised estimate of the plate scale.
This revised estimate can be considered the difference between the layer of the emulsion that reflected the laser micrometer beam on the reference plate and on the current plate. When the plate is scanned, the Z stage is driven to the position appropriate for each footprint, which is the sum of the "best focus" plus the difference between the current location and the central location as determined by the laser micrometers. After the positions have been measured, a linear expansion is applied to the pixel coordinates for each star to remove the difference in the (observed-nominal) plate scale.

At first glance, this algorithm seems quite complicated, but determination of the plate scale is critical to the astrometric integrity of the PMM. To measure to 0.1 arcsecond, the scale must be known to 0.008%, and the following difficulties contribute to uncertainties in the plate scale.

a) No technology better than a laser micrometer was found to measure the distance between the Z stage and the plate. Unfortunately, the laser is somewhat sensitive to the reflectance of the surface, and the range of diffuse densities encountered during the scanning of the UJ plates, about 0.1 to 2.5, causes an uncertainty in where the micrometer is measuring. The only competing technology, touch probes, was considered too risky for use with original POSS-I and -II plates.

b) The POSS plates are not flat, and no reasonable plate hold-down mechanism was proposed. This problem is a minor annoyance for UJ and POSS-II plates because the typical +/-200 microns could be removed by software. Unfortunately, the +/-1 millimeter or more seen on the POSS-I plates causes the images to be out of focus, and a surface-following algorithm is required.

Unfortunately, the elaborate focus and scale determination routines developed to measure POSS-I and POSS-II plates were unreliable for measuring the UJ plates. Many UJ plates had diffuse densities so low that the sky and the noise in the sky were extremely difficult to measure.
To the human observers, these plates seem as clear as window glass. Since the UJ exposures were only 3 minutes, many plates had so few stars in a single footprint that the scale determination routine got lost. In either case, the error induced by a lost algorithm was much larger than that from simply measuring the focus on a good plate and using that value for the UJ plates. This was done, so the list in the preceding paragraph must be extended.

c) The difference in focus between the current plate and that used to determine the CCD camera scale is not known. Note that the PMM should follow the current plate properly, since that measurement is only the difference between the local and central values determined by the laser micrometer. What is not (or only poorly) known is the offset at the central location.

C. Image Analysis Algorithms

The mechanical and camera systems serve only one purpose: to deliver image data to the processing computers. The major precept of the PMM design is to do all image processing and analysis in real time. It was true when the PMM was designed, and is still true, that it takes much longer to read or write an image to storage devices (particularly those for archival storage) than it does to extract the desired information. Indeed, the original PMM design had no mechanism for saving the pixels. A substantial amount of thought and work has gone into the design of the image processing algorithms. This section gives an overview of the code, and the serious reader is encouraged to read the source code (located in exec/ and its subdirectories).

When the MicroVAX notifies the computer that the mechanical motion has been completed, the computer commands one or more exposures to be taken. The code is written to take 1, 2, 3, 4, or 8 exposures depending on the value of GRABNORM.
The routine exec/misc/f_autoexp is used most often because it takes the exposures, evaluates the sky background, and will re-take an exposure with a modified exposure time if certain limits are exceeded. Since the background is variable, this type of autoexposure routine is necessary. Note that it does not vary the setting of the neutral density filter used to illuminate the plate, so it has a limited range over which it can modify the exposure.

Another problem related to taking an exposure is the presence of holes, tears, and the area around the sensitometer spots. Typically, the POSS plate sky background has a diffuse density larger than two, but where the emulsion is absent or hidden from the sky, the density can be very close to zero. These regions cause gross saturation of the CCD camera, and its behavior becomes extremely non-linear, even to the point of having decreasing signal level with increased exposure. To avoid this, the routine exec/misc/f_toasted takes a very short exposure to test for this condition before the normal exposure sequence is started.

Flat field processing is done in the traditional manner, using bias and flat frames taken under controlled circumstances. The CCD cameras are quite linear and uniform, and the flat field processing does little more than take out the non-uniformities in the illuminator. Pixel data are converted from unsigned bytes into floating point numbers during the flat field processing, and all steps in the image analysis and reduction software are applicable to non-photographic data.

The image processing is divided into a hierarchy based on accuracy, and there are three levels. The first, called the blob finder, is charged with finding areas that need further processing, and doing this with a relatively coarse accuracy of +/-1 pixel. The second is invoked to refine this guess to an accuracy of +/-0.2 pixel and to provide improved estimators for the object's size and brightness.
The third step is non-linear least squares processing, which produces the accurate estimators for image position, moments, and other image parameters. Each is discussed in greater detail in the following paragraphs.

a) Blob finding: Many different algorithms have been proposed to find blobs in an image. (I prefer to use "blob" instead of "star" since we do not know in advance what sort of an object we have found.) The PMM algorithm was designed for very high speed. It is based on the concept that finding an image requires neither the spatial resolution nor the intensity resolution required to measure accurate image parameters.

The first step of the blob finder is to block average the input image by a size PARMAGNIFY, which can take on values of 1, 2, or 4; all experience indicates that 4 is acceptable for PMM processing of POSS plates. (The driver for this processing is in exec/pfa124subs/bmark2_N.f where N takes on the values of 1, 2, or 4.) The larger the value of PARMAGNIFY, the faster the blob finder will operate. With PARMAGNIFY determined, the block-averaged TINY image is computed and then subjected to a median filter to produce the SKY image of similar size. Histogram processing of the sky image determines the dispersion of the sky, a scalar that will be applied to the whole image. Then, the SKY image and the sky dispersion are used to generate the DN1P image, an image whose pixel values are 1 if the TINY pixel was greater than or equal to the SKY pixel plus PARSIGMA times the sky dispersion, or 0 if not. If the DN1P pixel is set to 1, the corresponding SKY pixel is set to zero, indicating that it should not be used to compute local sky values. Another picture of reduced size is computed as well: the DN2P pixel is set to 1 if the TINY pixel is greater than or equal to PARSAT, a number that represents the level at which an image is considered to be saturated.
In practice, the number is about 230 instead of the maximum possible value of 255 that comes from the camera A/D converter.

The logic behind TINY, SKY, DN1P, and DN2P is the following. Most computers take many cycles to compute an IF statement, and these tend to negate the look-ahead logic needed to make software execute quickly. By making images whose values are 0 or 1, additions and multiplications can replace many IF statements, and thereby increase the speed of the code. Our experience is that automatic blob finding is very expensive (slow) because of the complexity of the algorithm, and our efforts to run it in parallel mode were unsuccessful. Hence, optimization was needed in this part of the code to keep its bandwidth high.

Given TINY, SKY, DN1P, and DN2P, blob finding can begin. The algorithm is based on the concept that we wish to find isolated, mostly circular objects. The algorithm considers a circular aperture and computes the area and perimeter based on the pixel values in either the DN1P or DN2P image. The area is the number of pixels that meet or exceed the detection criterion inside the aperture, and the perimeter is the number of such pixels that cross from inside the aperture to outside the aperture. A detection is triggered when the area has a non-zero value and the perimeter is zero. This means that a blob has been isolated. Once a blob has been detected, its location and coarse magnitude are tallied and its pixels in DN1P or DN2P are set to zero so that it will not be detected again.

This algorithm can be expedited in a variety of ways. First, the central pixel is tested to see if it is one. If not, the aperture is moved to the next pixel. This test corresponds to the assertion that the night sky is dark, and that a substantial number of pixels will fail the detection threshold test. Next, explicit logic tests for small blobs.
The logic contained in exec/blob/find124_N tests for all radius one and two pixel events, and special cases of 4-pixel events. The routine exec/blob/find3_N tests for all possible 3-pixel events. These cases are worth the effort because the apparent stellar luminosity function tells us that the vast majority of stars in the catalog will be faint (small), and that the processing for small blobs needs to be optimized. The processing is completed by examining the DN1P or DN2P image with progressively larger apertures, until all blobs are found or until an unreasonably large aperture is needed, which is an indicator either that a very bright object is in the field or that there is something wrong with the image. In all cases, blob finding has been completed.

As the blobs are detected, the routine exec/blob/plproc_N attempts to divide each blob into sub-blobs if required. This is not a true deconvolution because we have transmission and not intensity. This routine is intended to separate almost distinct blobs found in the outskirts of other blobs, and does not do a good job splitting close double stars. For the parameters used in UJ1.0, the splitter is far too aggressive and tends to break up well resolved objects into a series of distinct blobs. This is an area for algorithm development before beginning the scans of the deep Survey plates.

Once the list of blobs has been assembled, the TINY, SKY, DN1P, and DN2P images are no longer used. All further processing refers to the full resolution DATA image. In addition, the code shifts from scalar to parallel operation because it can consider each blob as a separate entity. Silicon Graphics implements parallel processing with the DOACROSS compiler directive for the pfa (Power FORTRAN Accelerator) compiler. Its function is to assign the next step of the DO statement to the next available CPU.
This algorithm is quite effective for processing stars because it means that a big, complicated star will occupy one CPU for a while, but the other CPUs can continue processing other stars. Speedups between 3.5 and 3.8 were seen on the 4 CPU 4D/440S computer. b,c) Coarse and fine analysis are carried out sequentially by exec/fsubs/multiproc. The first step is done by exec/fsubs/proccenscan which examines the blob along 8 rays and determines the size and center of the blob. Then, the blob is fit by a circularly symmetric function by the routine exec/fsubs/marg, and then various other image description parameters (moments, gradient, lumpy, etc.) are computed and packed into integers. The function selected was B + A/(EXP(z)+1) where z = c*((x-x0)**2 + (y-y0)**2 - r0**2). (Perhaps this is more familiar when called the Fermi-Dirac distribution function.) Because the PMM uses transmitted light, faint images look something like a Gaussian, but bright images have flat tops because they are saturated. Hence, the desired fitting function needs to transition between these two extremes in a smooth manner. A large number of numerical experiments were made, and they can be summarized by the following points. i) The production PMM code takes the sky value from the median SKY image rather than letting it be a free parameter in the fitting function. The failure mode for many normal and weird objects was found to be an unreasonably large value for the sky and a correspondingly tiny value for the amplitude. Fixing the sky forces the function to fit the image, and this is much more robust than letting the sky be a fit parameter. ii) Allowing the function to have different scale lengths in X and Y was found to be numerically unstable for too many stars. With 6 free parameters in the exponent, chi squared can be minimized by peculiar and bizarre combinations that bear little resemblance to physical objects. iii) Iteration could be terminated after 3 cycles without serious damage.
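For reference, a minimal numerical sketch of the fitting function defined above (Python, for illustration only; the parameter names follow the formula and the example values are arbitrary):

```python
import math

def pmm_profile(r2, B, A, c, r0):
    """The PMM fitting function B + A/(EXP(z)+1) with
    z = c*(r^2 - r0^2), written for a circularly symmetric image:
    r2 is the squared distance from the fitted center (x0, y0),
    and B is the sky value taken from the median SKY image."""
    z = c * (r2 - r0 ** 2)
    return B + A / (math.exp(z) + 1.0)

# Arbitrary illustrative parameters: sky 10, amplitude 100, r0 = 4.
B, A, c, r0 = 10.0, 100.0, 0.5, 4.0
core = pmm_profile(0.0, B, A, c, r0)     # flat, saturated top: ~ B + A
wing = pmm_profile(100.0, B, A, c, r0)   # far outside: back to the sky B
```

The exponential makes the profile roll off smoothly near r = r0, which is what lets a single function cover both the Gaussian-like faint images and the flat-topped saturated ones.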
If the object could be fit by the function, convergence is rapid and the parameter estimators at the end of the 3rd iteration were arbitrarily close to those obtained after many more iterations. If the object could not be fit by the function, the parameters obtained after 3 iterations were just as weird as those obtained with more iterations. iv) The best image analysis debugging tool was to subtract the fit from the DATA image and display the residuals as the PMM is scanning. This allows the human observer to get a good understanding of the types of images that are processed correctly, and where the analysis algorithm fails. This mode of operation is not possible on plate measuring machines that do not fit the pixel data. Therefore, a 5 parameter, circularly symmetric, fixed sky function was fit to all detections, and the position determined by this function is reported as the position of the object. Since most other high speed photographic plate measuring machines compute image moments, the PMM computes these as well. Our experience is that the image moments are less useful for star/galaxy separation than quantities obtained from least squares fitting, and the positions determined from the first image moments are distinctly less accurate than those determined by the fit. In addition, the image gradient, effective size, and a lumpiness parameter are also computed since these may assist in star/galaxy separation. All parameters are packed into 13 integers by the routine exec/fsubs/marg, and that code should be consulted for information concerning the proper decoding of these values.

D. Catalog Products

The distribution of PMM data should begin and end with the distribution of the raw catalog files. Unfortunately, cheap recording media are incompatible with the bulk. So far, over 440 CD-ROMs are needed to store these data, and the scanning is not yet done. Perhaps the digital video disk will make this problem go away.
Until then, the PMM program will attempt to generate useful catalogs that contain subsets of the parent database.

USNO-A: These catalogs are intended to be used for astrometric reference. They contain only the position and brightness of objects, and ignore such useful parameters as proper motion and star/galaxy classification. These are objects that measured well enough on each of two plates to pass the spatial correlation test based on a 2-arcsecond entrance aperture. V1.0 contains RA and Dec, and takes its astrometric calibration from GSC1.1 and its photometric calibration from the Tycho Input Catalog and from USNO CCD photometry. V1.1 is derived from V1.0 by using SLALIB to transform RA/Dec to Galactic L/B. The catalog is arranged in zones of B and is sorted on L. Because of intermediate storage requirements, the lookup tables between V1.1 and the GSC will not be computed. V2.0 is planned for late summer of 1997 after ESA releases the Hipparcos and Tycho catalogs. The astrometric calibration will be made with respect to Tycho, and Tycho will be used to calibrate the bright end of the photometry. Should STScI release GSPC-II (or significant chunks of it), this improved photometric calibration will be included, too.

USNO-B: This catalog will extend USNO-A in several key areas. It will contain star/galaxy separation information and will contain proper motions. Note that these quantities will be computed from J/F plate data, so USNO-B will be incomplete in the north according to the production schedule of POSS-II, and proper motions will be impossible south of -42 due to missing second epoch survey data. Proper motions in the -36 and -42 zones can be computed from the Palomar Whiteoak extension. In addition, the plan is to use spatial coincidence data from the O+J and E+F survey comparisons to supplement the O+E requirement needed by USNO-A. Hence, there should be many more entries, and the limiting magnitude for objects with peculiar colors will be much deeper.
UJ1.3 and beyond: The UJ plates (3-minute IIIa-J on POSS-II field centers) provide a useful set of astrometric standards at intermediate brightnesses. To the extent possible, UJ will be kept current and made available to those who request it.

Pixels: The PMM pixel database is approaching 5 TBytes. Each of the PMM detections contains a pointer back to the frame and position of the pixel that triggered the detection loop. Current USNO policy is to release the pixel database as soon as there is a reasonable way to do so. Users with a particular urgency can contact Dave Monet and make a special request for access, but the logistics of searching and retrieving a specific frame from the archive on 8-mm tape will preclude all but the most important requests.
This is READ.AST, the file with the discussion of the astrometric calibration of USNO-A. Please refer to READ.ME for an introduction to the catalog.

Summary: The astrometric calibration of USNO-A is based on the Space Telescope Science Institute's Guide Star Catalog version 1.1, hereinafter GSC. This is a temporary calibration, and it will be replaced with a calibration to the European Space Agency's Hipparcos and Tycho catalogs as soon as they become available (current estimate is June 1997). We believe that a typical astrometric error is about 0.25 arcseconds, but for stars a few magnitudes brighter than the plate limit and away from the corners, the error may be as small as 0.15 arcseconds. Coordinates are computed in the system of J2000 at the epoch of the survey blue plate. Proper motions were neither computed for nor applied to the coordinates in this catalog. Whenever possible, we have adopted Pat Wallace's SLALIB for computing quantities associated with position and angle. Details about these routines and permission to use them should be obtained from the author.

Source Code:
  binary/acrs     - projection of ACRS to survey plate coordinates
  binary/ppm      - projection of PPM to survey plate coordinates
  binary/gscgen   - projection of GSC to survey plate coordinates
  newbin/tychogen - projection of Tycho Input Catalog to survey plate coordinates
  binary/gsctaff  - Taff-o-grams for various surveys
  binary/autogo   - fit POSS-I O to projected GSC
  binary/autoge   - fit POSS-I E to projected GSC
  binary/autogb   - fit SRC-J to projected GSC
  binary/autogr   - fit ESO-R to projected GSC
  catalog.tar     - electronic version of the various plate logs
  binary/ugapX    - the various routines that make the catalog

Strategy: Using the reference catalog (GSC1.1) and the information contained in the plate log (possi.cat and south.cat in catalog.tar), SLALIB is used to compute the observed place for each catalog star. The PMM coordinates are corrected for the nominal cubic distortion of the Schmidt telescope (using SLALIB's SLA_PCD, etc.) and compared to the projected catalog.
A best fit using up to cubic terms is computed and the residuals are saved. After doing this for a significant number of plates, the residuals are binned according to their location on the plate, and an approximation for the systematic field distortion of the Schmidt telescope is determined. (These are called Taff-o-grams in the code in recognition of Larry Taff's demonstration of their significance.) The fitting procedure is repeated, this time including the systematic field distortion map, and this fit is adopted for the generation of the catalog.

The Individual Plate Solutions: For a particular field, the plate log was consulted to get the various parameters (date, time, emulsion, etc.) for the plate. Unfortunately, there were a substantial number of typographical errors in the original versions of these logs, and every effort has been made to track down these errors and correct them. We believe that the versions contained in this CD-ROM set are more accurate than the ones we started with, and all of the errors that we could fix have been fixed. With the exposure data, SLALIB is used to compute the best estimator of where the stars should be found. In order, we used SLA_MAPQK, SLA_AOPQK, and SLA_DS2TP to go from catalog to apparent to observed to tangent plane coordinates. The PMM produces coordinates for each detection in integer hundredths of a micron on its focal plane. Actually, there is a systematic problem in the introduction of temperature and pressure into the PMM logic, and its version of a micron can be off by as much as one part in 10^5, but they are sufficiently close to microns for this discussion. The coordinates have had the individual platen zero points subtracted, and the nominal center of each plate appears at approximately (170,175) millimeters. SLALIB provides a utility for removing the nominal pin cushion distortion of a Schmidt telescope, and this correction is applied to the raw PMM coordinates.
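The final catalog-to-tangent-plane step (the role played by SLA_DS2TP) is an ordinary gnomonic projection. A sketch in Python, using the standard textbook formula rather than a transcription of the SLALIB source:

```python
import math

def ds2tp(ra, dec, ra0, dec0):
    """Gnomonic (tangent-plane) projection: spherical (ra, dec) ->
    standard coordinates (xi, eta) about the tangent point
    (ra0, dec0).  All angles in radians."""
    dra = ra - ra0
    denom = (math.sin(dec) * math.sin(dec0)
             + math.cos(dec) * math.cos(dec0) * math.cos(dra))
    xi = math.cos(dec) * math.sin(dra) / denom
    eta = (math.sin(dec) * math.cos(dec0)
           - math.cos(dec) * math.sin(dec0) * math.cos(dra)) / denom
    return xi, eta

# A star at the tangent point projects to the origin.
print(ds2tp(1.0, -0.5, 1.0, -0.5))   # (0.0, 0.0)
```

For small offsets, xi reduces to the familiar (ra - ra0)*cos(dec), which is why the tangent-plane coordinates can be compared directly with the (distortion-corrected) PMM measurements.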
With the exception of systematic astrometric errors in the Schmidt telescope, the projected catalog and undistorted PMM coordinates ought to agree with each other. The mapping is done using cubic polynomials in X and Y, although linear terms are sufficient except when doing the full-plate solution. No sub-plate solutions are used: a single fit in X and Y is used to describe the whole plate. These solutions are saved, as are the residuals computed for each match between the PMM and the reference catalog, and this process is repeated for every survey plate. When many solutions are available, the residuals are combined according to the position of the object on the plate by the code in binary/gsctaff. For USNO-A, a mean distortion pattern was computed for each of the three Schmidt telescopes involved. However, it is clear from examination of subsets of the data that there are significant differences in the shapes of the distortion pattern as a function of zenith distance (actually declination, but most survey plates were taken near enough to the meridian). In future releases, we intend to use zonal versions of this correction. The residuals are binned in a 32x32 grid, and a 2-dimensional smoothing spline is used to expand this to a 65x65 grid. This corresponds to boxes about 5 millimeters in size on the plate. With the systematic correction determined, the astrometric solution is repeated using the same catalog projection but adding the systematic correction removal to the pin cushion distortion removal in the pre-processing of PMM coordinates before fitting. Again, a single cubic fit in each coordinate is used to describe the entire plate.

Assembling the Catalog: Two separate astrometric fits go into each field. First, the red plate is mapped on to the blue plate, and then the blue plate is mapped on to the reference catalog.
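The single full-plate cubic solution described above can be sketched as a linear least-squares problem (Python/numpy for illustration; the 10 polynomial terms per axis mirror the "up to cubic" fit, while the actual routines live in binary/autogo and friends; the synthetic linear model below is made up):

```python
import numpy as np

def cubic_terms(x, y):
    """All monomials x**i * y**j with i + j <= 3 (10 terms per axis)."""
    return [x ** i * y ** j for i in range(4) for j in range(4) if i + j <= 3]

def plate_solution(px, py, cx, cy):
    """One full-plate least-squares fit per axis: (pre-corrected) PMM
    coordinates (px, py) -> projected catalog coordinates (cx, cy).
    No sub-plate solutions, matching the USNO-A reductions."""
    A = np.array([cubic_terms(x, y) for x, y in zip(px, py)])
    coefx = np.linalg.lstsq(A, np.asarray(cx), rcond=None)[0]
    coefy = np.linalg.lstsq(A, np.asarray(cy), rcond=None)[0]
    return coefx, coefy

def apply_solution(coef, x, y):
    return float(np.dot(cubic_terms(x, y), coef))

# Synthetic check with normalized coordinates and a made-up linear model.
rng = np.random.default_rng(1)
px, py = rng.uniform(-1, 1, 40), rng.uniform(-1, 1, 40)
cx = 0.9 * px + 0.01 * py + 0.005
cy = -0.02 * px + 1.1 * py - 0.003
coefx, coefy = plate_solution(px, py, cx, cy)
```

Because the linear model lies inside the span of the cubic terms, the fit recovers it essentially exactly; the residuals from real plates are what feed the Taff-o-gram distortion maps.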
The code is complicated only because of the large number of detections in each field, and the importance of applying each fit in the proper order. This process is done in binary/ugap012, and extra software is inserted to verify that each step worked properly. The output of ugap012 is a set of rings on the sky that follow from the surveys being taken in rings of declination. Because of the relatively slow response of our CD-ROM jukebox that stores the raw catalogs, it takes about a week to do this phase of the preparation of USNO-A. The rings of various declinations are merged into zones of constant width by the code in binary/ugap3. The zones are examined for duplicate detections by the code in binary/ugap4. This program makes a list of all entries to be removed (the TAGs) and saves multiple observations of the same object in the sameXXXX.dat file for the photometric calibration. The important routine in ugap4 is nodup.f which finds the multiple detections. For USNO-A, the radius was taken to be 1 arcsecond. In the polar regions, the xynodup.f routine is used and the double detections are removed in coordinates on the tangent plane, and a radius of 15 microns was used. Finally, the code in binary/ugap5 removes the TAGged entries and produces the final catalog. This catalog incorporates the astrometric calibration, but not the photometric calibration. Routines to check each step appear in binary/ugap3x, binary/ugap4x, and binary/ugap5x. A powerful debugging tool is plotting the entire sky because the eye is very sensitive to systematic errors at plate boundaries, etc. Finally, the code in binary/ugap7 applies the photometric calibration, and the code in binary/ugap8 projects the catalog in Galactic coordinates. The partition of the catalog files on the various CD-ROMs is done in binary/ugap6.
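A minimal sketch of the duplicate test performed by nodup.f (Python for illustration; the real routine works on zone-sorted records, while this brute-force version just shows the 1-arcsecond criterion, and the cos(Dec) compression of RA is deliberately ignored for brevity):

```python
def remove_duplicates(detections, radius_arcsec=1.0):
    """Tag entries lying within radius_arcsec of an already-kept entry.
    detections: list of (ra, dec) in degrees.  Returns the indices of
    kept entries and the indices of TAGged duplicates."""
    kept = []          # (ra, dec, index) of retained entries
    tags = []          # indices of duplicates to remove
    r2 = (radius_arcsec / 3600.0) ** 2
    for i, (ra, dec) in enumerate(detections):
        if any((ra - r) ** 2 + (dec - d) ** 2 <= r2 for r, d, _ in kept):
            tags.append(i)
        else:
            kept.append((ra, dec, i))
    return [i for _, _, i in kept], tags

dets = [(10.0, 20.0), (10.00001, 20.00001), (10.01, 20.0)]
print(remove_duplicates(dets))   # ([0, 2], [1])
```

The second entry sits ~0.05 arcseconds from the first and is tagged; the third is 36 arcseconds away and survives. Near the poles the same idea is applied in tangent-plane coordinates (xynodup.f) with a 15 micron radius.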
This is READ.PHT, the file with a discussion of the photometric calibration of USNO-A. Please refer to READ.ME for an introduction to the catalog.

Summary: The photometric calibration of USNO-A1.0 is about as poor as one can have and still claim that the magnitudes mean something. The calibration process is dominated by the lack of public domain photometric databases. In particular, this calibration was done without the final Hipparcos and Tycho catalogs, and without the Guide Star Photometric Catalog II. We have done the best job we could with the available data, and will recalibrate the catalog when significant databases become available. We believe that the internal magnitude estimators for stars are probably accurate to something like 0.15 magnitudes over the range of 12th to 19th, but that the systematic error arising from the plate-to-plate differences is at least 0.25 magnitudes in the North and perhaps as large as 0.5 magnitudes in the South. Users who are able to locally recalibrate USNO-A photometry are encouraged to do so since that will remove the systematic errors and leave only the measuring error.

Source Code: Useful places to look for pieces of the calibration are the following:
  newbin/piphot    - generation of the USNO CCD parallax program magnitudes
  newbin/reversion - mapping the parallax program to individual plates
  newbin/bc1       - mostly obsolete with the exception of generating a couple of input files for bc2
  newbin/bc2       - calibration of the northern sky
  newbin/bc3       - calibration of the southern sky
  binary/ugap4     - find multiple detections of the same object
  binary/ugap7     - apply the calibration to the raw catalog

Strategy: The calibration of USNO-A is divided into the calibration of the northern sky and then the calibration of the southern sky. In each case, the first step was to compute the plate-to-plate offsets and convert the magnitudes from a specific plate into a system that was valid for all plates (called the meta-magnitude system).
The second step was to compute the transformation from the meta-magnitude system to pseudo-photographic magnitudes computed from CCD photometry and the Tycho Input Catalog.

The Northern Calibration: Removal of the plate-to-plate differences begins with examination of the list of all objects found by the code in ugap4 to be multiple detections of the same object. For details, refer to the code, but it is sufficient to summarize this process as finding all objects that fall within a 1-arcsecond radius of another object. All objects inside this radius were considered to be the same object, and the code in ugap4 selects one for the catalog and saves all objects in the SAMExxxx.dat file. Code in bc1 looks at the SAMExxxx.dat files and computes the list of plates that overlap other plates and makes intermediate files of all stars that overlap a specific plate. Code in bc2 (parfit.f) then iterates a solution that starts at a zero offset (constant) or a zero offset and unit slope (linear) for each plate and computes the best fit for that plate to all of its neighbors. At the end of each iteration, all solutions are updated before the start of the next iteration. Typically, the solution is very close to the final value after about 5 iterations, but it was allowed to run for 17 iterations so that a stable solution was found for all plates. The original plan was to allow a linear solution for each plate, but after the difficulties encountered in the Southern solution, the solution was done allowing only a constant term. Visual examination of the calibration showed that both were essentially similar, so the constant one was selected. The plate-to-plate solutions are found in bc2/calcoef.XX files, where XX is the iteration number. Removal of the plate-to-plate offset before application of a transformation between internal and external magnitude systems was far more stable than doing the solution after such a transformation.
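The iteration scheme in parfit.f can be sketched as follows (Python, a simplified constant-offset version; the star/plate identifiers are illustrative, and the zero-mean gauge choice is an assumption made here only to pin the solution, since in the real pipeline the overall zero point is set later by external calibrators):

```python
from collections import defaultdict

def plate_offsets(obs, n_iter=17):
    """Iteratively solve per-plate constant magnitude offsets from
    overlap stars.  obs: list of (star_id, plate_id, mag).  All
    offsets are updated between iterations, as in parfit.f."""
    plates = sorted({p for _, p, _ in obs})
    off = {p: 0.0 for p in plates}
    for _ in range(n_iter):
        # meta-magnitude of each star: mean of its offset-corrected mags
        per_star = defaultdict(list)
        for s, p, m in obs:
            per_star[s].append(m - off[p])
        meta = {s: sum(v) / len(v) for s, v in per_star.items()}
        # refit each plate's offset against the current meta-magnitudes
        resid = defaultdict(list)
        for s, p, m in obs:
            resid[p].append(m - meta[s])
        off = {p: sum(v) / len(v) for p, v in resid.items()}
        gauge = sum(off.values()) / len(off)     # remove the free zero point
        off = {p: o - gauge for p, o in off.items()}
    return off

# Two plates sharing two stars; plate "B" reads 0.3 mag fainter than "A".
obs = [("s1", "A", 12.0), ("s1", "B", 12.3),
       ("s2", "A", 15.0), ("s2", "B", 15.3)]
print(plate_offsets(obs))
```

On this toy input the offsets converge to about -0.15 for "A" and +0.15 for "B" within a couple of iterations, mirroring the observation that the real solution is close to final after about 5 iterations.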
The internal magnitude systems for each plate are surprisingly similar. Because of the lack of a suitable calibration database, we decided to use the B and V magnitudes from the Tycho Input Catalog to calibrate the bright end, and to use the V and I CCD photometry done at USNO on parallax fields for the faint end. Henden supplied tables for computing the color corrections which he derived from numerical integrations of spectrophotometric data and filter response curves. For Tycho data, only stars with B and V were accepted, and the Henden relationships were used to compute O(B,V), E(B,V), J(B,V), and F(B,V). Examination of the residuals to the photometric solution indicates that there are significant color terms remaining: the O/J solutions show less dispersion than the E/F solutions. To mitigate this problem, Tycho stars with B-V less than 0.5 or greater than 1.2 were ignored in the final solution. The USNO photometric database was complete for V and I, but many stars did not have B data. Because of this, we decided to ignore the B data when available, and to base the calibration on the V and I data alone. Dahn supplied a relationship between V-R and V-I, and a crude calibration of B-V as a function of V-R was used. These and the Henden tables can be found in newbin/piphot in the various .tbl files. Again, this calibration procedure left significant color terms. The E/F calibration shows less dispersion than the O/J calibration. With the ensemble of pseudo-photographic magnitudes for standard stars, the relationship between the meta-magnitude and the standard magnitude system was done by newbin/tcapply. The algorithm attempts to find a ridge line between the two systems, and then to fit a smoothing spline to it. This solution is provided to the user (newbin/bc2/tcnodes.?) who can examine, correct, and extrapolate it as appropriate. These new nodes (newbin/bc2/tcedit.?) are then fit with the smoothing spline and the final lookup tables (newbin/bc2/tclut.?)
are produced. Although the blue and red solutions are done by the same code, they are completely independent of each other. It is possible for the PMM to produce magnitudes that don't make sense. In particular, the total flux can be zero or negative should the estimator of local sky contain some sort of contamination. These fluxes are mapped into 50.0 for the case of zero flux, and 50.1 through 75.0 for the case of negative flux. In the latter case, the flux is negated before taking the logarithm and 50 is added to the result. These magnitudes are ignored during the calibration process and passed directly from the PMM to the final catalog. At best, they serve as flags that something was wrong with a particular image.

The Southern Calibration: The first step of the southern calibration is the same as that for the northern calibration, the removal of the plate-to-plate offsets. This is done in newbin/bc3/parfit and makes files soucoef.XX in a manner very similar to the northern solution. However, the first solution that allows a constant and slope for each plate was seen to grow quadratically for the red (F) solution but not the blue (J) solution. This was traced to a small but significant correlation between limiting magnitude and declination which drove the numerical instability. Solving for only a plate-to-plate offset showed the same instability. Therefore, an extra routine (newbin/bc3/damper.f) was inserted to remove this term after each iteration. The blue solution with and without this term was examined and found to be essentially the same, so we have some confidence that the red solution is reasonable, too. The source of this correlation is unknown, and should disappear with the inclusion of more calibration data. The calibration of the meta-magnitude system in the southern solution was made more difficult because there are no USNO parallax fields south of -20.
Instead, a boundary condition that the southern and northern solutions should agree in the -30 degree zone was used. The list of same stars found by binary/ugap4 was used to identify those objects with northern and southern magnitudes, and the calibrated northern magnitudes were combined with the Tycho Input Catalog pseudo-J and F magnitudes to provide the calibrators for the southern meta-magnitude system. Because of all of the difficulties associated with the apparently incomplete removal of color terms based on broad band photometric indices, the decision was made to ignore differences between J and O, and F and E. This is a crude approximation, but one that was forced by the lack of appropriate calibration databases. As with the north, the calibration of the meta-magnitude system starts with nodes computed from a ridge line, and ends with a lookup table computed from nodes supplied by the user. The code is in newbin/bc3 and is nsapply.f, nsnodes.?, nsedit.?, and nslut.? in a manner similar to the northern solution.

Other Matters: The Schmidt telescope vignetting function was ignored. Indeed, there are three such functions, but the lack of a suitable calibration database makes it almost impossible to solve for these functions from PMM data. The choice of zero vignetting function follows from Henden's analysis of the UJ1.0 data in which he could not independently verify the Palomar Schmidt vignetting function adopted by the Guide Star Catalog. Henden's analysis showed only a marginally significant function, and it was substantially smaller than that developed for the GSC. The northern calibration must be done first because of the reliance of the southern calibration on it. Both are then copied to binary/ugap7 where they are applied to the uncalibrated catalog and same files. Various other programs verified that the calibration was applied properly. The distinction between galaxy magnitudes and stellar magnitudes was ignored.
This followed from the lack of star/galaxy separation information for POSS-I plates. The reductions being developed for USNO-B include star/galaxy separation, but they rely on the improved signal to noise ratio offered by the fine grain emulsions. Future releases of USNO-A will incorporate improved photometric calibration algorithms. The release of the Tycho catalog in 1997 will offer a dramatic improvement in the calibration of the bright end of the catalog as well as the transition from saturation around 12th magnitude. The release of GSPC-II will provide an important calibration database for the intermediate stars, especially in the south, but more work is needed to extend the calibration to 20th magnitude and beyond.
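The flag convention for non-physical fluxes described earlier (zero flux mapped to 50.0, negative flux to 50.1 through 75.0) can be written out as a short sketch (Python; the zero point of 25 is an arbitrary placeholder, not the PMM's actual value):

```python
import math

def flux_to_mag(flux, zero_point=25.0):
    """Magnitude with the PMM flag convention for bad fluxes:
    50.0 flags zero flux; for negative flux, the flux is negated
    before taking the logarithm and 50 is added to the result,
    producing the 50.1-75.0 flag range."""
    if flux == 0:
        return 50.0
    if flux < 0:
        return 50.0 + zero_point - 2.5 * math.log10(-flux)
    return zero_point - 2.5 * math.log10(flux)

print(flux_to_mag(100.0), flux_to_mag(0.0), flux_to_mag(-100.0))
```

Any catalog magnitude of 50.0 or larger is therefore not a magnitude at all, but a flag that the local sky estimator was contaminated for that image.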
The original catalog consisted of 24 files, one per 7.5° strip in declination. Each file was expanded into a directory, named N0000...N8230 and S0000...S8230, i.e. with the same conventions as those used for the GSC Catalog.
In each of these directories, there is one file per 30min (i.e. 7.5° at the Equator) in right ascension; the total number of files is therefore 24×48 = 1152 files, containing from about 20,000 (near the poles) up to 800,000 objects per file. In each of these files, the range of the coordinates is then restricted to 7.5°, i.e. a maximal value of 2,700,000 when the coordinates are expressed in their original units of 10mas. This final grouping allowed each record to be reduced to 7 or 8 bytes (the mean is close to 7).
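A quick arithmetic check of why this restriction matters for the record size (Python; the actual CDS record layout is not documented here, so the 3-byte figure is an inference, not a specification):

```python
# 7.5 degrees expressed in the catalog's units of 10 mas:
units = 7.5 * 3600 * 1000 / 10          # deg -> arcsec -> mas -> 10 mas
print(int(units))                        # 2700000

# Such a coordinate offset fits in a 3-byte unsigned integer
# (2**24 = 16777216), which is what makes a packed record of
# roughly 7 bytes per object plausible.
assert units < 2 ** 24
```

Restricting each coordinate to its 7.5° cell is thus what shrinks a full-sky position, which would otherwise need 4 bytes per axis, down to 3 bytes per axis while preserving direct access.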
The resulting catalog occupies only 3.6Gbytes, including all transformation and query software; the full 526×10^6 objects are tested in about 45 minutes (i.e. 5µs per object) on a Sparc-20 (72MHz).
A few benchmarks made on a Sparc-20 (72MHz) give the following average elapsed times for a search by position on the catalog, keeping the 10 closest stars (actually performed on the USNO-A1.0, which was converted in April 1997 with almost identical software):
=========================================================================
 Search        Tested stars     Time required      Reading time
 Radius (')    per target       (s)                for 1 star (microsec)
-------------------------------------------------------------------------
    2.5            14153             0.09               6.4
   10.0            66394             0.24               3.6
   30.0           201351             0.67               3.3
=========================================================================
A client/server access to the PMM USNO-A2.0 Catalog – as well as to other catalogues – is also available via the findpmm2 program which is part of the cdsclient package.
François Ochsenbein, <&CDS.home>
<&Viz.tailmenu /home/cds/httpd/Pages/VizieR/pmm/usno1.htx "index">