Success Story: Getting a Photo of the James Webb Space Telescope from Half a Million Miles Away

James Webb Space Telescope (NASA)

The James Webb Space Telescope (JWST) is the most powerful telescope yet developed. Its primary mirror is 6.5 m in diameter, with about 6 times the light-gathering ability of Hubble. It is expected to see light that originated 100 million years after the Big Bang, when the first galaxies were forming. Earth’s atmosphere absorbs much of the infrared light JWST is designed to detect, so it will operate in the vacuum of space, at a point 1 million miles from Earth. It launched on December 25, 2021.

On January 3, 2022, when JWST was passing through a point halfway to its destination, I had an evening of clear skies and decided to try something daunting: to capture a photo of JWST from its home planet.  I’m an amateur astrophotographer; although I have been keenly interested in astronomy and cosmology for most of my life, I got serious about shooting photographs of Things in Space™ a little more than a year ago. As a life-long photographer I had lots of experience and high-quality photographic equipment—I thought astrophotography would be simple.  Wrong.  It is an entirely different game compared to regular photography.  All of the equipment I use for my astrophotography was newly purchased in 2021. My equipment is described in some detail in a note at the bottom.

Several challenges had to be met in reaching my goal of capturing an image:

  1. Finding JWST. The space telescope does not have its own source of light; it doesn’t glow. It reflects light from the Sun, and at its current distance it is reflecting very little. It is a dim object, far below the power of the unaided eye to detect. You cannot simply look up and see it; you must know where in the sky to aim a telescope that can collect enough of the dim light it reflects to make it visible. Fortunately, the British Astronomical Association publishes hourly locations of JWST, which allowed me to point my telescope in its direction.
  2. Collecting enough imaging data. JWST is a dim object that will appear among much brighter stars. I shot 60-sec exposures in the hope that this would allow me to collect enough of its reflected light to make the dim object visible. The Earth is rotating, so a perfectly stable camera focused on the heavens will result in an image in which the stars blur as they sweep across the sky. With the telescope that I used any exposure longer than 1 sec would result in obvious “star trailing.” Astrophotography requires a camera mount that rotates in a manner that compensates for the Earth’s rotation. My mount has allowed exposures as long as 2 min when I have it properly aligned with the Earth’s axis; additional equipment can monitor slight movement of stars relative to the field of view and make ongoing adjustments, allowing exposures as long as 6 – 10 min (I do not yet have this capability). 
  3. Differentiating JWST from the stars. Even if I use an exposure that makes the dim JWST visible, it will look exactly like a very dim star.  It will be recognizable as a craft hurtling through space only because its location will change relative to the constant background stars.  To see this change, multiple sequential exposures are needed. I shot 180 60-sec images, hoping that this would be sufficient to allow the movement of a very dim dot to become apparent as a streak among the stars.
  4. Finding that streak. As I said, JWST is very dim, and the stars are bright. The 180 exposures were “stacked” by a program called Siril, which analyses each image, discards images in which star trailing is apparent, aligns the stars across the images, and adds the images together. In this stacked image, even very dim stars can become easily visible (almost as if a single 180-min exposure had been taken). Of course, because JWST is in a different place in each image relative to the stars, the streak created by its apparent motion consists of 180 very dim dots in a line. Fortunately my friend Barbara Bunker has lots of experience with visual astronomy (in addition to doing astrophotography) so spotting a dim streak among the stars was simple for her (I am certain that I would not have found the streak).
  5. Ensuring that the streak is in fact JWST. Many things can cause a streak in a photograph of stars. Setting aside photographic artifacts, objects such as asteroids, satellites, and aircraft can all produce streaks. Fortunately aircraft have characteristic lights that make them easy to recognize in a photograph, and satellites tend to move rapidly enough as they orbit to create longer streaks than a small object moving very far from Earth. To confirm that our streak was JWST (I switch to plural pronouns here to recognize Barbara’s contribution; without her, my story would have ended in frustration when I failed to find the streak), we needed some indication that it was where JWST was supposed to be.
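The tracking requirement in challenge 2 can be sanity-checked with a little arithmetic. Here is a minimal sketch, assuming a focal length of about 430 mm for the 72 mm refractor and 3.76 µm camera pixels; those two numbers are typical values I have assumed, not figures from this post.

```python
# Why untracked exposures longer than about 1 sec show star trails.
# Assumed numbers (not from the article): 430 mm focal length, 3.76 um pixels.

SIDEREAL_RATE = 15.04  # arcsec of apparent sky motion per second, at the celestial equator

def pixel_scale(pixel_um: float, focal_mm: float) -> float:
    """Image scale in arcseconds per pixel."""
    return 206.265 * pixel_um / focal_mm

def trail_pixels(exposure_s: float, pixel_um: float = 3.76, focal_mm: float = 430.0) -> float:
    """How far a star drifts across the sensor, in pixels, during an untracked exposure."""
    return SIDEREAL_RATE * exposure_s / pixel_scale(pixel_um, focal_mm)

print(round(trail_pixels(1), 1))   # ~8 px of drift in just 1 sec: already an obvious trail
print(round(trail_pixels(60)))     # ~500 px in 60 sec without a tracking mount
```

A tracking mount cancels this drift; the residual error in the mount’s polar alignment then becomes what limits exposure length.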

All of these challenges were overcome. The British Astronomical Association site provided enough information to allow me to point my telescope at the right spot. My mount and computer-controlled camera allowed the requisite number of long-exposure images to be taken. Here’s the resulting image. See the streak?

My colleague Barbara was, remarkably, able to detect the streak within seconds of seeing this photograph (I’m still amazed). Here are images with the streak pointed out, and a cropped image that might make it more apparent.

To be sure that this was indeed created by a moving object, I made an animation using all of the images (think flip-book) in which a very small, barely apparent dot can be seen moving through the length of what would become the streak in the stacked images.  It begins near the tip of the lower arrow, and is just above the tip of the upper arrow at the end of the brief video. You might have to watch it repeatedly to convince yourself that you see it.

To determine that our streak was where JWST is supposed to be required some additional data. I knew that I had pointed the scope in approximately the right direction, but did our streak line up with the path of JWST? Another amateur astrophotographer, Blake Estes, had captured a photo of JWST on December 30, 4 days earlier. While I can’t confirm that his photo is in fact JWST, it was widely circulated online and appears to be accepted as genuine.

I used Stellarium, a widely touted astronomical program, to create an image of the heavens that included the region that Estes shot, and the region that I thought I had shot. I superimposed Estes’ image on the star chart created by Stellarium by aligning his stars with those shown on the chart. I then did the same with my image; this was more of a challenge because there is no “up” in space and my image was not oriented the same way as the Stellarium chart. However, with some rotation and zooming of my image, I was able to align and superimpose it.

Note that this uncertainty about the orientation of the image worked to our advantage in identifying our streak as JWST without bias. Had we known the orientation of the image prior to searching for the streak, and thus known the direction that JWST would be travelling, we (i.e., Barbara) might have been biased to look only for streaks that matched this expectation. Instead, our search for the streak was “blind,” such that our expectations couldn’t bias our finding.

Next, I drew a line through the “confirmed” JWST streak on the Estes photo, and did the same with the streak in our photo. They indeed appeared to align with each other, as you would expect if they were both created by an object travelling along a smooth trajectory. When a line was extended from the Estes streak through our image, it indeed came very close to our streak. Q.E.D.


NOTE about equipment:

Camera: I shot my images with a cooled astrophotography camera, the ZWO ASI533MC Pro. Digital camera sensors heat up as an exposure is taken; a long exposure can create a lot of heat, which in turn generates what is called “thermal noise” in the image; that is, the heat results in a grainy, static-like image, obscuring fine detail. Cameras designed for the long exposures typical in astrophotography have a built-in sensor cooler to combat this noise; my camera can be cooled to as low as -15° C.

Telescope: I use an Apertura 72mm FPL-53 Doublet APO Refractor. The 72mm refers to its aperture, the diameter of the lens that determines its light-gathering capacity. Bigger is always better, but bigger comes at additional cost; this is a relatively small aperture for a refractor. The term “refractor” means that a glass lens is used to gather and focus the light, as opposed to a curved mirror (as used in a reflector telescope). Both refractors and reflectors have their advantages. Reflectors (mirror) can be made much larger than refractors (glass lens) at less cost; large reflectors are favored by astronomers who observe visually and refer to their giant scopes as “light buckets” because of their ability to collect so many photons. Their size can be a problem for the long exposures needed in astrophotography, though, as even a slight gust of wind can shake the gigantic light bucket, ruining the exposure. Reflectors also avoid the problem of chromatic aberration: different wavelengths of light are bent at slightly different angles by glass lenses, resulting in slightly different focus points for red and blue light, and apparent color fringing around bright objects. Manufacturers of lenses can use different types of glass (the “FPL-53” refers to the glass used) and different configurations of lenses (e.g., “Doublet”) to try to overcome this aberration; mirrors don’t suffer from this problem. However, reflectors require frequent (every observing session, some would say) adjustment or “collimation” to ensure that the optics are aligned. The smaller target that they present to the wind, and the lack of constant adjustment, make refractors the favored type of telescope for astrophotographers.

Mount: I mount the telescope and camera on a Sky-Watcher HEQ5 Pro mount. This precision instrument can rotate in a manner that exactly compensates for the rotation of the Earth, allowing the view through the telescope to remain constant over time, with no star trails. Such precise compensation requires that the axis of rotation of the mount be aligned with the axis of rotation of the Earth. The first 10 minutes or so of any astrophotography outing are devoted to ensuring that this alignment is precise. If the axis of the mount points at Polaris (the North Star) it will be close, but not precise enough; Polaris actually orbits the celestial North Pole (the point directly above the Earth’s axis), about 0.7° away from it (for comparison, the full Moon has a diameter of about 0.5°), so alignment requires positioning the mount’s axis so that it points to where the actual celestial pole is relative to Polaris. A polar clock app shows exactly where Polaris is relative to the celestial pole at any time, and is used to accomplish this. My mount also is computer-controlled, with the capacity to aim at any particular place in the heavens once it has been properly aligned with the stars. This is a convenience, but not a necessity for astrophotography.

Software: I use a program called Siril to align and stack the images. Siril automatically discards images with too much star trailing (15 of my 180 images were discarded), lines up the stars, and stacks the images. The resulting stacked image will be very dark (the heavens are dim), so additional processing is needed to “stretch” the image and reveal its beauty. Siril is capable of much of this additional processing, but I almost always import the final Siril image into RawTherapee to make my final tweaks.

The Value of Averaging Noisy Data

Data are noisy. Measurements will be inaccurate for many reasons. Suppose you are given the task of measuring the height of a giraffe. Here are some scenarios that demonstrate various sources of inaccuracy in your measurements (to skip the discussion of variability and noise and get right to the demonstration of how averaging can attenuate it, go here):

  1. The giraffe, sadly, was accidentally exposed to a blast of liquid nitrogen that froze her in place painlessly as she stood fully erect. You have a step-ladder tall enough to reach the giraffe’s head, a tape measure accurate to the nearest millimeter, and an assistant on the ground. You climb to the top of the ladder while your assistant holds the zero end of the tape measure against the ground, place the tape against the giraffe’s head keeping it as straight as possible, and read 5.372 m. 
  2. The giraffe has suffered the same fate as in #1. As you reach the top of the ladder you discover that your tape measure is only 5 m long. You carefully hold your right hand at the 5-m height and have your assistant release the tape so that you can pull it up to read the amount to be added to 5 m. You determine that the giraffe’s height is 5.42 m.
  3. Poor giraffe. This time you are alone. You place the end of the tape against the ground as you climb the ladder, and hope to keep it there as you climb. You discover that it is only a 5-m tape, so you hold your hand at the 5-m height and pull up the tape to determine the excess. You get a height of 5.316 m.
  4. In this scenario the giraffe is fine, but she is not too happy to have your assistant on the ground beside her and you climbing a ladder by her head. As she swings her head from side to side near you, you try to read the tape when her head swings by, and get a height of 5.5 m.
  5. In this case the zookeeper did not know you were coming, so he allowed the giraffe to have a double espresso just before you arrived. There’s no way you can safely climb a ladder anywhere near her. You have your brave and under-paid assistant enter the giraffe’s enclosure and stand as near to her as he can, and you take a photo of the two of them. You then measure your assistant and determine that he is 1.832 m tall. You use the photo to determine that the giraffe is 2.95 times as tall as your assistant, so you conclude that the giraffe is 5.404 m tall.
  6. And finally, in a case of true forgetfulness, you get to the zoo without the tape measure, and it’s nighttime. You have the keeper stand near the giraffe, but it’s too dark for a photo, so you estimate that the giraffe is 3 times his height. He tells you that he is 6 feet 2 inches tall. You do a quick mental conversion, putting the keeper at 1.9 m. The giraffe is estimated to be 5.7 m tall.

Six different measurements, six different answers. Which one is right? Clearly the first is probably closest to accurate, but was the tape straight? Was its bottom end properly against the ground? Could you really be sure it said 5.372 m and not 5.373 m? Was the tape measure manufactured and calibrated properly? The problems in measurement are clearly amplified in the other scenarios. In #2 and #3, how sure are you that your hand marked the 5-m height accurately? In #3, did you keep the tape end exactly on the ground? In #4 and #5 the movement of the giraffe and/or your assistant will add some error. In #6, rough estimation as well as the zookeeper’s exaggeration of his height both corrupt your answer.

If you have the opportunity to make a measurement repeatedly using exactly the same procedure, these sources of variability will affect each measurement, but many causes of inaccuracy will sometimes cause the height to be too large, and other times too small. Your hand will sometimes mark the 5-m position too high, and other times too low. The giraffe occasionally holds her head a bit low; at other times she jumps just as you measure. If a source of variability is equally likely to lead to over- and under-estimates of the correct number, then many repeated measurements, when averaged, will tend to converge on the correct result. Such a source of variability is said to be “random.”

Some other sources of variability are non-random; that is, they tend to lead to errors in the same direction every time. The zookeeper’s vanity will invariably lead to his overstating his height, for example. Averaging multiple measurements will not eliminate this error.
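A quick simulation makes the contrast concrete. This is a toy sketch of the giraffe example, with all numbers invented: random error shrinks under averaging, while a systematic bias (the zookeeper’s vanity, here a 5-cm overstatement) survives it.

```python
import random

random.seed(1)
TRUE_HEIGHT = 5.37  # metres; the value we are trying to measure

def measure(bias=0.0, spread=0.10):
    """One measurement: random error is equally likely to be high or low."""
    return TRUE_HEIGHT + bias + random.uniform(-spread, spread)

n = 10_000
random_only = sum(measure() for _ in range(n)) / n
with_bias = sum(measure(bias=0.05) for _ in range(n)) / n  # every reading overstated by 5 cm

print(round(random_only, 2))  # ~5.37: the random error averages out
print(round(with_bias, 2))    # ~5.42: the systematic bias does not
```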

In the case of a signal that varies across space or time, recognizing the signal as distinct from its background can be a problem if there is lots of “noise,” or random variability, in the measurement. A weak radio signal might be hard to understand when embedded in lots of static, say. This is often described as a problem of distinguishing signal from noise. The same problem occurs in determining the brain activity triggered by a specific event (signal) against the background of all the other things the brain is doing at the same time (noise). A neuroscientist finds the “evoked potential,” the electrical signal caused by a particular event, by recording overall activity of the brain when the event occurs multiple times.  The triggered brain activity will occur each time in the same way, embedded in a background of presumably random noise. Averaging the many signals will cause the noise to average out (sometimes it is high, sometimes low) while the evoked potential, the same each time, reveals itself.

We do the same thing in astrophotography. An image of the night sky might contain many very weak details (signal) embedded in a background of noise, often caused by random electrical activity of the digital camera or air currents in the atmosphere deflecting light rays. A single photo will contain weak signal and lots of noise. If the signal is the same across time (and assuming that there is no supernova occurring this is probably the case) and the noise is random, then averaging many photos will allow the noise to cancel out, revealing the signal — in this case the image of the heavens.

FIGURE 1. An arbitrary “signal” that was hidden by random noise in each list of 100 numbers in the spreadsheet array.

To simulate the advantage provided by averaging many images (in astrophotography this is called “stacking” the images) I created a spreadsheet consisting of 100 lists of random numbers between -50 and 50. To each of these lists I added a list of numbers that represented a patterned image (see Figure 1) comprised of numbers between 40 and 60. This yielded 100 lists that ranged from -10 to 110, with a signal of maximum magnitude of 20 units (60-40) embedded in noise of magnitude 100 (50 – (-50)). Each of these 100 lists of numbers appears pretty random — I would argue that it is not possible to recognize the signal in any one of these lists (see Figures 2, 3, & 4).

Figure 2. An example of random noise obscuring the signal. Signal is shown in black, red curve is Noise and Signal combined.

Figure 3. Another example of noise obscuring the signal, Signal, black; Signal + Noise, green.

Figure 4. Signal, black; Signal + Noise, blue.

Each of these 100 lists can be thought of as a single very noisy photograph – in fact a photograph in which the noise is so great that it totally obscures the image.  The spreadsheet allows me to average these 100 lists. If I average 5 of them (Figure 5), the noise is attenuated a little bit, but the signal is still hard to discern. However, if all 100 are averaged, the noise is greatly attenuated and the averaged image very faithfully approximates the underlying signal (Figure 6).

Figure 5. An average of 5 noisy signals (orange) shown against the original signal (black).

Figure 6. All 100 noisy signals averaged (orange), compared to the original signal.
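For readers who prefer code to spreadsheets, the same experiment can be sketched in a few lines of Python. The sine-shaped signal is my own stand-in for the arbitrary pattern in Figure 1; the noise and signal ranges match the description above (noise between -50 and +50, signal between 40 and 60).

```python
import math
import random

random.seed(0)
N = 100  # points per list

# An arbitrary pattern ranging from 40 to 60, standing in for the "signal".
signal = [50 + 10 * math.sin(2 * math.pi * i / N) for i in range(N)]

def noisy_copy():
    """One 'photo': the signal buried in uniform noise between -50 and +50."""
    return [s + random.uniform(-50, 50) for s in signal]

def average(lists):
    """Point-by-point average of several lists (i.e., 'stacking')."""
    return [sum(vals) / len(vals) for vals in zip(*lists)]

def rms_error(estimate):
    """Root-mean-square distance between an estimate and the true signal."""
    return math.sqrt(sum((e - s) ** 2 for e, s in zip(estimate, signal)) / N)

stack5 = average([noisy_copy() for _ in range(5)])
stack100 = average([noisy_copy() for _ in range(100)])

print(round(rms_error(noisy_copy()), 1))  # a single list: error on the order of the noise
print(round(rms_error(stack5), 1))        # averaging 5 attenuates the noise somewhat
print(round(rms_error(stack100), 1))      # averaging 100 nearly recovers the signal
```

Averaging n independent noisy copies shrinks the random error by roughly a factor of √n, which is why 100 frames help so much more than 5.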

Stacking procedures in astrophotography will average images in this manner, reducing random noise. Astrophotographers reduce random noise created by the atmosphere by averaging many “lights” – many photos of the same target.

“Darks” are photos shot at the same time and under the same conditions as the lights (same lens, same exposure, same ISO…) but with the lens covered so no light gets in. These darks contain what the camera records as total darkness under the conditions when the lights were taken, plus random and non-random camera noise related to the exposure, to defective pixels, etc.; subtracting the darks from the averaged lights will leave only the signal and some other non-random noise.

The remaining non-random noise can be removed through the use of two other kinds of images. “Bias” frames are photos taken at the fastest possible shutter speed (a light might involve an exposure of several minutes; the bias frames will be exposed at 1/4,000 sec or so), again with no light coming into the lens. The bias frames contain information about noise created at the level of individual electrons reading out the various pixels of the camera sensor; you don’t want to interpret this noise as part of the image. Finally, “flats” are images taken with a plain, even, diffuse white light coming through the lens focused as it was for the lights, properly exposed to create a white or light grey image. The flats allow any aberrations caused by dust on the lens or camera sensor, or unevenness in the light distribution caused by the lens (“vignetting”), to be corrected in the image.

So to summarize the astrophotography process: lights are averaged to remove random atmospheric noise, darks are subtracted to remove camera noise related to the exposure, bias frames are subtracted to eliminate electronic noise, and flats correct for the effects of dust and vignetting. A nice discussion of all of this can be found at NightSkyPix.
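The calibration arithmetic can be sketched with a short pure-Python example, one row of four pixels standing in for whole frames. All the per-pixel values are invented; real tools like Siril build “master” calibration frames from many shots, but the algebra is essentially this: subtract the dark, then divide by a normalized flat.

```python
# Invented per-pixel values for a 4-pixel "frame".
true_sky = [12.0, 18.5, 15.2, 10.9]   # the signal we want to recover
vignette = [1.00, 0.90, 0.80, 0.70]   # the lens darkens toward one edge
DARK, BIAS = 5.0, 2.0                 # exposure-related noise and read-out offset

# What each frame type records:
light = [s * v + DARK + BIAS for s, v in zip(true_sky, vignette)]  # the actual photo
dark = [DARK + BIAS] * 4                       # same exposure, lens covered
flat = [100.0 * v + BIAS for v in vignette]    # evenly lit, shows only vignetting
bias = [BIAS] * 4                              # fastest possible shutter speed

# Calibration: subtract the dark, divide by the bias-corrected, normalized flat.
flat_minus_bias = [f - b for f, b in zip(flat, bias)]
mean_flat = sum(flat_minus_bias) / len(flat_minus_bias)
flat_norm = [f / mean_flat for f in flat_minus_bias]
calibrated = [(l - d) / fn for l, d, fn in zip(light, dark, flat_norm)]

# The result is the true sky, up to one overall brightness scale factor.
scale = calibrated[0] / true_sky[0]
print(all(abs(c - s * scale) < 1e-9 for c, s in zip(calibrated, true_sky)))  # True
```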

If you read this far, I hope you got something out of this. I’m only beginning to learn how to accomplish all of this in astrophotography, but I prepared this discussion to illustrate the benefits. At a minimum, I hope you understand how, in general, averaging data can help to reduce variability, and specifically how stacking images can reduce noise. With regard to behavioral data, the subject of my scientific career, averaging across the people or nonhuman animals being studied is of value only if there is an underlying signal to be revealed. This is not necessarily always the case – see for example Murray Sidman’s argument against averaging behavioral data for learning curves.

Feel free to email me at wjwilson@albion.edu with any comments or questions.
Clouds Rolling In

I wanted to photograph some Leonid meteors on November 18, 2020. The weather forecast the night before showed that the sky would be clear at the peak time – around 5:00 AM, when Leo is high in the sky; so much for forecasts.

A Leonid meteor, near the treeline.

I saw one meteor visually, headed “north” from Leo, that was not captured in a photo. I did catch one in a photo before the clouds covered the sky; here’s the photo, and the meteor is visible in the video at about the 7.5-sec mark – above the treeline, just before a faint aircraft flies from right to left across the video.

See the video here.

Deaths v Cases – MI COVID data

Deaths and Cases over time

Is there a relationship between the number of new cases reported and the number of subsequent deaths? It’s a difficult question because of uncertainties in the data (sparse testing early on, COVID deaths likely under-reported, etc.). Here’s an attempt at an analysis. 

Here’s a graph showing the number of new cases and the number of deaths throughout the bulk of the pandemic. Deaths are scaled to make their change over time more apparent – read the number of deaths from the y-axis on the right side. First, note that the two variables do indeed tend to change together. However, the scaled deaths early on are much higher than the number of new cases, and the scaled deaths later are lower than new cases. This suggests what many have suspected – that testing was probably missing many cases early in the pandemic: serious cases, those that were symptomatic and more likely to lead to death, were being recorded, and asymptomatic cases were probably being missed.

Here’s my new look at the data regarding the relationship between cases and deaths. This was sparked by a question from my friend Cliff Harris: “can you find a reliable correlation between cases and deaths? For instance, is there any number of days x where, the # of deaths/(# cases x days previous) is close to constant?” The graphs below address the correlation question directly, and suggest an answer to his question about x.

I split the data somewhat arbitrarily into early and late periods, corresponding approximately to the point where the Cases and Death curves cross in my graph above. I did this under the assumption that a smaller number of tests early in the pandemic might produce different results than are seen with the larger and perhaps more reliable testing done later.

These graphs plot Cases against Deaths, and include information about the regression lines for the early and late data. It is clear that there is a relationship between cases and deaths, and it is also clear that this relationship differs if one compares early data with later data. The largest daily death counts are associated with low numbers of new cases early in the pandemic (blue points), when cases were probably largely undetected; late in the pandemic (red points), when testing was more widespread and more cases were reported, the medical community had learned more about COVID-19 and was better able to prevent death. More interestingly, each graph varies the “lag” between the Cases and Deaths. If Lag=0, the graph represents the relationship between Cases on a particular day and the number of deaths reported that day; Lag=5 shows the relationship between Cases and Deaths that are reported 5 days later, and so on. The Lag=0 and Lag=5 graphs include the recently reported exceedingly high number of cases, tending to increase the linearity of the data; these high numbers do not appear in the Lag=10 (or greater) graphs because we are not yet 10 days out from these high numbers. (Click on a graph to enlarge it.)

One thing to note is that the linear relationship between Cases and Deaths breaks down for the Early data starting around Lag=15, perhaps because of limited testing resulting in undercounting of Cases in this early phase. The linear relationship is largely maintained up through Lag=25 for the Late phase.

Here are the correlation coefficients (r) for the various lags, both Early and Late:

Lag    Early r    Late r
 0      0.777      0.908
 5      0.806      0.897
10      0.710      0.864
15      0.614      0.770
20      0.523      0.814
25      0.458      0.686
30      0.427      0.483
35      0.381      0.359

In the late phase, when testing is more prevalent and the recent outliers are removed, lags of 10 through 20 yield correlation coefficients ranging from r=.77 to r=.86. These values suggest that about 2/3 of the variability in the number of deaths reported on a given day is accounted for by the number of new cases reported 10 – 20 days earlier. This is a strong relationship, but not a perfect relationship: there are other factors that account for 1/3 of the variability. These might well include different time courses of the illness: two diagnoses on Day 1 that result in deaths on Days 15 and 18 would reduce the predictive value of New Cases in predicting deaths exactly 15 days out. Nonetheless, this analysis suggests that the number of deaths will be related to the number of cases reported 10 – 20 days earlier.
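The lag analysis can be sketched in code. The daily series below are made up (the real MI.gov data aren’t reproduced here), with a 15-day lag and a 1.5% fatality ratio built in, so the correlation across lags should peak near Lag=15:

```python
import math
import random

random.seed(3)
days = 120
TRUE_LAG, FATALITY = 15, 0.015  # invented: deaths follow cases by 15 days

# Made-up daily new-case counts: a slow wave plus reporting noise.
cases = [200 + 150 * math.sin(2 * math.pi * t / 60) + random.uniform(-20, 20)
         for t in range(days)]

deaths = [0.0] * days
for t in range(days):
    if t + TRUE_LAG < days:
        deaths[t + TRUE_LAG] = FATALITY * cases[t] + random.uniform(-0.5, 0.5)

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def lagged_r(lag):
    """Correlate cases on day t with deaths on day t + lag."""
    return pearson_r(cases[: days - lag], deaths[lag:])

for lag in (0, 5, 10, 15, 20):
    print(lag, round(lagged_r(lag), 3))  # r peaks near the built-in 15-day lag
```

With real data the peak is broader, since individual illnesses run different courses, but the idea is the same: slide one series past the other and see where the correlation is strongest.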

So my short and imprecise answer to Cliff’s question: 10 < x < 20.

[Addendum: additional info related to Cliff’s question: During the time when the number of cases was most stable, roughly 8/1 through 9/27, the value # of deaths/(# cases x days previous) is about 0.0147, meaning that whether you choose a lag of 10, 15, or 20 days, some 1.5% of the people diagnosed will die that many days later. Note that this is lower than the state’s reported Case Fatality Rate of 3.2% because of the imprecision inherent in predicting the exact course of the illness.]

(Disclaimer: My case data come from the MI.gov site that reports daily new cases. My death data were extracted from the Cases and Deaths by County by Date of Onset of Symptoms and Date of Death spreadsheet that the state makes available for download; this spreadsheet offers deaths by county for each day, but does not offer a statewide death count for each day, so I had to calculate that. I accept all responsibility for any errors in this regard.)

Comet C/2020 M3 (Atlas)

Comet C/2020 M3 (Atlas) is passing near Orion. Don’t hurry out to see it; the view is not as spectacular as Comet NEOWise earlier this year – in fact you probably won’t see Atlas without binoculars. I shot some photos on 11/11/2020.

Orion with comet indicated by red arrow

Some light pollution from Albion, MI is apparent in the lower left part of the image. This image is cropped slightly from a photo taken at 25 sec, f/2.8, ISO 2500, 64mm (128 mm ff equivalent).

This is a very-zoomed-in gif comprised of three frames: the first from the large image above, taken at ~10:30 PM EST, and two additional images taken at about 10:50 and 11:10. You will be able to see the greenish comet move relative to the background stars over the course of the 40 minutes. Sadly, you’ll also be able to see the effect of dew building up on my lens; I have a lens heater to prevent this, but I left its power source at home. 🙁

Andromeda Galaxy (M31)

I tried capturing the Andromeda Galaxy (M31): 60-sec exposures, f/1.8, 800 ISO, 56 mm (112 mm equivalent) Sigma lens on my Olympus OM-D E-M1 mk ii. Six photos are stacked in these images (a total of 6 min of exposure), one uncropped and one cropped. I’m pretty happy with this. I still need to learn to stack a bit better, and there are all sorts of tricks for bringing out detail in the galaxy that I don’t yet know, but for a first effort I’m pleased. You can easily see the companion galaxy M110 above Andromeda, and M32 is apparent just below Andromeda, though it looks pretty much like a fuzzy star.

It’s so frustrating to me to try to see this with my naked eye. With a dark sky, and when I’m dark-adapted, I can make it out with my peripheral vision. Damn cones in my fovea just can’t manage to do it – at best I can convince myself that there’s something there.

One more – 20 stacked images, and cropped a bit more tightly.

Perseids 2020

The Perseid meteor shower happens every year around August 12. I hoped to get some nice meteor shots, but sadly I managed only one.

Copyrighted image – permission required for any re-use.

 

I also created a gif of the images that were captured in trying to shoot meteors – 145 20-sec exposures, with 1 sec in between. Looking south in Marengo Township, MI, from about 11:20 PM – 12:10 AM August 12-13, 2020. Here’s a low-res version.

Night Moves 2. About 51 minutes looking south from ~23:20 8/12 – 00:11 8/13. In addition to the various aircraft and the one bright meteor over the barn, if you look closely you might see two very faint meteors (or maybe they're satellites?) near the center of the frame.

Posted by Jeff Wilson on Friday, August 14, 2020

 

(Download a high-res gif here.  Might take a while to download.)  

Weird tandem Satellite?

Comet NEOWISE below the Big Dipper. The bright streak in the lower right is the International Space Station passing through the shot.

I shot some photos of Comet NEOWISE around 11:00 PM Eastern Daylight Time on 7/23/2020. The comet is past its peak, and relatively low in the sky, so with even rural Michigan light pollution it was not especially photogenic. I then turned my camera toward Cassiopeia, in the hopes of capturing the Andromeda Galaxy (M31) for the first time. I got some pictures, but with my wide angle lens, and with M31 being fairly low in the sky at that time, the photos were not impressive.

Faint tandem objects and brighter Starlink 1098 about to “enter” Cassiopeia.

In viewing the photos, though, I noticed something odd. There are very many satellites orbiting Earth, and they often photobomb star images (this is not odd, just annoying). In several shots of Cassiopeia, I spotted a pair of very faint objects, moving in tandem from southwest to northeast (from the top of the image toward the bottom; [EDIT: I had originally stated that the movement was from south to north, but upon reflecting on the camera orientation I have corrected that]). They had no flashing lights (satellites typically do not) and they moved at a constant speed over the course of eight 10-sec photographs taken sequentially (with 10-sec dark frames interspersed, and a 0.5-sec delay between each dark frame and the next image); the objects were thus photographed over the course of 164 sec.
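That 164-sec figure follows directly from the capture settings just described, as a quick check confirms:

```python
# Eight light frames of 10 sec, each followed by a 10-sec dark frame
# and a 0.5-sec delay before the next image.
frames, light_s, dark_s, delay_s = 8, 10, 10, 0.5
total = frames * (light_s + dark_s + delay_s)
print(total)  # 164.0 sec from the start of the first frame to the end of the sequence
```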

Detail from another image – the tandem objects are a bit easier to see.

I found them only after examining the photos an hour or so later. Stellarium-Web is a great program for identifying objects in the sky, with an extensive database of satellites; however, these tandem objects do not appear in it. The time of my observation can be pretty accurately determined, as Starlink 1098 (one of SpaceX’s many, many internet satellites) passed into Cassiopeia from southwest to northeast at essentially the same time as the two objects from my vantage point (42.261578, -84.862236). Both Starlink 1098 and the objects were in multiple photos, so it’s possible to determine that the tandem objects were moving at approximately the same angular speed as Starlink. See an annotated image here.
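The angular-speed comparison can be done from pixel positions in two timestamped frames. Here is a minimal sketch of that calculation; the plate scale, pixel coordinates, and timestamps below are placeholders for illustration, not measurements from my photos:

```python
import math

# Compare the angular speed of two objects tracked across timestamped
# frames. All numbers here are illustrative placeholders.
plate_scale = 55.0  # arcsec per pixel (placeholder for a wide-angle lens)

def angular_speed(p1, p2, t1, t2):
    """Angular speed in arcsec/sec between two (x, y) pixel positions."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return math.hypot(dx, dy) * plate_scale / (t2 - t1)

starlink = angular_speed((120, 80), (900, 610), 0.0, 41.0)
tandem = angular_speed((640, 40), (660, 990), 0.0, 41.0)
print(starlink / tandem)  # a ratio near 1 means similar angular speeds
```

Since both objects appear in the same frames, the timestamps cancel out of the ratio, and even an uncertain plate scale doesn’t matter for the comparison.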

All of my photos that include the objects are here in somewhat reduced resolution, and here in the highest resolution that I have. Astrophotography is not my forte; a better photographer might produce better images from what the camera gave me.

The animation below might make the objects easier to see, as our eyes are very good at detecting motion. They enter at the top of the image, about 1/4 of the way across from the left, and proceed straight down. Starlink 1098 enters from the left side, near the top, and crosses diagonally down through the image. An aircraft appears in the top center and proceeds diagonally down to the right. Starlink and the faint tandem objects pass through Cassiopeia (near the center of the image) at about the same time, with Starlink entering just before the objects.

This screenshot from Stellarium-Web shows that Starlink 1098 was at this position at about 23:07:30 on this date.

I am really curious now. I’ll keep trying to determine what I saw, but if anyone has an idea please email me at wjwilson@albion.edu to share your thoughts.

 

Invertebrate Behavior is Fascinating

I spent the better part of a couple of hours today watching a stinkbug (Halyomorpha halys). It’s clear that its behavior is rigidly programmed, in a manner that must have worked very well in a world without human-designed objects. That approach is failing this particular bug today.

I did not witness the beginning of the bug’s travels, but the present situation is that the bug is crawling back and forth along the top edge of an interior screen on my screened porch. A round trip (a distance of about 1 m) takes around 5 min 45 sec (I measured three round trips and averaged; the longest was about 6 min 10 sec and included an especially long [more than 1 min] turn-around at one end).

The bug crawls to the left until it hits the corner. It then turns downward for about 5 cm. Then it turns around, climbs back to the top and proceeds over to the right side of the screen. Then, once again, it goes down for about 5 cm until it reverses its direction. 

It seems to be executing a very simple program that consists of two components:

  • Crawl.
  • Crawl toward the dark.

Once the bug found itself on the screen, crawling toward the dark would cause the bug to crawl upwards; the major source of light is the great outdoors but it’s overcast today and the porch is shaded by trees. I haven’t measured, but it’s likely that the light energy coming in through the screen is pretty even from side to side and even from top to bottom. However, on the interior side the ceiling is darker than the floor, so from the bug’s perspective upwards is toward a darker region than downwards.

Once the bug hits the top edge, it is well shielded from any light from above. It crawls in either direction. When it hits the corner and then crawls downward, it finds itself entering a lighter area, so it turns around. And so on…
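The two-rule program above can be sketched as a toy simulation. The grid size, the ~5 cm dip at each corner, and the starting point are my assumptions for illustration, not measurements; the point is just that two rules suffice to reproduce the endless loop:

```python
def simulate(width=100, dip=5, steps=240):
    """Toy model of the bug's loop: along the top edge, down `dip`
    units at each corner, back up toward the dark, then across to
    the other side. y = 0 is the top of the screen (darkest)."""
    x, y, dx, descending = width // 2, 0, -1, False
    path = []
    for _ in range(steps):
        path.append((x, y))
        if descending:
            y += 1                    # rule 1: keep crawling (downward)
            if y == dip:              # now clearly in brighter light...
                descending = False    # ...so rule 2 kicks in
        elif y > 0:
            y -= 1                    # climbing back toward the dark top
            if y == 0:
                dx = -dx              # set off along the top, reversed
        else:
            x += dx                   # crawling along the shielded top edge
            if x == 0 or x == width:  # hit a corner: only way on is down
                descending = True
    return path

# Trace a bit more than one full round trip:
loop = simulate(width=100, dip=5, steps=240)
```

Nothing in the program ever terminates the loop; on an irregular natural surface the same two rules would eventually carry the bug somewhere new, but on a perfectly straight, horizontal edge they cycle forever.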

I first noticed the bug around 11:00 AM, and it’s still going now at about 1:40 PM. That means it has made at least 25 round trips, and it had been at this before I noticed. If it persists for another hour or two I’ll rescue it and take it outside. [Edit – 1:55 and it’s gone. 2:00 PM – I see that it has made its way to another screen, about 0.5 m from the top and near an edge. It’s now crawling up along that edge… and it has reached the top and is moving across.]

The program probably served this bug’s ancestors well in the pre-engineered world of irregular surfaces. Simple if-then commands executed by disparate ganglia got those earlier bugs around effectively, and because those ganglia are not unified into a central nervous system there’s no reflection, or in this case, boredom. Execute the program and survival will follow. That is, until perfectly horizontal surfaces and right angles appeared. I don’t think stinkbugs experience existential crises.

 

History of Mother’s (Mothers’) Day

Today, Mother’s Day 2020 (5/10/2020) I posted a cartoon from The New Yorker on Facebook. This prompted a response from a Facebook friend:

It would still be “Mother’s Day”… even with two mothers, it doesn’t make the holiday name change. Just sayin

This response led me to a fascinating dive into the history of the holiday (I am an academic, after all).

Modern US Mother’s Day is often traced to Anna Jarvis’s desire to celebrate her mother, who died in 1905. In 1908 Jarvis held a ceremony honoring her mother and all mothers at Andrews Methodist Episcopal Church in Grafton, WV, the church she attended as a child and in which both she and her mother had been active. She urged people to wear white carnations in honor of their mothers. Jarvis wrote that the idea for the holiday was planted in her mind by her own mother closing a Sunday school lesson in 1876 by saying

I hope and pray that someone, sometime, will found a memorial mothers [sic] day commemorating her for the matchless service she renders to humanity in every field of life. She is entitled to it. (Anatolini, p. 25)

Her mother, Ann Maria Reeves Jarvis, interestingly, was the founder of Mothers’ Day Work Clubs (Anatolini p. 27) – note the placement of the apostrophe. These Clubs worked to enhance health and living conditions among the poor in West Virginia. After the Civil War, in 1868, Ann Maria Reeves Jarvis held a “Mothers Friendship Day” (no apostrophe by many accounts) whose goal was to “bring together families that had been divided by the conflict” (legacyproject.org/guides/mdhistory).

The governor of West Virginia, on April 26, 1910, declared that “Mothers’ Day” should be observed on May 8, 1910, and that “all persons attend church on that day and wear a white carnation.” Disregarding the clear violation of separation of church and state (i.e., a governor mandating church attendance), this might well be the first official declaration of the holiday by a government official, and the holiday was described with the plural possessive.

In 1912 Anna Jarvis copyrighted the phrase “Second Sunday in May, Mother’s Day” and in 1914 President Woodrow Wilson proclaimed Mother’s Day a day to fly the flag “as a public expression of our love and reverence for the mothers of our country.” Jarvis detested the commercialization of the holiday, and spent much of the rest of her life working against it. According to an interview on NPR, in 1943 she was committed to a sanitarium, where she remained until her death, with the sanitarium bills paid by greeting card manufacturers and florists, who were presumably more than happy to see her campaign to end the holiday silenced.


So, thus far we have the person normally credited with the establishment of the holiday, Anna Jarvis, driven by a desire to honor her own mother and by her mother’s wish for such an observance. The result was the first official recognition of the holiday, as Mothers’ Day, in 1910, and President Wilson’s proclamation of Mother’s Day in 1914. There is a deeper story, though.


Julia Ward Howe, an abolitionist who worked for women’s suffrage and who is probably best known for writing the lyrics of The Battle Hymn of the Republic, is often credited with having called for Mothers’ Day with her 1870 Appeal to womanhood throughout the world  (often called The Mother’s Day Proclamation although she did not call it that nor did she refer to “Mother’s”–or “Mothers'”–Day in that piece). She did, however, shortly thereafter call for the organization of a holiday celebrating mothers. She recalls this in her 1899 autobiography as a desire for

… a festival which should be observed as mothers’ day, and which should be dedicated to the advocacy of peace doctrines. (Howe, 1899)

Note that Howe used the plural possessive “mothers’” in describing her holiday. An early celebration was documented by the New York Times, which reported on the first anniversary of one of these observances in 1874; the Times referred to it in the singular: “Mother’s Day.” Howe’s use of the plural possessive occurred in her Reminiscences, written in 1899 – well before Jarvis established the holiday and copyrighted the term “Mother’s Day.” So here is a first acknowledgement of the celebration of a Mother’s Day (the New York Times used the singular despite Howe’s use of the plural) having occurred in 1873.


There are other claimants to the origination of the holiday:

Harriet Stoddard Lee is sometimes credited with establishing the holiday in California in 1903, when she convinced a gathering of the Native Daughters of the Golden West to set aside a day to honor mothers (an idea that she once claimed to have implemented earlier when she was a school teacher). Her involvement in the establishment of the holiday is documented in the US Congressional Record, May 5, 1966, pp 9994 ff.

California lore suggests that the day was officially recognized by Governor Gillette in a proclamation around 1909, but the only documentation I can find for this is an article in a 1909 edition of The California Weekly indicating that Gillette could not officially recognize the holiday (note the plural possessive). Gillette encouraged men to wear a white rose to honor their mothers, similar to Jarvis urging people to wear white carnations. This is sufficient to document that there was discussion of the holiday in California, but it does not link Lee to the effort. Even if she could be definitively linked to the holiday, her professed involvement came well after Julia Ward Howe’s efforts.

Albion, Michigan also claims a role in the establishment of the holiday, arguing that the first known observance of the holiday occurred there. In the 1880s the Albion Methodist Church started celebrating Mother’s Day in honor of all mothers, initiated by a desire to honor Juliet Calhoun Blakeley, who stepped into the pulpit on a Sunday in 1877 to complete a sermon for a distraught minister who could not continue.  The establishment of the holiday is recognized by a state historical marker. Albion historian Frank Passic documents the event in his History of Albion, Michigan, and it is also documented here. This early celebration came shortly after Howe’s earliest documented event.


So what’s the take-home message? Julia Ward Howe might rightly be considered the originator of the holiday. She pushed for a celebration of mothers in the 1870s and organized annual Mothers’ Day events in Boston and New York City. Her idea never gained lasting traction, but as a well-known writer and suffragette she undoubtedly influenced the thinking of others. It is likely (I’ll keep digging for evidence) that both Lee in California and Jarvis in West Virginia had read her Appeal to womanhood throughout the world. This work would also probably have been known to the good people of Albion’s Methodist Church. And because Howe described it in the plural as Mothers’ Day, perhaps we should honor that mother of six by following her practice (and that of The California Weekly in 1909 and of the first official declaration by the West Virginia governor in 1910). Wherever you place the apostrophe, honor your mother(s)!


Anatolini, Katherine Lane (2009). Memorializing Motherhood: Anna Jarvis and the Struggle for Control of Mother’s Day (PhD dissertation). West Virginia University.

 
