Random Events Determine Your Life

Thinking about how so many of life’s major events occur for totally random reasons. Some examples:

~ On my second visit to the university where I had been accepted I had an uncomfortable experience due largely to the partying behavior of the guy who was hosting me. Turned me off totally to that school, so I applied to Haverford at my mom’s urging.

~ I met the first woman I truly loved because I played pool one night during my first week at Haverford. I likely became a psych major (and all that followed from that) because of her.

~ I met my first wife when we both rented rooms in the same house during the summer before I headed to grad school. 

~ My PhD mentor Stefan Soltysik visited Art Arnold’s comparative psych seminar for one session; I might otherwise have not ever met him. When my relationships with two other profs turned sour (I’ll maintain that this was not my fault; people who know the two I’m talking about would likely agree) Soltysik was a clear choice. My relationship with him led to my ties to the Nencki Institute in Poland, and the two wonderful sabbaticals I spent there.

~ My first tenure-track job at IPFW was largely due to chance: I applied to about 40 academic positions, and IPFW was the only school to interview me and (foolishly) offer me a job.

~ If not for being at IPFW, I wouldn’t have met the woman who would become my second wife after my first marriage failed.

~ At an early meeting of the American Psychological Society in 1991 I heard Julio Ramirez talk about his approach to running his undergraduate neuroscience lab. This changed the way I ran my lab (I became much freer in giving students latitude), and it led me to attend his session at the following Society for Neuroscience meeting at which Faculty for Undergraduate Neuroscience was born. If I hadn’t heard his talk at APS, FUN might instead be called LURN (ugh… League for Undergraduate Resources in Neuroscience), and my friendships with many wonderful FUN colleagues would not have been cemented.

~ I happened to meet a prof from Albion College when I gave a talk at a teaching of psychology meeting.  This led to him inviting me to give the keynote talk at their undergrad psychology research conference, where, over lunch, I learned that three of their senior psych profs would be retiring, including the neuro guy. If not for that, I would not have ended up at Albion.

~ COVID (totally unanticipated by all) convinced me to retire a year earlier than I had planned, and shut down the venues (live music and rodeos) that I had been photographing. Perhaps as a result of that, my lifelong interest in astronomy led me to begin my journey into astrophotography.

~ The first major purchase a serious astrophotographer makes is a good mount. I decided on the one I wanted, and happened to see a post in a FB astrophotography page by another person considering the same mount. My reply to that post has led to one of my closest friendships, which would not have occurred had the timing of our purchases differed by a few days, or had either of us decided on a different mount.

~ Being berated by a landowner for setting up my telescope on her property (miles from anyone or anything) led to my wonderful relationship with Frontière Farm House. Their decision to sell their farm led me to find my new property (Terra Nova), where I will do my astrophotography and eventually build a retirement home.

Life’s events are largely driven by random, unexpected, and unpredictable forces. If free will exists (and the neuroscience jury is still out on this question), it works within the confines of decisions made for you by these forces. My life would have been totally different if not for chance.

Success Story: Getting a Photo of the James Webb Space Telescope from Half a Million Miles Away

James Webb Space Telescope (NASA)

The James Webb Space Telescope (JWST) is the most powerful space telescope yet developed. Its primary mirror is 6.5 m in diameter, giving it about 6 times the light-gathering ability of Hubble. It is expected to see light that originated roughly 100 million years after the Big Bang, when the first galaxies were forming. Earth’s atmosphere would prevent JWST from doing this, so it will operate in the vacuum of space, at a point about 1 million miles from Earth. It launched on December 25, 2021.

On January 3, 2022, when JWST was passing through a point halfway to its destination, I had an evening of clear skies and decided to try something daunting: to capture a photo of JWST from its home planet.  I’m an amateur astrophotographer; although I have been keenly interested in astronomy and cosmology for most of my life, I got serious about shooting photographs of Things in Space™ a little more than a year ago. As a life-long photographer I had lots of experience and high-quality photographic equipment—I thought astrophotography would be simple.  Wrong.  It is an entirely different game compared to regular photography.  All of the equipment I use for my astrophotography was newly purchased in 2021. My equipment is described in some detail in a note at the bottom.

Several challenges had to be met in reaching my goal of capturing an image:

  1. Finding JWST. The space telescope does not have its own source of light; it doesn’t glow. It reflects light from the Sun, and at its current distance it is reflecting very little. It is a dim object, way below the power of the unaided eye to detect. You cannot simply look up and see it; you must know where in the sky to aim a telescope that can collect enough of the dim light it is reflecting to make it visible. Fortunately I found a list of the hourly location of JWST (on a British Astronomical Association site) that allowed me to point my telescope in its direction.
  2. Collecting enough imaging data. JWST is a dim object that will appear among much brighter stars. I shot 60-sec exposures in the hope that this would allow me to collect enough of its reflected light to make the dim object visible. The Earth is rotating, so a perfectly stable camera aimed at the heavens will produce an image in which the stars blur as they sweep across the sky. With the telescope that I used, any exposure longer than about 1 sec would result in obvious “star trailing” (a rough calculation of this limit appears in the sketch after this list). Astrophotography therefore requires a camera mount that rotates in a manner that compensates for the Earth’s rotation. My mount has allowed exposures as long as 2 min when I have it properly aligned with the Earth’s axis; additional equipment can monitor slight movement of stars relative to the field of view and make ongoing adjustments, allowing exposures as long as 6 – 10 min (I do not yet have this capability).
  3. Differentiating JWST from the stars. Even if I use an exposure that makes the dim JWST visible, it will look exactly like a very dim star.  It will be recognizable as a craft hurtling through space only because its location will change relative to the constant background stars.  To see this change, multiple sequential exposures are needed. I shot 180 60-sec images, hoping that this would be sufficient to allow the movement of a very dim dot to become apparent as a streak among the stars.
  4. Finding that streak. As I said, JWST is very dim, and the stars are bright. The 180 exposures were “stacked” by a program called Siril, which analyses each image, discards images in which star trailing is apparent, aligns the stars across the images, and adds the images together. In this stacked image, even very dim stars can become easily visible (almost as if a single 180-min exposure had been taken). Of course, because JWST is in a different place in each image relative to the stars, the streak created by its apparent motion consists of 180 very dim dots in a line. Fortunately my friend Barbara Bunker has lots of experience with visual astronomy (in addition to doing astrophotography) so spotting a dim streak among the stars was simple for her (I am certain that I would not have found the streak).
  5. Ensuring that the streak is in fact JWST. Many things can cause a streak in a photograph of stars. Ruling out photographic issues, objects such as asteroids, satellites, aircraft, etc. can all lead to streaks. Fortunately aircraft have characteristic lights that make them easy to recognize in a photograph. Satellites tend to move rapidly enough as they orbit to create longer streaks than a small moving object very far from Earth. To confirm that our streak (switching to plural pronouns to recognize Barbara’s contribution; my story would have ended in frustration when I failed to find the streak on my own) was none of these other things, we needed some indication that it was where JWST was supposed to be.

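To put rough numbers on the star-trailing limit mentioned in item 2, here is a minimal sketch of the calculation. The focal length and pixel size below are assumptions chosen for illustration (plausible values for a small 72 mm refractor and a camera like the one described in the equipment note), not figures taken from my own notes.

```python
import math

def pixel_scale_arcsec(pixel_size_um: float, focal_length_mm: float) -> float:
    """Angular size of one camera pixel on the sky, in arcseconds."""
    return 206.265 * pixel_size_um / focal_length_mm

def drift_time_s(pixel_scale: float, declination_deg: float, drift_px: float = 1.0) -> float:
    """Seconds for a star to drift `drift_px` pixels on an untracked camera.
    Stars move about 15 arcsec per second of time (360 deg / 24 h),
    scaled by cos(declination); declination 0 is the worst case."""
    drift_rate = 15.0 * math.cos(math.radians(declination_deg))  # arcsec per second
    return drift_px * pixel_scale / drift_rate

# Assumed values, for illustration only: ~430 mm focal length, 3.76 µm pixels.
scale = pixel_scale_arcsec(pixel_size_um=3.76, focal_length_mm=430)
print(f"Pixel scale: {scale:.2f} arcsec/pixel")                      # ~1.8 arcsec/pixel
print(f"1-pixel drift near the celestial equator: {drift_time_s(scale, 0):.2f} s")
print(f"8-pixel (clearly visible) drift: {drift_time_s(scale, 0, 8):.2f} s")  # ~1 s
```

With these assumed numbers a star crosses one pixel in roughly a tenth of a second, so a clearly visible trail of several pixels builds up within about a second of untracked exposure, consistent with the 1-sec figure above.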
All of these challenges were overcome. That British Astronomical Association site provided enough information to allow me to point my telescope at the right spot. My mount and computer-controlled camera allowed the requisite number of long-exposure images to be taken. Here’s the resulting image. See the streak?

My colleague Barbara was, remarkably, able to detect the streak within seconds of seeing this photograph (I’m still amazed). Here are images with the streak pointed out, and a cropped image that might make it more apparent.

To be sure that this was indeed created by a moving object, I made an animation using all of the images (think flip-book) in which a very small, barely apparent dot can be seen moving through the length of what would become the streak in the stacked images.  It begins near the tip of the lower arrow, and is just above the tip of the upper arrow at the end of the brief video. You might have to watch it repeatedly to convince yourself that you see it.

To determine that our streak was where JWST is supposed to be required some additional data. I knew that I had pointed the scope in approximately the right direction, but did our streak line up with the path of JWST? Another amateur astrophotographer, Blake Estes, had captured a photo of JWST on December 30, 4 days earlier. While I can’t confirm that his photo is in fact JWST, it was widely circulated online, and it appears to be accepted as accurate.

I used Stellarium, a widely touted astronomical program, to create an image of the heavens that included the region that Estes shot, and the region that I thought I had shot. I superimposed Estes’ image on the star chart created by Stellarium by aligning his stars with those shown on the chart. I then did the same with my image; this was more of a challenge because there is no “up” in space and my image was not oriented the same way as the Stellarium chart. However, with some rotation and zooming of my image, I was able to align and superimpose it.

Note that this uncertainty about the orientation of the image worked to our advantage in identifying our streak as JWST without bias. Had we known the orientation of the image prior to searching for the streak, and thus known the direction that JWST would be travelling, we (i.e., Barbara) might have been biased to look only for streaks that matched this expectation. Instead, our search for the streak was “blind,” such that our expectations couldn’t bias our finding.

Next, I drew a line through the “confirmed” JWST streak on the Estes photo, and did the same with the streak in our photo. They indeed appeared to align with each other, as you would expect if they were both created by an object travelling along a smooth trajectory. When a line was extended from the Estes streak through our image, it indeed came very close to our streak. Q.E.D.

The Team (Composite photo. Barbara is in Colorado, I’m in Michigan. We’ve never met IRL.)


NOTE about equipment:

Camera: I shot my images with a cooled astrophotography camera, the ZWO ASI533 MC Pro. Digital camera sensors heat up as an exposure is taken; a long exposure can create a lot of heat, which in turn generates what is called “thermal noise” in the image—that is, the heat results in a grainy, static-filled image, obscuring fine detail. Cameras designed for the long exposures typical in astrophotography have a built-in sensor cooler to combat this noise; my camera can be cooled to as low as -15° C.

Telescope: I use an Apertura 72mm FPL-53 Doublet APO Refractor. The 72mm refers to its aperture, the diameter of the lens that determines its light-gathering capacity. Bigger is always better, but bigger comes at additional cost. This is a relatively small aperture for a refractor. That term, “refractor,” means that a glass lens is used to gather and focus the light, as opposed to a curved mirror (as used in a reflector telescope). Both refractors and reflectors have their advantages. Reflectors (mirror) can be made much larger than refractors (glass lens) at less cost; large reflectors are favored by astronomers who observe visually and refer to their giant scopes as “light buckets” because of their ability to collect so many photons. That size can be a problem for the long exposures needed in astrophotography, as even a slight gust of wind can shake the gigantic light bucket, ruining the exposure. Reflectors also avoid the problem of chromatic aberration: different wavelengths of light are bent at slightly different angles by glass lenses, resulting in slightly different focus points for red and blue light, and apparent color fringing around bright objects. Manufacturers of lenses can use different types of glass (the “FPL-53” refers to the glass used) and different configurations of lenses (e.g., “Doublet”) to try to overcome this aberration. Mirrors don’t suffer from this problem. However, reflectors require frequent (every observing session, some would say) adjustment or “collimation” to ensure that the optics are aligned. The smaller target that they present to the wind, and the lack of constant adjustment, make refractors the favored type of telescope for astrophotographers.

Mount: I mount the telescope and camera on a Sky-Watcher HEQ5 Pro mount. This precision instrument can rotate in a manner that exactly compensates for the rotation of the Earth, allowing the view through the telescope to remain constant over time, with no star trails. Such precise compensation requires that the axis of rotation of the mount be aligned with the axis of rotation of the Earth. The first 10 minutes or so of any astrophotography outing are devoted to ensuring that this alignment is precise. If the axis of the mount points at Polaris (the North Star) it will be close, but not precise enough; Polaris actually orbits the celestial North Pole (the point directly above the Earth’s axis), about 0.7° away from it (for comparison, the full Moon has a diameter of about 0.5°), so alignment requires pointing the mount’s axis at the spot in the sky where the actual celestial pole sits relative to Polaris. A polar clock app shows exactly where Polaris is relative to the celestial pole at any time, and is used to accomplish this. My mount is also computer controlled, with the capacity to aim at any particular place in the heavens once it has been properly aligned with the stars. This is a convenience, but not a necessity for astrophotography.
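Out of curiosity, here is a minimal sketch of the kind of calculation a polar clock app performs: from the date, time, and observing longitude it computes the hour angle of Polaris, which tells you where Polaris currently sits on its small circle around the true pole. The sidereal-time formula is a standard low-precision approximation, the Polaris coordinates are rounded, and none of this is taken from any particular app.

```python
from datetime import datetime, timezone

# Approximate J2000 coordinates of Polaris (good enough for this sketch).
POLARIS_RA_DEG = 37.95   # right ascension, ~2h 31.8m
POLARIS_DEC_DEG = 89.26  # declination; ~0.74 deg from the celestial pole

def julian_date(dt: datetime) -> float:
    """Julian date of a UTC datetime (Unix epoch = JD 2440587.5)."""
    return dt.timestamp() / 86400.0 + 2440587.5

def local_sidereal_time_deg(dt: datetime, east_longitude_deg: float) -> float:
    """Low-precision local sidereal time, in degrees (0-360)."""
    d = julian_date(dt) - 2451545.0  # days since J2000.0
    gmst = 280.46061837 + 360.98564736629 * d
    return (gmst + east_longitude_deg) % 360.0

def polaris_hour_angle_hours(dt: datetime, east_longitude_deg: float) -> float:
    """Hour angle of Polaris, 0-24 h. At 0 h Polaris is on the meridian,
    directly above the pole; at 12 h it is directly below it."""
    lst = local_sidereal_time_deg(dt, east_longitude_deg)
    return ((lst - POLARIS_RA_DEG) % 360.0) / 15.0

if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    ha = polaris_hour_angle_hours(now, east_longitude_deg=-84.86)  # mid-Michigan
    print(f"Polaris hour angle: {ha:.2f} h "
          f"(offset from the pole: about {90 - POLARIS_DEC_DEG:.2f} deg)")
```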

Software: I use a program called Siril to align and stack the images. Siril automatically discards images with too much star trailing (15 of my 180 images were discarded), lines up the stars, and stacks the images. The resulting stacked image will be very dark (the heavens are dim), so additional processing is needed to “stretch” the image and reveal its beauty. Siril is capable of much of this additional processing, but I almost always import the final Siril image into RawTherapee to make my final tweaks.

The Value of Averaging Noisy Data

Data are noisy. Measurements will be inaccurate for many reasons. Suppose you are given the task of measuring the height of a giraffe. Here are some scenarios that will demonstrate various sources of inaccuracy in your measurements (to skip the discussion of variability and noise affecting measurements, and to get right to the demonstration of how averaging can attenuate noise, go here.):

  1. The giraffe, sadly, was accidentally exposed to a blast of liquid nitrogen that froze her in place painlessly as she stood fully erect. You have a step-ladder tall enough to reach the giraffe’s head, a tape measure accurate to the nearest millimeter, and an assistant on the ground. You climb to the top of the ladder while your assistant holds the zero end of the tape measure against the ground, place the tape against the giraffe’s head keeping it as straight as possible, and read 5.372 m. 
  2. The giraffe has suffered the same fate as in #1. As you reach the top of the ladder you discover that your tape measure is only 5 m long. You carefully hold your right hand at the 5-m height as you have your assistant release the tape so that you can pull it up to read the amount to be added to 5 m. You determine that the giraffe’s height is 5.42 m.
  3. Poor giraffe. This time you are alone. You place the end of the tape against the ground as you climb the ladder, and hope to keep it there as you climb. You discover that it is only a 5-m tape, so you hold your hand at the 5-m height and pull up the tape to determine the excess. You get a height of 5.316 m.
  4. In this scenario the giraffe is fine, but she is not too happy to have your assistant on the ground beside her and you climbing a ladder by her head. As she swings her head from side to side near you, you try to read the tape when her head swings by, and get a height of 5.5 m.
  5. In this case the zookeeper did not know you were coming, so he allowed the giraffe to have a double espresso just before you arrived. There’s no way you can safely climb a ladder anywhere near her. You have your brave and under-paid assistant enter the giraffe’s enclosure and stand as near to her as he can, and you take a photo of the two of them. You then measure your assistant and determine that he is 1.832 m tall. You use the photo to determine that the giraffe is 2.95 times as tall as your assistant, so you conclude that the giraffe is 5.404 m tall.
  6. And finally, in a case of true forgetfulness, you get to the zoo without the tape measure, and it’s nighttime. You have the keeper stand near the giraffe, but it’s too dark for a photo, so you estimate that the giraffe is 3 times his height. He tells you that he is 6 feet 2 inches tall. You do a quick mental conversion, putting the keeper at 1.9 m. The giraffe is estimated to be 5.7 m tall.

Six different measurements, six different answers. Which one is right? Clearly the first is probably closest to accurate, but was the tape straight? Was its bottom end properly against the ground? Could you really be sure it said 5.372 m and not 5.373 m? Was the tape measure manufactured and calibrated properly? The problems in measurement are clearly amplified in the other scenarios. In #2 and #3, how sure are you that your hand marked the 5-m height accurately? In #3, did you keep the tape end exactly on the ground? In #4 and #5, the movement of the giraffe and/or your assistant will add some error. In #6, the rough estimates as well as the zookeeper’s exaggeration of his height both corrupt your answer.

If you have the opportunity to make a measurement repeatedly using exactly the same procedure, these sources of variability will affect each measurement, but many of them will sometimes cause the measured height to be too large, and other times too small. Your hand will sometimes mark the 5-m position too high, and other times too low. The giraffe occasionally holds her head a bit low; at other times she jumps just as you measure. If a source of variability is equally likely to lead to over- and under-estimates of the correct number, then many repeated measurements, when averaged, will tend to converge on the correct result. Such a source of variability is said to be “random.”

Some other sources of variability are non-random; that is, they tend to lead to errors in the same direction every time. The zookeeper’s vanity will invariably lead to his overstating his height, for example. Averaging multiple measurements will not eliminate this error.

In the case of a signal that varies across space or time, recognizing the signal as distinct from its background can be a problem if there is lots of “noise,” or random variability, in the measurement. A weak radio signal might be hard to understand when embedded in lots of static, say. This is often described as a problem of distinguishing signal from noise. The same problem occurs in determining the brain activity triggered by a specific event (signal) against the background of all the other things the brain is doing at the same time (noise). A neuroscientist finds the “evoked potential,” the electrical signal caused by a particular event, by recording overall activity of the brain when the event occurs multiple times.  The triggered brain activity will occur each time in the same way, embedded in a background of presumably random noise. Averaging the many signals will cause the noise to average out (sometimes it is high, sometimes low) while the evoked potential, the same each time, reveals itself.

We do the same thing in astrophotography. An image of the night sky might contain many very weak details (signal) embedded in a background of noise, often caused by random electrical activity of the digital camera or air currents in the atmosphere deflecting light rays. A single photo will contain weak signal and lots of noise. If the signal is the same across time (and assuming that there is no supernova occurring this is probably the case) and the noise is random, then averaging many photos will allow the noise to cancel out, revealing the signal — in this case the image of the heavens.

FIGURE 1. An arbitrary “signal” that was hidden by random noise in each list of 100 numbers in the spreadsheet array.

To simulate the advantage provided by averaging many images (in astrophotography this is called “stacking” the images) I created a spreadsheet consisting of 100 lists of random numbers between -50 and 50. To each of these lists I added a list of numbers that represented a patterned image (see Figure 1) comprised of numbers between 40 and 60. This yielded 100 lists that ranged from -10 to 110, with a signal of maximum magnitude of 20 units (60-40) embedded in noise of magnitude 100 (50 – (-50)). Each of these 100 lists of numbers appears pretty random — I would argue that it is not possible to recognize the signal in any one of these lists (see Figures 2, 3, & 4).

Figure 2. An example of random noise obscuring the signal. Signal is shown in black, red curve is Noise and Signal combined.

Figure 3. Another example of noise obscuring the signal. Signal, black; Signal + Noise, green.

Figure 4. Signal, black; Signal + Noise, blue.

Each of these 100 lists can be thought of as a single very noisy photograph – in fact a photograph in which the noise is so great that it totally obscures the image.  The spreadsheet allows me to average these 100 lists. If I average 5 of them (Figure 5), the noise is attenuated a little bit, but the signal is still hard to discern. However, if all 100 are averaged, the noise is greatly attenuated and the averaged image very faithfully approximates the underlying signal (Figure 6).

Figure 5. An average of 5 noisy signals (orange) shown against the original signal (black).

Figure 6. All 100 noisy signals averaged (orange), compared to the original signal.
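For anyone who would rather play with this in code than in a spreadsheet, here is a minimal sketch of the same kind of simulation using NumPy. The sine-wave “signal” is my own stand-in for the arbitrary pattern in Figure 1 (an assumption, not the actual spreadsheet values), but the noise range, signal range, and number of lists match the description above.

```python
import numpy as np

rng = np.random.default_rng(0)
N_LISTS, N_POINTS = 100, 100

# A stand-in "signal" ranging between 40 and 60 (the real spreadsheet
# pattern in Figure 1 is arbitrary; this sine wave is just for illustration).
x = np.arange(N_POINTS)
signal = 50 + 10 * np.sin(2 * np.pi * x / N_POINTS * 3)

# 100 lists of uniform noise between -50 and 50, each with the signal added,
# giving values between roughly -10 and 110 as described above.
noise = rng.uniform(-50, 50, size=(N_LISTS, N_POINTS))
noisy_lists = signal + noise

# Averaging attenuates the random noise while leaving the signal intact.
avg_5 = noisy_lists[:5].mean(axis=0)
avg_100 = noisy_lists.mean(axis=0)

def rms_error(estimate):
    """Root-mean-square deviation of an estimate from the true signal."""
    return np.sqrt(np.mean((estimate - signal) ** 2))

print(f"RMS error, single list : {rms_error(noisy_lists[0]):.1f}")
print(f"RMS error, 5 averaged  : {rms_error(avg_5):.1f}")
print(f"RMS error, 100 averaged: {rms_error(avg_100):.1f}")
# Uniform noise on [-50, 50] has a standard deviation of about 29;
# averaging n lists shrinks it by roughly sqrt(n): ~13 for n=5, ~2.9 for n=100.
```

The error of the average shrinks roughly as the square root of the number of lists averaged, which is why 100 lists clean things up so much more dramatically than 5.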

Stacking procedures in astrophotography will average images in this manner, reducing random noise. Astrophotographers reduce random noise created by the atmosphere from their sky photos (“lights”) by averaging many lights – many photos of the same target.

“Darks” are photos shot at the same time and under the same conditions as the many sky photos (same lens, same exposure, same ISO…) but with the lens covered so no light gets in. These darks contain what the camera records as total darkness under the conditions when the lights were taken, plus random and non-random camera noise related to the exposure, to defective pixels, etc.; subtracting the darks from the averaged lights will leave only the signal and some other non-random noise.

The remaining non-random noise can be removed through the use of two other kinds of images. “Bias” frames are photos taken at the fastest possible shutter speed (a light might involve an exposure of several minutes; the bias frames will be exposed at 1/4,000 sec or so), again with no light coming into the lens. The bias frames contain info about noise created at the level of individual electrons reading the various pixels of the camera sensor; you don’t want to interpret this noise as part of the image. Finally, “flats” are images taken with a plain, even, diffuse white light coming through the lens focused as it was for the lights, properly exposed to create a white or light grey image. The flats allow any aberrations caused by dust on the lens or camera sensor, or unevenness in the light distribution caused by the lens (“vignetting”), to be corrected in the image.

So to summarize the astrophotography process: lights are averaged to remove random atmospheric noise, darks are subtracted to remove camera noise related to the exposure, bias frames are subtracted to eliminate electronic noise, and flats correct for the effects of dust and vignetting. A nice discussion of all of this can be found at NightSkyPix.
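Here is a minimal sketch of the calibration arithmetic summarized above, with NumPy arrays standing in for raw frames. Real tools such as Siril do this with far more sophistication (outlier rejection, normalization, dark scaling); the synthetic numbers and the simple median/mean combines below are illustrations, not any particular program’s pipeline.

```python
import numpy as np

def median_combine(frames):
    """Combine a list of 2-D frames into a master frame by per-pixel median."""
    return np.median(np.stack(frames), axis=0)

def calibrate_and_stack(lights, darks, biases, flats):
    """Apply the steps summarized above: subtract bias and dark signal,
    divide by a normalized flat, then average the calibrated lights."""
    master_bias = median_combine(biases)
    master_dark = median_combine(darks) - master_bias      # thermal signal only
    master_flat = median_combine(flats) - master_bias
    master_flat /= master_flat.mean()                      # normalize to ~1.0
    calibrated = [(light - master_bias - master_dark) / master_flat
                  for light in lights]
    return np.mean(np.stack(calibrated), axis=0)           # the stacked image

# Tiny synthetic example (stand-ins for real FITS frames):
rng = np.random.default_rng(1)
shape = (64, 64)
BIAS, DARK_THERMAL, SKY = 10.0, 40.0, 200.0
lights = [SKY + BIAS + DARK_THERMAL + rng.normal(0, 25, shape) for _ in range(20)]
darks = [BIAS + DARK_THERMAL + rng.normal(0, 5, shape) for _ in range(20)]
biases = [BIAS + rng.normal(0, 2, shape) for _ in range(20)]
flats = [1000.0 + BIAS + rng.normal(0, 10, shape) for _ in range(20)]

stacked = calibrate_and_stack(lights, darks, biases, flats)
print(f"Stacked mean: {stacked.mean():.1f} (true sky value was 200)")
```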

If you read this far, I hope you got something out of this. I’m only beginning to learn how to accomplish all of this in astrophotography, but I prepared this discussion to illustrate the benefits. At a minimum, I hope you understand how, in general, averaging data can help to reduce variability, and specifically how stacking images can reduce noise. With regard to behavioral data, the subject of my scientific career, averaging across the people or nonhuman animals being studied is of value only if there is an underlying signal to be revealed. This is not necessarily always the case – see, for example, Murray Sidman’s argument against averaging behavioral data for learning curves.

Feel free to email me at wjwilson@albion.edu with any comments or questions.

Clouds Rolling In

I wanted to photograph some Leonid meteors on November 18, 2020. The weather forecast the night before showed that the sky would be clear at the peak time – around 5:00 AM, when Leo is high in the sky; so much for forecasts.

A Leonid meteor, near the treeline.

I saw one meteor visually, headed “north” from Leo, that was not captured in a photo. I did catch one in a photo before the clouds covered the sky; here’s the photo, and the meteor is visible in the video at about the 7.5-sec mark – above the treeline, just before a faint aircraft flies from right to left across the video.

See the video here.

Deaths v Cases – MI COVID data

Deaths and Cases over time

Is there a relationship between the number of new cases reported and the number of subsequent deaths? It’s a difficult question because of uncertainties in the data (sparse testing early on, COVID deaths likely under-reported, etc.). Here’s an attempt at an analysis. 

First – here’s a graph showing the number of new cases and the number of deaths throughout the bulk of the pandemic. Deaths are scaled to make their change over time more apparent – read the number of deaths from the y-axis on the right side. Note that the two variables do indeed tend to change together. However, the scaled deaths early on are much higher than the number of new cases, and the scaled deaths later are lower than new cases. This suggests what many have suspected – that testing was probably missing many cases early in the pandemic: serious cases, those that were symptomatic and more likely to lead to death, were being recorded, and asymptomatic cases were probably being missed.

Here’s my new look at the data regarding the relationship between cases and deaths. This was sparked by a question from my friend Cliff Harris: “can you find a reliable correlation between cases and deaths? For instance, is there any number of days x where, the # of deaths/(# cases x days previous) is close to constant?” The graphs below address the correlation question directly, and suggest an answer to his question about x.

I split the data somewhat arbitrarily into early and late periods, corresponding approximately to the point where the Cases and Death curves cross in my graph above. I did this under the assumption that a smaller number of tests early in the pandemic might produce different results than are seen with the larger and perhaps more reliable testing done later.

These graphs plot Cases against Deaths, and include information about the regression lines for the early and late data. It is clear that there is a relationship between cases and deaths, and it is also clear that this relationship differs if one compares early data with later data. The largest daily death counts are associated with low numbers of new cases early in the pandemic (blue points), when cases were probably largely undetected; late in the pandemic (red points), when testing was more widespread and more cases were reported, the medical community had learned more about COVID-19 and was better able to prevent death. More interestingly, each graph varies the “lag” between the Cases and Deaths. If Lag=0, the graph represents the relationship between Cases on a particular day and the number of deaths reported that day; Lag=5 shows the relationship between Cases and Deaths that are reported 5 days later, and so on. The Lag=0 and Lag=5 graphs include the recently reported exceedingly high number of cases, tending to increase the linearity of the data; these high numbers do not appear in the Lag=10 (or greater) graphs because we are not yet 10 days out from these high numbers.
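For anyone who wants to try this on their own data, here is a minimal sketch of how a lagged correlation can be computed with pandas. The column names and the synthetic demo data are placeholders (my actual spreadsheet layout differs); the key step is simply shifting the deaths series by the lag before correlating it with the cases series.

```python
import numpy as np
import pandas as pd

def lagged_correlations(df: pd.DataFrame, lags=range(0, 40, 5)) -> pd.Series:
    """Correlation between daily new cases and deaths reported `lag` days later.

    Expects a DataFrame with 'cases' and 'deaths' columns, one row per day.
    shift(-lag) pulls deaths from `lag` days in the future onto the row of
    the cases that (possibly) preceded them."""
    return pd.Series({lag: df["cases"].corr(df["deaths"].shift(-lag))
                      for lag in lags})

# Synthetic demo (NOT the real MI data): deaths are ~1.5% of cases 14 days prior.
rng = np.random.default_rng(0)
days = pd.date_range("2020-03-01", periods=240, freq="D")
cases = 500 + 400 * np.sin(np.arange(240) / 30) + rng.normal(0, 60, 240)
deaths = 0.015 * pd.Series(cases).shift(14).fillna(0) + rng.normal(0, 2, 240)
df = pd.DataFrame({"cases": cases, "deaths": deaths.to_numpy()}, index=days)

# Correlation is strongest near lag = 15, the sampled lag closest to the
# built-in 14-day delay in this synthetic example.
print(lagged_correlations(df))
```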

One thing to note is that the linear relationship between Cases and Deaths breaks down for the Early data starting around Lag=15, perhaps because of limited testing resulting in undercounting of Cases in this early phase. The linear relationship is largely maintained up through Lag=25 for the Late phase.

Here are the correlation coefficients (r) for the various lags, both Early and Late:

Lag (days)   Early r   Late r
0            0.777     0.908
5            0.806     0.897
10           0.710     0.864
15           0.614     0.770
20           0.523     0.814
25           0.458     0.686
30           0.427     0.483
35           0.381     0.359

In the late phase, when testing is more prevalent and the recent outliers are removed, lags of 10 through 20 yield correlation coefficients ranging from r=.77 to r=.86. These values suggest that about 2/3 of the variability in the number of deaths reported on a given day is accounted for by the number of new cases reported 10 – 20 days earlier. This is a strong relationship, but not a perfect relationship: there are other factors that account for 1/3 of the variability. These might well include different time courses of the illness: two diagnoses on Day 1 that result in deaths on Days 15 and 18 would reduce the predictive value of New Cases in predicting deaths exactly 15 days out. Nonetheless, this analysis suggests that the number of deaths will be related to the number of cases reported 10 – 20 days earlier.

So my short and imprecise answer to Cliff’s question: 10 < x < 20.

[Addendum: additional info related to Cliff’s question: During the time when the number of cases was most stable, roughly 8/1 through 9/27, the value # of deaths/(# cases x days previous) is about 0.0147, meaning that whether you choose a lag of 10, 15, or 20 days, some 1.5% of the people diagnosed will die that many days later. Note that this is lower than the state’s reported Case Fatality Rate of 3.2% because of the imprecision inherent in predicting the exact course of the illness.]

(Disclaimer: My case data come from the MI.gov site that reports daily new cases. My death data were extracted from the “Cases and Deaths by County by Date of Onset of Symptoms and Date of Death” spreadsheet that the state makes available for download; this spreadsheet offers deaths by county for each day, but does not offer a statewide death count for each day, so I had to calculate that. I accept all responsibility for any errors in this regard.)

Comet C/2020 M3 (Atlas)

Comet C/2020 M3 (Atlas) is passing near Orion. Don’t hurry out to see it; the view is not as spectacular as Comet NEOWISE earlier this year – in fact you probably won’t see Atlas without binoculars. I shot some photos on 11/11/2020.

Orion with comet indicated by red arrow

Some light pollution from Albion, MI is apparent in the lower left part of the image. This image is cropped slightly from a photo taken at 25 sec, f/2.8, ISO 2500, 64 mm (128 mm full-frame equivalent).

This is a very-zoomed-in gif comprised of three frames: the first from the large image above, taken at ~10:30 PM EST, and two additional images taken at about 10:50 and 11:10. You will be able to see the greenish comet move relative to the background stars over the course of the 40 minutes. Sadly, you’ll also be able to see the effect of dew building up on my lens; I have a lens heater to prevent this, but I left its power source at home. 🙁

Andromeda Galaxy (M31)

I tried capturing the Andromeda Galaxy (M31): 60-sec exposures, f/1.8, ISO 800, 56 mm (112 mm equivalent) Sigma lens on my Olympus OM-D E-M1 mk ii. Six photos are stacked in these images (a total of 6 min of exposure), one uncropped and one cropped. I’m pretty happy with this. I still need to learn to stack a bit better, and there are all sorts of tricks for bringing out detail in the galaxy that I don’t yet know, but for a first effort I’m pleased. You can easily see the companion galaxy M110 above Andromeda, and M32 is apparent, but looks pretty much like a fuzzy star, close below Andromeda.

It’s so frustrating to me to try to see this with my naked eye. With a dark sky, and when I’m dark-adapted, I can make it out with my peripheral vision. Damn cones in my fovea just can’t manage to do it – at best I can convince myself that there’s something there.

One more – 20 stacked images, and cropped a bit more tightly.

Perseids 2020

The Perseid meteor shower happens every year around August 12. I hoped to get some nice meteor shots, but sadly I managed only one.

Copyrighted image – permission required for any re-use.

I also created a gif of the images that were captured in trying to shoot meteors – 145 20-sec exposures, with 1 sec in between. Looking south in Marengo Township, MI, from about 11:20 PM – 12:10 AM August 12-13, 2020. Here’s a low-res version.

Night Moves 2. About 51 minutes looking south from ~23:20 8/12 – 00:11 8/13. In addition to the various aircraft and the one bright meteor over the barn, if you look closely you might see two very faint meteors (or maybe they're satellites?) near the center of the frame.

(Download a high-res gif here.  Might take a while to download.)  

Weird tandem Satellite?

Comet NEOWISE below the Big Dipper. The bright streak in the lower right is the International Space Station passing through the shot.

I shot some photos of Comet NEOWISE around 11:00 PM Eastern Daylight Time on 7/23/2020. The comet is past its peak, and relatively low in the sky, so with even rural Michigan light pollution it was not especially photogenic. I then turned my camera toward Cassiopeia, in the hopes of capturing the Andromeda Galaxy (M31) for the first time. I got some pictures, but with my wide angle lens, and with M31 being fairly low in the sky at that time, the photos were not impressive.

Faint tandem objects and brighter Starlink 1098 about to “enter” Cassiopeia.

In viewing the photos, I noticed something odd. There are very many satellites orbiting Earth, and they often photobomb star images (this is not odd, just annoying). In several shots of Cassiopeia, though, I noticed a pair of very faint objects, moving in tandem from southwest to northeast (from the top of the image toward the bottom; [EDIT: I had originally stated that the movement was from south to north, but upon reflecting on the camera orientation I have corrected that]). They had no flashing lights (satellites typically do not) and they moved at a constant speed over the course of 8 10-sec photographs taken sequentially (with 10-sec dark frames interspersed, and a 0.5 sec delay between each dark frame and the next image); the objects were thus photographed over the course of 164 sec.

Detail from other image – tandem objects are a bit easier to see.

I found them only after examining the photos an hour or so after taking them. Stellarium-Web is a great program for identifying objects in the sky, including an extensive database of satellites; however, these tandem objects do not appear in it. The time of my observation can be pretty accurately determined, though, as Starlink 1098 (one of SpaceX’s many, many internet satellites) passed into Cassiopeia from southwest to northeast at essentially the same time as the two objects from my vantage point (42.261578, -84.862236). Both Starlink 1098 and the objects were in multiple photos, so it’s possible to determine that the tandem objects were moving at approximately the same angular speed as Starlink. See an annotated image here.

All of my photos that include the objects are here in somewhat reduced resolution, and here in the highest resolution that I have. Astrophotography is not my forte; a better photographer might produce better images from what the camera gave me.

The animation below might make the objects easier to see, as our eyes are very good at detecting motion. They enter at the top of the image, about 1/4 of the way across from the left, and proceed straight down. Starlink 1098 enters from the left side, near the top, and crosses diagonally down through the image. An aircraft appears in the top center and proceeds diagonally down to the right. Starlink and the faint tandem objects pass through Cassiopeia (near the center of the image) at about the same time, with Starlink entering just before the objects.

This screenshot from Stellarium-web shows that Starlink was at this position at about 23:07:30 on this date. 

I am really curious now. I’ll keep trying to determine what I saw, but if anyone has an idea please email me at wjwilson@albion.edu to share your thoughts.
