Questions that need better answers: the Universe has no center

As part of a new project to rekindle my love of basic physics and to challenge myself, I asked my Twitter followers to send me questions that they felt were never adequately answered, or that were usually met with the same pat explanation that was not intellectually satisfying. I was sent so many questions that I had to close replies to the thread to make answering them feasible. I am gradually working my way through them, and this is the first in a series of posts that will attempt to answer these questions in a more satisfying way. I plan to work through the list for free, and then have it form the basis of a larger, ongoing project that takes on what I see as the limitations of balancing accessible popsci explanations with practical pedagogical examples.


EPISODE ONE: THE UNIVERSE HAS NO CENTER

Replies:

@AnhHLe2702: What does the universe expand into?

@MikeRutland2: The explanation for the expansion of the universe having no origin and inflation being faster than c

Expansion doesn’t have a single origin so much as the whole of space-time expanding together. The confusion is usually introduced by how diagrams are drawn in books, by how we conceptualise surfaces based on our daily experience, and by our own human perception of what ‘expansion’ means.

The most common analogy is the balloon. Draw some dots on a balloon and inflate it: the distance between the dots, measured over the surface of the balloon, increases. This demo is also the origin of the misconception that the expansion of spacetime (the surface of the balloon) has a centre.

When people look at this demo, they see a solid sphere: the balloon’s surface and the air inside it, surrounded by the air outside it. If we don’t think about it too deeply and accept that ‘well, the Universe doesn’t have a centre like the balloon does’, the analogy does OK, but it quickly falls apart under scrutiny.

When I use the analogy, most of the audience happily accepts it, but anyone with a passing knowledge of physics quickly interrogates me for more detail, as they rightfully should.

Technically, when we use the analogy we should only talk about the balloon’s surface as a (limited, 2D) representation of spacetime.

Personally, I always struggled with spheres. All the spheres I have encountered in my life are 3D objects. Mathematically, however, a sphere is not a 3D object: it is a 2D surface embedded in a third dimension. So when we use the balloon (which everyone sees as a 3D object) we are primed to see the third dimension, because having a third dimension is critical for how humans interact with the world. What we should be focussing on is only the surface of the balloon (a 2D object). In this 2D analogy, only the surface of the balloon exists. For anyone struggling with this concept, Flatland is an excellent book for building the intuition.


Explanation

The best way to imagine the expansion of the Universe, at least in my mind, is to imagine an infinite 2D sheet with grid lines drawn on it, intersecting at grid points: an infinite sheet of paper from a math workbook. Remember there is no depth in this representation; you can only exist in the 2D plane defined by the infinite sheet. Asking ‘but what about the third dimension?’ is redundant, because there is none in this case: we have defined our analogy to be 2D only.

If the sheet is infinite, it has no centre.

Expansion is the lengthening of the distance between all the grid points on the sheet (stretched uniformly, at the same rate, in both x and y directions). The size of each square on the sheet then increases (and every square increases at the same rate), thus there is no centre to the expansion.


You can then extend this into a third dimension by turning all your squares into cubes (extending infinitely, up and down), and then the volume of every cube in your 3D grid increases uniformly at the same rate.

In this analogy, you can imagine the rate at which the grid is being stretched varying as a function of time. Right at the very beginning of time, all of the grid already existed (still infinite); it was just that the grid points were infinitesimally close together (so the grid was still infinite, and there was still no centre!).

As for expansion being faster than the speed of light: changing the grid spacing does not actually transmit any information, so you are permitted to increase the spacing as fast as you please. The grid points only act as a guide for us; they have no physical properties. Only objects that move from grid point to grid point are subject to the speed limit.
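
The grid picture can be sketched numerically. Below is a toy numpy sketch (the coordinates and scale factors are illustrative, not physical) showing that when every grid spacing is stretched by the same factor, every observer sees every other point recede with a growth proportional to its distance, so no grid point is special:

```python
import numpy as np

# Comoving grid points on a 1D slice of the sheet (the real sheet is infinite).
x_comoving = np.arange(-5, 6, dtype=float)

def physical_positions(a):
    """Physical position = scale factor * comoving coordinate."""
    return a * x_comoving

# Expand the grid: scale factor grows from 1.0 to 2.0.
before = physical_positions(1.0)
after = physical_positions(2.0)

# Pick three different observers sitting on three different grid points.
for observer in [0, 3, 8]:
    sep_before = np.abs(before - before[observer])
    sep_after = np.abs(after - after[observer])
    growth = sep_after - sep_before
    # The growth in separation is proportional to the initial separation,
    # no matter which grid point the observer sits on: no centre.
    assert np.allclose(growth, sep_before)
```

Every observer measures the same Hubble-like pattern about themselves, which is exactly why the expansion has no centre.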

If anything is still unclear - I encourage feedback! You can contact me via Twitter (@fipanther)

Reproducibility

The reproducibility crisis is seen by some scientists, especially in the physical sciences, as more of a biology and psychology problem: small sample sizes resulting in conclusions that cannot be recovered in larger studies or by repeating the same experiment.

Physics is, at its very heart, reproducible. The laws of physics don’t (as far as we know) change from moment to moment (time-translation symmetry), a concept baked into the standard model of particle physics. As physicists, however, we cannot compute or simulate these things perfectly, and the assumptions and simplifications we make can sometimes cause us problems.

When was the last time you ran a piece of software you wrote twice on the same data, and compared the outputs? Never? I don’t want to alarm you, but it may be something you want to consider.
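
A minimal way to do this check is to run your pipeline twice and compare checksums of the outputs. The sketch below uses a placeholder `run_analysis` function standing in for your own code; only the checksum idea is the point:

```python
import hashlib
import json
import os
import tempfile

def checksum(path, chunk_size=1 << 20):
    """SHA-256 of a file, read in chunks so large outputs are fine."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def run_analysis(outfile):
    """Stand-in for your own pipeline: writes its results to a file."""
    result = {"snr": 12.345, "n_samples": 4096}  # placeholder output
    with open(outfile, "w") as f:
        json.dump(result, f, sort_keys=True)

tmp = tempfile.mkdtemp()
path1, path2 = os.path.join(tmp, "run1.json"), os.path.join(tmp, "run2.json")
run_analysis(path1)
run_analysis(path2)

if checksum(path1) == checksum(path2):
    print("bitwise reproducible")
else:
    print("outputs differ - investigate!")
```

Bitwise equality is the strictest criterion; the sections below discuss when a weaker, tolerance-based comparison is the right one.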

From the perspective of high-performance computing, it can be daunting to check the reproducibility of large simulations or blocks of data analysis. Computing time is expensive in time, money and (on facilities that draw from non-renewable energy sources) environmental impact. However, checking reproducibility is an important step in determining whether our programs are working correctly. Even small variations in programs that should be perfectly reproducible can affect the science.

It’s up to individuals to determine what level of variation is acceptable without affecting the science (e.g. floating-point round-off introduces relative errors of around 1 part in 10^7 at single precision). But how do you go about checking your work is reproducible at large scales?

  1. Establishing a suite of tests that check outputs for consistency

  2. Establishing how to test computer programs is as time-consuming as writing them. Some collaborations enforce external review of all science code and have standardised tests, but if you are not in one of these collaborations, how do you do this? Some suggestions:

    • Establish a test data set. It should ideally be a subset of the data you want to actually run on. You can also feed in data that is ‘ideal’ rather than ‘realistic’ (e.g. if your program is designed to use coloured noise with the odd non-stationary feature, feed in Gaussian white noise). You also want a test data set that touches all areas of the codebase - if you have a lot of conditional code, you need to define a test set for every condition.

      1. What are the major data products going into your program? These should be checked for consistency (i.e. ensure the data looks the same going in. How you do this is up to you)

      2. What are the data products that come out? If it is a science product, how do you determine that the outputs are the same when you do the same thing twice? Ideally, if you can show this as some sort of plot rather than just diff-ing files, it can help diagnose issues

      3. What are the intermediate products? Your program is probably going through several stages - identify tests that can be switched on and off to dump intermediate information and check that it is the same for each run with the same inputs*

  3. Implementing the suite of tests

    • Tests don’t work if you don’t implement them. Actually checking reproducibility should be a fundamental part of the development process: establish a ‘base’ git branch, for example, and compare non-science changes to your code (such as optimisations) back to it.

  4. Interrogating issues that arise from testing

    • A good suite of tests will produce products (plots, reports) that can help you diagnose issues. You can even use issues that surface to define new and improved tests.

    • Consider simplifying things if human error is a problem

    • If you find you need to make lots of little tweaks here and there to get something to be perfectly reproducible, all of these need to be noted down, especially if you plan on publicly releasing your code. These tweaks should not be hidden away. Ensure that the version of the code and its initialisation that was actually used to obtain the result is what gets shared, not an idealised earlier version from before you made a ‘minor tweak’ to perfect something, especially if you plan on saying the result can be reproduced.

    • If you are releasing things publicly, also note down the hardware that was used, and how long the code took to actually execute whatever it does.

      • POSSIBLY CONTROVERSIAL OPINION: Do not include examples on your GitHub/GitLab that are not easily tractable unless you give disclaimers in the documentation about run times and hardware requirements. No 1-month-to-run example scripts without a warning that they take a long time, especially if they are mingled in with an example that completes within ~10 minutes. I have encountered more than one piece of software that does this.

      • POSSIBLY CONTROVERSIAL OPINION: If it’s released on GitHub, ensure you’ve actually tested all your use cases, not just the mode you use all the time. Untested code should not be released without a warning, and if you get a bug report, the answer is not to ignore it because you never use the software in that configuration.

      • Not so controversial: stop pushing untested changes to master to ‘tidy up’. You’re making a mess for the next person.
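
The consistency tests above can be as simple as a function that compares named data products from two runs against a tolerance you define, and reports where they diverge. A sketch (the product names, values and the 1e-7 criterion are illustrative):

```python
import numpy as np

RTOL = 1e-7  # example acceptability criterion: single-precision round-off

def compare_products(run_a, run_b, rtol=RTOL):
    """Compare named data products from two runs of the same pipeline.

    Returns a report mapping product name -> (max relative difference,
    pass/fail), which can be logged or plotted to help diagnose where
    two 'identical' runs diverge.
    """
    report = {}
    for name in run_a:
        a, b = np.asarray(run_a[name]), np.asarray(run_b[name])
        denom = np.maximum(np.abs(a), np.finfo(float).tiny)
        max_rel = float(np.max(np.abs(a - b) / denom))
        report[name] = (max_rel, max_rel <= rtol)
    return report

# Hypothetical intermediate products dumped by two runs:
psd = np.random.default_rng(0).random(64)
run1 = {"whitened": psd, "psd": psd}
run2 = {"whitened": psd * (1 + 1e-8), "psd": psd}  # tiny float drift

for name, (max_rel, ok) in compare_products(run1, run2).items():
    print(f"{name}: max rel diff {max_rel:.2e} -> {'PASS' if ok else 'FAIL'}")
```

Dumping the per-product numbers, rather than a single yes/no, is what makes the report useful for diagnosing issues later.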


Once you’ve identified an issue, it needs to be resolved. Do not cross your fingers and hope it will go away. Even if it cannot be resolved (floating-point precision etc.), it needs to be adequately understood so that it doesn’t cause a knock-on effect anywhere. Reproducibility (within acceptability criteria you define, and these should not be overstretched) is a necessary requirement for your code to be producing the ‘correct’ result, but NOT a sufficient one. You should also compare your results to what is expected: just because it reproduces doesn’t mean it is right, and just because it’s ‘right’ doesn’t mean it reproduces.


I’ve amassed the following list over the past few months. It may be extended in the future as I find other issues in my own work.

  1. Is there a variation in the raw data or pre-processing?

  2. Sounds stupid, but the most obvious sanity check is to diff the information going into your computation. Human error is real. Is a random seed being initialised differently? Did a flag get set differently? Did you accidentally modify something you shouldn’t have?

    • Are you assuming something about your data that isn’t true? For example, are you assuming that you have stationary, Gaussian coloured noise when there is actually a large non-stationary feature in the data? If you are trying to whiten data with non-stationary features, it can cause problems down the line (see point 3).

  3. Is the source of the variation in your algorithm?

    • Is there a race condition somewhere? Are there multiple asynchronous tasks that finish at slightly different times on different runs? Example: your program dumps data to a file every ~3 minutes, and that file is read in by other processes every ~30 minutes. Unless you carefully sync everything up, a file will sometimes be written before and sometimes after the read-in process runs if you dump files by wall time rather than by, say, number of samples analysed. Dumping by wall time is, however, necessary for a lot of real-time applications.

    • Have you introduced some sort of randomness somewhere, e.g. a random seed that initialises differently each time? If everything converges to the same answer at the end, you’re probably OK, but if you’re working through this list, it probably doesn’t.

  4. Is the source of variation in another library you are using?

    • Machine precision is the most obvious culprit. GPUs/CUDA are known to have problems, and if you cut down to single precision, you’ll have more issues. You can tell whether this is the core problem, or part of it, by looking at the level of variation: differences on the order of 1 in 10^7 (for single precision) are consistent with machine-precision problems. While not bitwise reproducible, this should not break your science.

    • FFTW has modes that do not yield the same output on every run, with differences at around floating-point precision. FFTW_PATIENT and FFTW_MEASURE choose their plans based on measured performance, so if you run the same thing twice and the computer is busier on one day, the output can differ; for most applications this should not break the science. For FFTW3 to be bitwise reproducible, you need to either use FFTW_ESTIMATE or set up a wisdom file that pre-computes how to slice up the Fourier transform, specific to the hardware you’re using. FFTW_ESTIMATE seems to be the optimal middle ground if you run on different hardware, though.

    • CUDA libraries sometimes cause issues beyond just the floating point error associated with the GPU itself - consider switching out atomicAdd.

    • If it happened one time, and you’re using GPUs, it was probably a cosmic ray

  5. Is the source of the variation your post processing and checking?

    • Check you have not inadvertently introduced an issue when you post-process information. For example, are you reading in an output of your program and then doing something with it that involves a random seed that initialises differently each time?
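
The machine-precision point (item 4) is easy to demonstrate: accumulating the same numbers in a different order, as happens when a parallel reduction splits a sum up differently between runs, changes a single-precision result at the round-off level. A toy numpy sketch (the array size and values are arbitrary):

```python
import numpy as np

x = np.random.default_rng(seed=1).random(100_000).astype(np.float32)

# Accumulate in float32 in two different orders, mimicking a parallel
# reduction (e.g. on a GPU) that partitions the sum differently per run.
forward = np.float32(0)
for v in x:
    forward += v
reverse = np.float32(0)
for v in x[::-1]:
    reverse += v

truth = float(np.sum(x, dtype=np.float64))  # float64 reference value
rel_diff = abs(float(forward) - float(reverse)) / truth
print(f"relative difference from summation order alone: {rel_diff:.1e}")
# A non-zero difference at this tiny level is round-off, not a bug.
```

If your two runs disagree at this level, suspect machine precision; if they disagree by much more, keep working down the checklist.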



While this isn’t my finest or most polished blog post, I wanted to share some information. I imagine this list will grow and evolve over time. If you find something that should be added, I am happy to do so, with appropriate credit given to you (you can DM me on Twitter, @fipanther).


*A note on processes that are random by design (yes, if you’re using any popular inference software, this is directed at you): inference should still be reproducible. You should still get identical results - e.g. the same posterior - once your algorithm converges. More information about testing convergence can be found in this very informative blog post. If you are making any inference code public with the intention that it is reproducible, detailed information should also be provided so that anyone can get it to work. If it is only reproducible on a large compute cluster, that information should be included too - never assume the user knows this implicitly. Many people see public code and assume it will run out of the box on their own machine.
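
Stochastic does not have to mean irreproducible: a sampler driven by an explicitly seeded generator returns identical draws on every run. A toy Metropolis sampler illustrates the idea (the target, step size and seed are all made-up values; real inference codes expose an equivalent seed setting):

```python
import numpy as np

def metropolis(log_prob, x0, n_steps, step, seed):
    """Minimal Metropolis sampler; its output is fully determined by the seed."""
    rng = np.random.default_rng(seed)
    x, lp, samples = x0, log_prob(x0), []
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal()
        lp_prop = log_prob(prop)
        if np.log(rng.random()) < lp_prop - lp:  # accept/reject
            x, lp = prop, lp_prop
        samples.append(x)
    return np.array(samples)

log_prob = lambda x: -0.5 * x**2  # standard normal target

samples_a = metropolis(log_prob, 0.0, 5000, 1.0, seed=123)
samples_b = metropolis(log_prob, 0.0, 5000, 1.0, seed=123)
assert np.array_equal(samples_a, samples_b)  # identical chains, run to run
```

If two runs of your inference code with the same seed do not produce identical chains, the randomness is leaking in from somewhere other than the seeded generator.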

Re: Cover Letters

I have written a lot about job applications over at Space Australia, but one of the most enduring questions I keep seeing is ‘how do I write a cover letter?’

Regardless of whether you are applying for an academic job or an industry one, the cover letter is easily the most misunderstood part of the application. Many think that because online applications are already so extensive, a cover letter is no longer required.

Some things to bear in mind (written especially for the English-speaking world, but this goes for most professional settings):

  • The cover letter will probably be the first thing the selection committee looks at. It should compel them to read the rest of your application.

  • The cover letter should indicate you have taken thought and care in preparing your application for this job specifically.

  • The cover letter should demonstrate your ability to communicate professionally, clearly and formally. It should be written as a letter, not as you would write an email.

  • The cover letter should not repeat the content of your application.

Letter-writing 101

For those who didn’t suffer the indignity of having to practice formal letter writing in 1998 by writing a letter to then-incumbent British Prime Minister Tony Blair, this is how you write a general letter to a selection committee:

  • Header: the address of the person you are writing to goes on the left (so it would show through the window of a hypothetical envelope). On the line below, right-aligned, goes the date on which you write the letter; use the LaTeX command \today to avoid embarrassment. Technically you should also include your own address in the header, but this seems to have fallen out of practice: in a word-processed letter, both addresses would be left-justified, which takes up a huge amount of space.

  • Salutation: ‘Dear <name of the person doing the hiring>’. If the person doing the hiring is Professor Y, then the salutation is ‘Dear Professor Y’. Not ‘Hi’, ideally not ’To whom it may concern’, and especially not ‘Dear sir/madam’ or ‘Dear sir’. People have generally got out of the habit of using a formal salutation in emails, but in a cover letter it shows professionalism and gives the impression you took care to investigate who was doing the hiring (i.e. not writing ‘Dear Sir’ when the person doing the hiring is a woman).

  • Content: see below.

  • Ending: I always end with ’thank you for considering my application for this role. Kind regards, <my name>’. There are some technical rules about pairing certain salutations with specific sign-offs, but these rules are considered old-fashioned outside of highly formal communications, and nobody is going to whip you for using the wrong one.
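
Since \today came up above: the structure described here maps directly onto LaTeX’s standard letter class, which handles the date and address layout for you. A minimal skeleton (every name and address below is a placeholder, not from any real letter):

```latex
\documentclass[11pt]{letter}
\usepackage[a4paper,margin=25mm]{geometry}

% Placeholders - substitute your own details.
\signature{Your Name}
\address{Your Street \\ Your City}

\begin{document}
\begin{letter}{Prof.\ Y \\ Department of Physics \\ University of Somewhere}
% The letter class inserts \today automatically in the heading,
% so the date is always correct when you compile.

\opening{Dear Professor Y,}

Please find attached my application for the role of \ldots

\closing{Kind regards,}
\end{letter}
\end{document}
```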


Cover Letter Content

Your cover letter should be treated as the first part of your application packet that the hiring committee is going to read. It should state what application materials you have included, who you are, your qualifications, your current role, and information specific to the job you’re applying for.

A new cover letter should be written for each job you apply for, regardless of whether you are applying to industry or academic positions. Do not recycle a generic cover letter.

I will dissect one of my cover letters for a successful application:

Paragraph 1: Statement about application

please find attached my application for the role of Research Associate in the gravitational wave group at UWA. Also included is my current CV and publication list.

I state which position I am applying for, the group and institute, and what other attachments I have uploaded. This is important in case something doesn’t upload: the committee can contact me. It also indicates I have read the job ad and know what position I am applying for.

Paragraph 2: Who I am and my qualifications

I have received approval to graduate in December 2019 with my PhD. My PhD thesis was titled ’The origin of Galactic antimatter’. I have a total of eight refereed publications, four of them first author. I received my BSc in physics and math with first class honours from the University of Auckland in New Zealand in 2015, and I completed my PhD under the supervision of Associate Professor Roland Crocker at the Australian National University. My current position is as an Associate Lecturer and researcher in physics at the University of New South Wales Canberra where I work alongside Dr Ivo Seitenzahl. I have over seven years of teaching experience (graduate and undergraduate) and have mentored undergraduate research students.

Indicate the qualification required for the job you are applying for. If you need a PhD, you need to state that you have it, the title of your thesis and your supervisor. I also give the number of refereed papers I have authored (which is needed for an academic position). I don’t include my h-index because an h-index of 6 is not particularly impressive and may actually hurt my application. I also state my current position, as it is relevant to my application for a research position, and that I have teaching experience, as I am applying to a university department with a teaching focus. Because this is a research position, I will also be expected to supervise students, so I mention that I have experience here.

If I had impressive grants, I would mention them here: national-level funding, not winning something in high school or department-level grants. University-level grants can go in your CV. Nobody needs to know you were dux of your high school or head boy/girl if you are applying for a PhD or an industry job, unless you are coming directly out of high school.

If I were to apply for an industry position, I would probably remove the bit about publications and instead highlight project management and software development experience. Instead of teaching, I would mention leadership. I would ensure to include words that link to the key competencies required for the job (e.g. if it was ‘experience in C programming language’, I would say that I have experience working with a large C codebase for real-time signals processing).

The main things are to be truthful (do not inflate your experience - I only include refereed publications; I don’t add my conference proceedings to that number), to highlight ‘good’ statistics (i.e. not including metrics that look mediocre compared to your peers, like the h-index in my case), and to state the things that make you qualified for the position.

In my opinion, only the past five years of experience are really relevant. If you are more than 2-3 years out of high school and applying for something that requires a university degree, your high school credentials are not important.

Paragraph 3: Specific experiences relevant to the job

I have significant experience in signals processing and multiwavelength followup of transients. In particular, my expertize pertains to heavily background-dominated data from the INTERGRAL space telescope. I also have extensive expertize in high-energy and theoretical astrophysics, and physical modelling of astrophysical phenomena.

I leave the typo in there to be totally truthful. I got this job despite the typo, but I do not recommend ever having typos.

The job I was applying for was in signals processing and multimessenger astro, so I highlight the expertise I have in the job area, despite the fact that I had no experience with gravitational-wave data. Before I wrote my application, I narrowed down which technical (not astro-specific) skills I had that were transferrable. Do /not/ say “I have no experience in blah but am excited to learn”. Instead say (if it is truthful - do NOT lie here) “I have experience in x and have developed <transferrable skill> that I can apply to <thing you want to do>”.

Paragraph 4: Rule of three

Firstly, I have a strong academic and research background in signals processing and multiwavelength followup observations with optical and gamma-ray observatories, with strong collaborative ties in these fields both within Australia and beyond.

I have substantial experience in software engineering and the development of software to both analyse astrophysical data and to develop physically motivated models of astrophysical phenomena, including GPU acceleration and the use of machine learning techniques.

Finally, UWA has an exceptional track record of producing exciting cutting edge research in astronomy, especially in the development of low-latency pipelines for GW detection and is home to world renowned experts in the fields of gravitational-wave detection, optical and radio astronomy. I believe my experience in gamma-ray astronomy would complement UWA’s existing expertize. UWA provides an exciting environment with a strong commitment to diversity in which the proposed research can be carried out, as well as the opportunity to contribute to supervision of students, teaching and outreach within the department of physics and astronomy.

In this section I indicate several things: that I read the job ad thoroughly, that I have thought about how I would fit in and why my skills are needed, and that I have done some background reading about the group and the university. Sometimes in this section I mention people in the department by name if I think I could collaborate with them. These three points are how I indicate that I am not just slamming together a generic application, and that I have actually thought about what doing this job would entail. It requires background reading and research: trawling through department and university websites and reading things like mission statements. The upside is that my application is now highly tailored to the job. Using three points also gives a fairly nice rhythm to the letter.

And that is how I write a cover letter, which comes to one page (just over if you include formatting).

Note how I do not repeat the content that would be in my research proposal or CV, except for a very brief mention in my three bullet points. I aim to make the selection committee invested enough from my application to actually read the research proposal.

Mileage may vary with this kind of template - it took me a while, and a lot of looking at other people’s applications, to get to the point where I feel confident writing cover letters. So don’t take this advice as the only way to write a cover letter. Get examples from people who have been successful (and, if I dare say so, unsuccessful) in job applications and take a good look at what they are doing and not doing.

There are also some other bits of advice floating around that you can take or leave. I personally do not mention anything about family, citizenship, gender or anything else; however, I have heard people advise that if you are applying to the same institution as your partner, you should mention it in your cover letter so the university can get the wheels moving on a spousal hire. I would take this advice with a grain of salt, especially for PhD and postdoc applications (it is only really relevant for tenure-track positions, and even then, be very cautious). Regarding citizenship and visas: this information is communicated privately with the university via your application form. The person doing the hiring does not need to know it, as it is technically confidential to HR (to this end, do not attach your visa as one of the application materials; HR will contact you privately to get it).

Good luck with your applications - I have my fingers crossed that you get the jobs you are applying for.

Resources for prospective GW hunters

Lately I’ve noticed I am getting a fairly steady stream of questions from a variety of people - ranging from high school students to colleagues - about how they can interact with LIGO-Virgo data themselves. This is a master post linking to the resources I usually suggest to people who are interested in these analyses.

For any fellow LVK members, if you have resources you would like included here, please email me directly! This page is under continual development and maintenance.

Citizen Science

Gravity Spy is a citizen science project where you classify glitches in LIGO data. The work helps the detector characterization team and the rest of the collaboration to understand and mitigate glitches that interfere with GW detection. Gravity Spy is a great option for any primary or high school students, or anyone who just wants to help out with LSC science through a nice, easy to understand interface. Gravity Spy is run by Northwestern University, LIGO researchers at Caltech, crowd-sourced science researchers at Syracuse University and Zooniverse researchers.

GW Detection

The Gravitational Wave Open Science Center (GWOSC) is your one-stop shop for all things gravitational wave. They host excellent tutorials and easy-to-use Python notebooks, as well as a repository of all the LVK data products that are open access.

GWOSC is usually the first place I send my students when they start a project in our group. Their tutorials are excellent, and require a basic knowledge of the Python programming language.

GWOSC also links to public data accessed through the LSC document control center system. As well as providing raw strain data that can be used by teams outside the LVK wishing to test detection pipelines, they provide processed data products that are useful for astronomers. For example, you can access the parameter estimation results, which include the skymaps for events released in the public catalogs.

They also provide access to code snippets that can be used to read the formatting of these products quickly and easily using Python.

During observing runs, information on the latest public alerts is provided through GraceDB. GraceDB lists important information from the detection pipelines and data products including skymaps and preliminary parameter estimation produced with latencies of minutes to days after detections are made. More information for astronomers interested in followup can be found in the Low Latency documentation.

There are five online detection pipelines as of the end of O3: GstLAL, PyCBC Live, SPIIR and MBTA (all of which are modelled searches - we look for signals whose form is known), and cWB (which searches for transients we do not have a model for).

Of the modelled searches, GstLAL and SPIIR both perform time-domain searches (that is, the matched filter correlates the time-domain strain against the time-domain template), while PyCBC Live and MBTA perform frequency-domain searches. If you require a BibTeX file containing references to the latest pipeline papers, please contact me - I will update this page soon with a downloadable BibTeX file maintained on a weekly basis.
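
To make the frequency-domain idea concrete, here is a toy matched filter in numpy. Everything here is illustrative: the ‘template’ is a made-up chirpy burst, not a real compact-binary waveform, and the noise is white, so the 1/Sn(f) PSD weighting a real search applies reduces to a constant and is omitted:

```python
import numpy as np

rng = np.random.default_rng(7)
n, dt = 4096, 1.0 / 1024  # hypothetical: 4 s of data sampled at 1024 Hz

# Toy 'template': a chirp-like burst under a Gaussian envelope.
t = np.arange(n) * dt
template = np.sin(2 * np.pi * (50 + 50 * t) * t) * np.exp(-((t - 2.0) / 0.5) ** 2)

# Toy 'strain': white noise plus the template injected at a time shift.
shift = 1000  # samples
strain = np.roll(template, shift) + 0.5 * rng.standard_normal(n)

# Frequency-domain cross-correlation of strain against template:
# multiply the FFT of the data by the conjugate FFT of the template,
# then inverse-transform to get the correlation at every time lag.
corr = np.fft.irfft(np.fft.rfft(strain) * np.conj(np.fft.rfft(template)), n)
recovered = int(np.argmax(corr))
print("injected shift:", shift, "recovered shift:", recovered)
```

The peak of the correlation recovers the injection time, which is the essence of what the modelled pipelines do at vastly larger scale with banks of physical templates.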

You can see a public overview of some of the latest SPIIR developments via this video, presented at the 2021 ANITA conference. The first 2 minutes are missing but you get the general idea.

Software Packages

A lot of these software packages are inter-dependent. This is not an exhaustive list but a starting point for the prospective GW hunter. Want me to include your software here? Send me an email or Twitter DM:

PyCBC - simulating signals, parameter estimation, everything you could ever want for GW science

Ligo.Skymap - fantastic resource if you want to plot skymaps.

LALSuite - Some more hardcore LIGO software stuff, good for the really serious researcher

Bilby - used for parameter estimation. More user-friendly than LALInference on its own.

GWpy - Just really useful, I use it for things like converting GPS times

Riroriro - non-LVK but related, good for students who are working their way up to PyCBC

Baffled

In the past couple of years, scientists have spent an increasing amount of time being baffled. While there should be a certain degree of forgiveness for those baffled by the events of the past six months, it seems physicists in particular are baffled by results from experiments we designed and papers we have written. 

Maybe there is a minority of truly baffled scientists out there, progressing through their daily zoom meetings in a perpetual state of shock and awe. The popular media certainly makes it seem that we bounce around the pinball machine of the ivory tower from discovery to discovery without much preconception of what we are even looking for. It seems on an almost weekly basis a new headline alerts us to a new, not at all preempted or expected result. 

Science can be exciting. Science should be driven by the unexpected. But to imply that we are continuously in a state of bafflement about the results of experiments of our own design is disingenuous. It’s a part of modern science reporting that draws clicks and devalues public trust in what we do. 

Any inference (say a measurement) relies on our prior belief, and the method used to collect the data to make that inference also relies on prior belief. We don’t (in most cases) build experiments blindly. Often, an unexpected result is unexpected because it was simply lower down the list of probable events that could happen. 

Say you are told by weather forecasters that tomorrow there is an 80% chance that it will rain at some point during the day. You pack your umbrella and raincoat. Perhaps you drive to work. Your prior belief, informed by the weather forecast, is that it is more likely to rain than not. Consequently, when you head home and notice that the ground is completely dry and there is not a cloud in the sky, you are pleasantly surprised. Some of you may even be somewhat baffled. After all, the weather forecast told you that there was an 80% chance that there would be some rain at some point. 
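The role of the prior can be made concrete with a tiny simulation. This is a toy sketch of the forecast example above, nothing more: with an 80% prior probability of rain, the 'surprising' dry day is not actually rare.

```python
import random

# Toy simulation of the forecast example: even with a prior belief of
# an 80% chance of rain, the 'surprising' dry outcome is common.
random.seed(42)  # fixed seed so the experiment is repeatable

RAIN_PROB = 0.8
N_DAYS = 100_000

# Count the days on which the 20% outcome (no rain) actually happens
dry_days = sum(1 for _ in range(N_DAYS) if random.random() >= RAIN_PROB)
dry_fraction = dry_days / N_DAYS  # close to 0.2
```

Over many repetitions, the 'baffling' dry day turns up about one time in five - surprising on any given day, but entirely consistent with the forecast.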

In science, discovery is often driven by the 20% of the time it doesn’t rain. Usually, the reason we didn’t make a better prediction of all the possible outcomes comes down to one of two things: a lack of available data, or a model that inadequately represents reality.

Yesterday’s announcement of an unusual compact binary coalescence (LIGO-speak for the event where a neutron star merges with either another neutron star or a black hole, or a black hole merges with another black hole) prompted multiple breathless reports of bafflement and amazement. However, just because the merger of two compact objects with vastly different masses (and one with a mass that challenges current ideas about compact objects) was unexpected, it does not mean that the measurement should not be trusted or should be chalked up to experimental errors. 

I am often the first person to critique analysis that draws expansive conclusions from tenuous ‘detections’, especially those heralded by publication in the highest impact journals. In this case, I would argue that any reports of bafflement are greatly overstated. It is widely known that astronomers’ understanding of how binary (and multiple) star systems evolve is a developing field of research. For the exact progenitor of GW190814 to be unclear at the moment is not without precedent: in the case of many Type Ia supernovae (the ones used for precision cosmology), we have no clear idea of what the progenitors of these events look like or how the explosion mechanism works. Closer to home, the progenitor of SN1987A was directly observed in archival images. At the time, stellar evolution models did not predict that a blue supergiant could explode as a Type II supernova; however, binary stellar evolution models reveal channels through which these stars can give rise to supernova explosions.

It is worth pointing out that the observation of GW190814 does not rely on a single pipeline or analysis to derive the properties of the compact objects. The event was independently identified by all four online detection pipelines (including SPIIR, the pipeline I now work on. I was writing my research proposal for my current job when the event occurred!), all of which actually provide information on the chirp mass (amongst many other things) of the event. Further parameter estimation yields an exquisite analysis that many three-sigma discoveries in astrophysics should envy. 

It should go without saying that every event detected by LIGO undergoes rigorous vetting. It is nigh on impossible for a glitch from the detector to go as far as generating such a detailed analysis as has gone into this discovery.

While this observation is unexpected, it tells us something valuable: our current understanding of the physics of compact objects requires further investigation. I don’t think anyone involved in this field of research is truly baffled - ‘baffled’ has simply become science-reporting shorthand for a eureka moment. And as with most eureka moments, our actual response is far more likely to be ‘that’s interesting’.

Is moving overseas a requirement for success in academia?

‘If you want to be successful in academia, you need to be willing to move to wherever the jobs are. It’s a sacrifice you have to make’

I’ve lost count of the number of advice columns, job application talks and conversations with mentors that begin with some variation on this theme. The academic job market is saturated with a large number of extremely talented individuals vying for a small number of job opportunities. For almost all young academics, staying in one location, or even in one country, is not only seen as nearly impossible, but is actively discouraged. The universal advice given to everyone is: move overseas.

Does moving countries for a job opportunity automatically make an individual more worthy of the job in question, irrespective of ability or experience? Should we be discouraging individuals from building a solid foundation for their personal lives, on which their careers can be built, in favour of uprooting their entire existence?

I have emigrated twice. But what did emigrating teach me? I was exposed to new cultures and experiences I would not otherwise have encountered. It made me more tolerant, more understanding of difference and much more appreciative of the power of diversity. Most importantly for me, emigrating (once in particular for a ‘better’ opportunity as a PhD student) taught me that self-development and growth as a scientist does not necessarily come from moving to a new location.

However, it is rarely acknowledged that for many of us, emigration is a deeply traumatic experience. Disenfranchised grief is rarely discussed, but it is an extremely common result of emigration and not an experience I would wish on anybody. Is it right to force ‘self-development’ through trauma?


When we give advice to young academics, we have to also acknowledge that the ability to emigrate and move freely is a significant privilege. The personal circumstances of individuals who are part of minority groups will often make emigration inaccessible. Can an individual with a disability or chronic health condition get the appropriate medical care and insurance when far from home? What about the financial burden of emigration? What about those academics who have families, who would be required to uproot children in order to continue their careers? Not all countries are accepting and welcoming of transgender individuals or those in same-sex relationships, and to emigrate would put individuals in danger or at risk of persecution. Not to mention the fact that many immigrants in various countries may be subject to racial abuse and, at worst, violence.

The advice that ‘it is best to move overseas for your career’ comes from a place of privilege, and this option is not available to everyone. The narrative that the best academics have spent time overseas disadvantages underrepresented minorities for whom freedom of movement isn’t an option. I think dismantling this narrative is one way of achieving greater equity and representation in the academic community.

I think we need to change the narrative that one has to emigrate to be successful. Personally, I believe that a willingness to learn new things, explore new challenges, and build an exciting research program doesn’t necessarily result from moving far from one’s home. All of these things come from within the individual, and from the relationship between the individual and their mentor. They are independent of location. What’s more, it is entirely possible for an individual to become ‘entrenched’ in a way of thinking despite moving halfway across the world. Equally, it’s entirely possible for someone with a solid foundation in a familiar location to take on new challenges and actively seek out new experiences and career development.

The idea that young academics have to uproot their entire lives, place important personal relationships under strain, and in some cases subject themselves and their families to the trauma of relocation in order to be ’successful’ or demonstrate ‘devotion’ or ‘commitment’ to their careers is wrong. Sometimes, we need to define ’success’ as recognising one’s own priorities. If that priority means not uprooting one’s entire life or abandoning a developed support network, and instead pursuing a job opportunity close to ‘home’, then we need to understand that and work to accommodate it.

Let’s end the narrative of requiring the abandoning of one’s home for academic ’success’. Sometimes, a solid, unshakable foundation is the best place for someone to learn, grow and flourish in their career.

Postscript: I was fortunate enough to find a job in the same country as I completed my PhD, and I was also fortunate enough to emigrate between my undergraduate degree and my PhD.

However, I was also ready to walk away from my academic career if I could not find an academic job here in Australia. This made finding an academic position orders of magnitude harder. This blog is my personal opinion, which is that getting a job in a country where you are already established is harder largely because there is an expectation that academics have to be highly mobile and willing to uproot their families.

I’d like to dismantle this, as I think it comes from the fact that academics were traditionally single men or men whose families had to adhere to the whims of the father figure. Freedom of movement is an accessibility issue that is limiting opportunities for URMs in academia.

I have been waiting to publish this blog since I was asked a pointed question about “what was wrong with all my collaborators in Germany and why wasn’t I moving there?” during an interview for a job I ultimately did not get at my PhD institution. The question seemed to imply there was something wrong with me seeking a job at an institution where I felt at home. I challenged it in the interview with many of the points I stated here. I hope if anyone is asked a similar question, this will give them the confidence to challenge an unnecessary narrative we’ve been buying into for too long.

Supernovae for stars of all ages!

Every second, somewhere in the Universe, a star explodes.

And astronomers, being human, like to try and sort each of these exploding stars into categories based on their observed properties, so we can understand them better.

This flowchart shows how astronomers figure out which category most exploding stars belong in.

Choose your own adventure, supernova style!



From this flowchart, it looks like there is far more variety in so-called ‘core-collapse supernovae’, or dying massive stars - type Ib/c and type II - than there is in type Ia or ‘thermonuclear’ supernovae. As a result, many astronomers mistakenly believe that type Ia supernovae are ‘well understood’. It doesn’t always help that we also use them as distance markers to measure how far away distant galaxies are. In fact, type Ia supernovae were used to show that the expansion rate of the universe is accelerating!


However, while we know that some type Ia supernovae - the so-called ’normal’ type Ia supernovae - make good distance indicators, there is still a lot we don’t know about them. For instance, we know that they must involve the explosion of at least one carbon-oxygen white dwarf star that reaches a mass close to 1.4 solar masses because it gobbles up material from a binary companion, and that they glow brightly and then fade away because radioactive nickel is made when this explosion takes place. Yet we aren’t sure exactly how the explosion happens, or what the exploding star system looked like before it died.

And that’s just the normal type Ia supernovae. Just like core-collapse supernovae, there is actually a whole zoo of varieties of thermonuclear supernovae that look slightly weird. One of these varieties is called the ‘SN1991bg-like supernova’, or 91bg-like SN for short. These supernovae can’t be used as distance indicators because the relationship between their maximum brightness and how fast they decline cannot be standardised in the same way as for normal Type Ia supernovae. They are much fainter, and fade away much faster. This means they make much less radioactive nickel.
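To see what ‘standardised’ means for the normal events, here is a minimal sketch of a toy width-luminosity relation in the spirit of the Phillips relation. The coefficients below are invented for illustration, not fitted values - the point is simply that normal SNe Ia follow a simple brightness-decline relation, while 91bg-like SNe do not.

```python
# Toy width-luminosity relation for *normal* Type Ia supernovae:
# peak absolute magnitude as a linear function of the decline rate
# dm15 (how many magnitudes the SN fades in the 15 days after peak).
# Coefficients here are illustrative only.
def standardised_peak_mag(dm15, fiducial=-19.3, slope=0.7):
    """Estimate peak absolute magnitude from the decline rate."""
    return fiducial + slope * (dm15 - 1.1)

# A typical normal SN Ia (dm15 ~ 1.1) recovers the fiducial brightness;
# a faster decliner is predicted to be fainter (a larger magnitude).
normal = standardised_peak_mag(1.1)
fast = standardised_peak_mag(1.9)
```

91bg-like SNe fall well off any such simple relation, which is why they can’t be corrected into standard candles this way.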

We also know they make some elements that normal Type Ia supernovae don’t usually produce. The spectra of 91bg-like SNe show strong absorption lines that can only be due to the presence of titanium in the thermonuclear ash (see the link for a definition of the term ‘ash’ in this context!). These two pieces of information - that 91bg-like supernovae don’t make a lot of nickel, and that they make a lot of titanium - give us an idea of what the stars may have looked like in the moments before they died, and during their deaths.

What we really want to know, though, is what made the stars get to that point. From the supernova explosion alone, it is hard to tell how old the star system was when it finally exploded. Knowing the ages of star systems that explode as supernovae is important not only to understand how common different types of supernova explosions are, but also to inform other astronomers who model when and where the chemical elements are formed in galaxies throughout cosmic time. What’s more, there may be a relationship between the age of a system exploding as a supernova and its intrinsic brightness, which can introduce biases in our distance measurements in cosmology.

So how do you measure the age of the star system that made a particular supernova? One method is to look at the stars around the location where the supernova exploded. On relatively large scales of a few thousand light years, stars in a spiral galaxy tend to stick in the groups they were born in. So if we can look at just the light from this population of stars, we can attempt to measure how old the stars are.


The ‘fingerprint’ left by stars of a particular age in a stellar population, from 100 million yrs (top) to 15 billion years (bottom). While the bottom one technically overshoots the age of the universe, this happens because there are lots of uncertainties in modelling stellar spectra. This is why sometimes stars are reported to have ages older than the universe - this isn’t because of some kind of scientific conspiracy or new physics, it’s just because we can’t model how elements emit light in the atmospheres of stars that well!

This is possible because the spectrum of light from stars of a particular age carries a specific fingerprint. We can try and match the fingerprint light of the stars close to the supernova to fingerprints of particular types of stars we have on file to calculate how old the stars are.
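As a rough sketch of that matching step - all the template ‘fingerprints’ below are invented numbers, and a real analysis fits full model spectra rather than three fluxes - the idea is to pick the template age that best matches the observed light:

```python
# Hypothetical fingerprint matching: compare an observed spectrum to
# template spectra for stellar populations of different ages, and pick
# the age whose template fits best (smallest chi-squared).
TEMPLATES = {
    1e8: [1.00, 0.80, 0.60],   # 100 Myr population (made-up fluxes)
    1e9: [0.90, 0.90, 0.70],   # 1 Gyr population
    1e10: [0.70, 0.95, 0.90],  # 10 Gyr population
}

def chi_squared(observed, template):
    """Sum of squared differences between observed and template fluxes."""
    return sum((o - t) ** 2 for o, t in zip(observed, template))

def best_age(observed):
    """Return the template age (in years) with the smallest chi-squared."""
    return min(TEMPLATES, key=lambda age: chi_squared(observed, TEMPLATES[age]))

observed = [0.72, 0.93, 0.88]  # this one looks most like the old population
```

In practice, the templates come from stellar population synthesis models, and the fit accounts for dust, noise and mixtures of ages - but the core idea is this comparison.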

Me with the red arm of the WiFeS spectrograph inside the 2.3m telescope at Siding Spring Observatory


That’s what I did in my most recent paper! In this paper, soon to be published in Publications of the Astronomical Society of Australia, we used a special camera called WiFeS, attached to the ANU 2.3m telescope at Siding Spring Observatory in New South Wales, Australia, to take pictures of galaxies where 91bg-like supernovae exploded. WiFeS is short for the ‘Wide Field Spectrograph’ - it’s a special camera that can take pictures where each pixel of the image contains the spectrum of light associated with that pixel. This is a technique astronomers call ‘Integral Field Spectroscopy’.

By only looking at the light in the pixels immediately surrounding the location in a galaxy where a supernova occurred - an area with a physical radius of around a thousand light years in each galaxy - my co-authors and I were able to calculate that the average age of the stars that would have been born around the same time as the stars that exploded was greater than six billion years!

On its own, this may not sound that exciting, but it turns out that there are no other types of supernovae known to occur so long after star formation. Most stars that die as core-collapse supernovae die only a few million years after they form, and most thermonuclear supernovae occur about a billion years after star formation. So these are the oldest star systems found so far to explode as supernovae!

This also means that new chemical elements can be made long after stars form - in particular, titanium. Most simulations assume that only core-collapse supernovae produce titanium, and they do it only a few million years after massive stars form. While 91bg-like supernovae occur at a lower rate than core collapse supernovae, this is more evidence that new chemical elements can be formed long after most astrophysicists assume that nucleosynthesis has mostly switched off!

The work also confirmed the long-held belief that 91bg-like supernovae are associated with old stellar populations (they occur at a much higher rate in elliptical galaxies containing old stars than in spiral galaxies), as well as quantifying just how old ‘old’ really is! This might also give us a hint at what sort of star systems give rise to these supernovae, as they must be binary systems that live a really long time - bringing us just a little closer to understanding the mystery of what sort of stars really end their lives as thermonuclear supernovae.

3-D Scanning A Dying Star - With Optical Light!

If astronomy were a beauty contest, supernova remnants would almost certainly win first prize. These beautiful objects are the remains of dying stars. With careful study, it’s possible for astronomers to use their own forensic tools to figure out what kind of star died, and why.

The glowing remains of a core-collapse supernova in the Large Magellanic Cloud (source: ESO/https://apod.nasa.gov/apod/ap180930.html)



Just as our understanding of life and death has evolved here on Earth with the development of sophisticated tools, so has our understanding of the lives and deaths of stars. One of the key tools used by doctors and forensic scientists today is computed tomography - the ‘CT’ scan, sometimes also referred to as a ‘CAT’ (computerised axial tomography) scan.

Archaeologists have begun to use this technology to study the mummified remains preserved by the Ancient Egyptians. Computed tomography pieces together a three-dimensional image of the inside of the mummified body using X-rays, meaning [TW/CW: link contains images of a 3D scan of mummified human remains] the internal structure of the preserved remains can be studied without harming them. This kind of technology is important as it allows us to study irreplaceable artefacts and human remains with the respect and care they deserve. The same technology can be used to rapidly scan the human body to detect internal injuries, such as bleeding on the brain or in the abdomen.

But what about the thousands-of-years-old remains of a dying star?

When a star dies, it hurls material outwards into the interstellar medium, along with powerful shock waves, moving at tens of thousands of kilometres per second, forming a supernova remnant. When the outgoing shockwave slams into the interstellar medium, a ‘reverse shock’ is generated. This shockwave travels inwards with respect to the expansion of the supernova remnant.

As the reverse shock travels inwards, it heats and ionises the supernova ejecta to millions of degrees centigrade. When supernova ejecta material, which contains a lot of iron, is heated to these temperatures, X-ray emission lines are produced. The Doppler shift of these emission lines tells you the speed at which the supernova ejecta is moving, so you can map where the supernova ejecta is in space, and how fast it is moving. Combining all of this information together using a computer, you can then build an internal map of the structure of the supernova ejecta, along with its temperature and velocity. This is another example of computed tomography!
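The velocity step rests on the Doppler relation v = c × Δλ/λ. Here is a minimal sketch; the observed wavelength below is invented for illustration, and for ejecta moving at a few percent of the speed of light this non-relativistic formula is only a first approximation.

```python
# Non-relativistic Doppler shift: a line emitted at rest wavelength
# lambda_rest but observed at lambda_obs implies a line-of-sight
# velocity v = c * (lambda_obs - lambda_rest) / lambda_rest.
C_KM_S = 299_792.458  # speed of light in km/s

def doppler_velocity(lambda_obs, lambda_rest):
    """Line-of-sight velocity in km/s; positive means receding."""
    return C_KM_S * (lambda_obs - lambda_rest) / lambda_rest

# Illustrative numbers: a line with a rest wavelength of 5303 Angstroms
# observed at 5320 Angstroms corresponds to roughly 1000 km/s recession.
v = doppler_velocity(5320.0, 5303.0)
```

Doing this for every line of sight across the remnant is what lets a computer assemble the three-dimensional map.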

However, this gets very tricky with x-rays because x-rays are very hard to focus accurately. Most supernova remnants are so far away that they appear as little more than point sources to X-ray telescopes, so while we can measure the amount of x-ray emitting material present, and its average temperature, it is hard or impossible to map exactly where it is, or how fast it is moving.

Unlike x-rays, optical light is easy to focus (that’s why we can see so well!). So can we use optical light emitted by the shocked supernova ejecta to map where the ejecta is and how fast it’s moving?

Highly ionised iron will happily emit a lot of X-rays, but can it emit optical light too? It can, but at a much lower rate. The optical light emitted by Fe XIV ions (that’s an iron atom with 13 of its electrons stripped away by the reverse shock) comes from a mechanism called ‘forbidden emission’. This means that if we considered only the types of electron transitions allowed by the standard rules of quantum mechanics, the transition just wouldn’t happen. However, if we include a bit of extra physics, such as magnetic dipole transitions, then the emission can happen, but at a much lower rate than a non-forbidden transition. We usually write the name of the ion producing the emission line in square brackets if the emission line is forbidden - i.e. [Fe XIV].

Because [Fe XIV] emission is so faint, it is very hard to see - you need to use a very big telescope, looking at the supernova remnant for a very long time. Fortunately, astronomers have a Very Large Telescope (the VLT, because astronomy acronyms either lack imagination completely or use far too much of it). The VLT is made up of four telescopes, each with a mirror eight metres in diameter - I don’t think Australian Geographic have got around to stocking those yet!


The three supernova remnants studied by Dr Seitenzahl and his colleagues in this paper. Blue represents interstellar gas (hydrogen) heated by the outgoing shockwave, red the X-rays emitted by iron heated by the ingoing reverse shock, and green the [Fe XIV] emission from the same iron-rich ejecta, observed using the MUSE instrument. This is the first time optical [Fe XIV] emission associated with the reverse shock in a Type Ia supernova remnant has been observed.

Dr Ivo Seitenzahl and his team of researchers used one of the VLT telescopes, equipped with an instrument called MUSE, to look for [Fe XIV] in several supernova remnants (free downloadable version here). MUSE is able not only to take a picture of the supernova remnant, but also to extract a spectrum from each pixel. That way, the scientists can study which parts of the supernova remnant emit different colours of light. Because the VLT is so big, and MUSE is so sensitive, this work is the first time optical light has been used to map the structure of the reverse-shocked ejecta from a type Ia supernova, by observing the green light coming from [Fe XIV] emission.

By studying how fast this iron-rich ejecta is moving in three different supernova remnants, the scientists were able to determine just what kind of stars might have exploded - one of the great mysteries of these kinds of exploding star is that we don’t know what the stars looked like before they went supernova.

So just like X-ray tomography on Earth allows us to peer inside the human body and ancient artefacts to learn more about them, this optical tomography allows us to peer inside and reveal the structures left behind by cataclysmic cosmic explosions. Dr Seitenzahl and his team have created a brand new tool that will allow us to better understand just what makes a star explode!

Lindau Nobel Laureates Meeting: Closing Ceremony Speech

At the beginning of July this year, I had the privilege of not just attending the 69th Lindau Nobel Laureates meeting, but also the amazing opportunity to speak on behalf of the young scientists at the closing ceremony. Here is the thank-you speech I wrote for the Nobel Laureates, the organising committee and the young scientists, in gratitude for an incredible week in Lindau.




Nobel laureates, young scientists, distinguished guests. 

We began our journey this week as strangers - each of us arrived in Lindau from all over the world, each with our own unique story and perspectives, eager to learn. Now, at the culmination of a vibrant scientific and social program, we leave not just as friends, but as members of the same family - the Lindau Alumni family - and as part of one another’s story forever. 

On behalf of the young scientists, I wish to thank the organisers, the council, and Countess Bernadotte for their hospitality and tireless efforts to make the 69th Lindau Nobel laureates Meeting such a wonderful experience for all the attendees. 


I also wish to thank this year’s international host country, South Africa, for bringing us together in a celebration of music and dance at the international dinner. I am sure this is a memory we will all treasure for a long time to come. 

I have asked the young scientists to do the impossible: to sum up in a single word their experience this week. The words I have heard again and again are enlightening, empowering, inspiring, wild, humbling, amazing.   


This week, we have heard inspirational stories of resilience, determination, creativity, passion, and the discovery of the unexpected. 


Isaac Newton once said, ‘if I have seen further, it is by standing on the shoulders of giants’. This week, we have been fortunate enough to be invited to stand alongside the Nobel laureates, and to see further through their own eyes. 

On behalf of the young scientists, I thank you for generously giving your time to share with us your personal journeys of discovery, your unique perspectives of our world, and for inspiring every person here.  


As we all leave Lindau, the young scientists will take the lessons we have learned here with us - to move forward together, united by our curiosity about our wondrous universe, not divided by fear of the unknown. 

I am proud to stand here to represent not just the young scientists, but the Lindau Aussies, and also my home, New Zealand. 

I would like to leave you with a thought from my home country:


He aha te mea nui o te ao?

He tangata, he tangata, he tangata



What is the most important thing in the world?

It is people, it is people, it is people. 

This for me is the essence of the success of the Lindau meeting: it is people, each and every one of you. 

This week, we have learned to build bridges, not borders, between us. A better world for all people is within our grasp, if we work together to break down barriers, defy expectations, and to write our own stories, daring to take a leap into the unknown. 

Thesis Bootcamp, or how I wrote my entire thesis in a single weekend

There are many advice blogs describing how to write a PhD thesis, probably about as many as there are PhD students. While the experience of producing a thesis unifies all PhD students in the end, each PhD thesis is unique. Even two theses on the same broad topic (say, radio astronomy) may look completely different, so what helps one person write their thesis may actually hold back another person. 

That being said, there are some more general pieces of advice that can be given, especially based on the experience of writing a thesis. I handed in my PhD thesis just over a month ago - around 200 pages and 60,000 words representing three and a half years of research work. I also wrote the entire first draft in a single weekend - and this is how I did it. 

The Australian National University Thesis Bootcamp (#ANUTBC) is a unique program run by Dr Inger Mewburn (@thesiswhisperer) and her dedicated team of helpers. Over the course of two and a half days, they work together with PhD students to overcome the fears and concerns that hold them back from writing, while providing a safe and comfortable environment for us to just ’shut up and write’. There’s added incentive for showing up - with neat prizes for every 5000 words written. At the end of the weekend, every student leaves having written a minimum of 5000 words. At the November bootcamp I attended, four people (including myself) wrote over 20,000 words (and received their coveted gold brick)! 



Lots of universities have a similar program. So if you’re wondering exactly what makes it so good, here’s my take!

You learn how to write. Really write

When it comes to writing, it is tempting to constantly self-edit. We delete and re-write sentences as we go, searching for the perfect phrasing. This may not be too uneconomical if you are writing, say, a blog post or even a research paper for publication. However, your thesis is long-form writing. If you agonise over the placement of every word, that means agonising over the placement of anywhere from 20,000 to 100,000 words.

Learning to write without the constant need to self-edit and critique isn’t easy. For many of us, it means breaking the habits of a lifetime. The first exercise at #ANUTBC was to write an introductory paragraph with a series of prompt sentences without the use of the delete key. Sounds easy? 


Actually, it’s downright impossible. But this five-minute exercise is enough to begin to break down the barriers of constant self-editing that hold us back from writing fluently.

You stop being afraid of imperfections

One of the big parts of #ANUTBC is keeping a running total of words written, which means all words written. No material is deleted during #ANUTBC, and editing isn’t allowed. The three days are designed for you to write as much of your thesis as you can without the usual distractions (all food is provided, for example), not to write a polished, finished thesis. I found that the more I wrote, the more I felt OK with my writing not being perfect.

Usually I’d only be satisfied (or even comfortable) with a very polished piece of writing, but seeing sections become complete gave me a real sense of accomplishment (even if they needed future editing) and motivation to write even more. 

The preparation required helps you see your thesis as a whole

The PhD thesis really is the sum of its parts, and when you’re working on your research project it’s easy to only see those parts. Prior to #ANUTBC, each participant is required to create a thesis roadmap and meet with a learning advisor to discuss their writing style and what they want to get out of #ANUTBC. These appointments are unique to each student; for me, I wanted to discuss how my thesis came together to combine my very different papers into a single document. 



The roadmap was the single most useful thing I did as it gave me a breakdown of all my section, subsection and sub-subsection headings for my whole thesis intro, connecting chapters and conclusions. With the roadmap, I could see how my thesis came together, and if I got stuck writing one section, I could move on to another and not waste time rifling through references. 



You are in an environment which is highly motivating

There’s a reason that they don’t train military recruits as individuals. While each person at #ANUTBC was working individually, the staff created an atmosphere that felt more like we were all working toward a common goal. This was accentuated by the activity on the Sunday morning where we were paired up with another PhD student with a completely different research topic, and were given 15 minutes to come up with a pitch for a unique piece of research that combined our research areas. 

With the total word count for the weekend being made up of everyone’s contributions, and the shared mealtimes and activities, the whole weekend felt like a collective effort. It’s easy to get lonely on your PhD journey and lose motivation, especially when writing your thesis, but #ANUTBC directly counteracted that with the planned activities, the positive attitude of the bootcamp leaders and Inger’s infectious enthusiasm!

It works for all subjects, and any style of thesis

I was very skeptical about #ANUTBC before I attended. I even went as far as to say to friends that I wasn’t sure it would even be productive for me, as I was doing thesis by compilation and I wouldn’t have access to my references. 

I was wrong. 

It’s easy to underestimate thesis by compilation (where your chapters are made up of already published research you produced during your PhD). However, it can sometimes be harder to write the introduction and connecting material when you have already published the bulk of your PhD work, and it’s often surprising the amount of introductory and connecting material that is required to make the thesis a cohesive whole. Thesis by compilation is not, contrary to popular belief, simply stapling together your published papers. The thesis still needs to make sense as a single document. 

Inger and her team are experienced in dealing with students from all faculties and schools, and are more than familiar with the unique challenges of thesis by compilation, making #ANUTBC the perfect place to work on yours.

The weekend was exhausting and at times difficult - there is very little respite or downtime, and if you are unwell like I was (I had glandular fever) and need time alone to recuperate from constant social interaction, it can be very challenging. Even so, it was a very rewarding experience. I recommend it to all students nearing the end of their PhD at The Australian National University, and it is definitely worth investigating whether your own institution offers something similar. 

Happy PhD writing, everyone!





Why I won't be removing the death metaphors from my astronomy work

In the months after the deaths of my grandmother and grandfather earlier this year, I took a deep dive into the idea of death positivity, and how to spread it. This was partly in response to feeling I had nobody I could talk to about my grandmother’s death (aside from close family). More recently, I’ve been finding out more about becoming a funeral celebrant (fun fact: there is a large market for people who want a ‘humanist’ funeral service as opposed to a religious one, and not that many humanist celebrants out there). 

So, I was already thinking quite a bit about how our view of the universe can help us understand death and dying. And then I came across this really interesting tweet: 

https://twitter.com/sarahkendrew




I don’t entirely disagree. Astronomy has co-opted some pretty gross terminology that can definitely be viewed as violent: ‘galaxy harassment’? Probably one we can do away with. But the idea that death metaphors in astronomy are intrinsically violent? That, I don’t agree with. 

In the Western world, with our deeply ingrained taboo around death and dying, we often think of death as inherently violent. When we encounter death in popular culture, it is usually as a consequence of violence. Medical dramas, shows such as “CSI” and “Criminal Minds”, and even popular soaps such as “Coronation Street” and “Home and Away” regularly hinge their attention-grabbing storylines on violent death. News stories about violent death are often sensationalised, satisfying our natural curiosity about death without actually confronting the taboos that prevent us from openly talking about it.

Consequently, most people will be exposed most frequently to death by violence. Even if you experience the peaceful death of a close family member on a couple of occasions, the frequency with which you are exposed to the idea of violent death via the media leads to the conflation of death and violence. 



So how can we use death metaphors in astronomy to undo this connotation? I spend almost all of my working life thinking about how stars die, and the processes that occur after their death. Dying stars are the factories that produce the building blocks of life. All of the things that will be left over after you die - the calcium in your bones, the iron in your blood and every other atom in your body - were forged in the heart of a dying star. A process of creation, not one of violence. I also think about what happens to the star after the immediate moment of death; this is often the aspect of death that fills people with the most dread and horror. When it comes to death in the cosmos, the remains of a star can create something incredibly beautiful: a planetary nebula, a supernova that outshines an entire galaxy, or the spectacular outflows we see from galaxies producing large numbers of dying stars. 

In Western culture, the processes that follow death are not seen as beautiful - we go to incredible lengths to cover up the natural processes of decomposition, through which our atoms are returned to the universe just like those of that dying star. Our fear of the natural consequences of death has led not only to a large number of misconceptions about the corpse (for example, that corpses are disease vectors, that it is illegal not to have a body embalmed, or that you cannot be involved in the care of the corpse) but also to the rise of an industry devoted to the beautification of the corpse.

A beautiful star corpse, spacetelescope.org


By talking about the incredible beauty and remarkable things that can happen in the cosmic corpses of distant stars, I really hope I can make people pause to consider that the natural processes that occur after our deaths are beautiful in their own way, and take away some of the fear and stigma that surrounds talking about death. Oh, and if I can convince someone to actually write a death plan to help their family, choose a more ecologically sound method of interment, and save some pennies by diverging from the narratives of the Western funeral industry, that’s just a bonus. 


Find out more about The Order of the Good Death, and ‘death positivity’

[Doctor Who spoilers below!]

p.s. When it comes to death narratives in popular Western culture, I honestly think Doctor Who probably gets it best through its overarching narrative (and it doesn’t usually show the results of any violent deaths) - the whole concept of rebirth is pretty great, and [spoilers] 10/10 to whoever decided to show a wicker casket in the first episode of this new season. It’s an excellent, affordable and eco-friendly option if you’re in the market for a casket. 

Stop writing your conference talk during the conference

One of my current hobbies at conferences is to sit at the back of the room and count the number of people who are finishing, perfecting or, let’s be honest, just starting to write their conference talk. And if we’re even more honest, everyone at some point has been guilty of leaving their slides until the session (or talk) immediately before theirs, myself included. 

 

But just because this is common, does that make it a good idea? Probably not. Preparing your slides during conference sessions means

a) you aren’t getting the full benefit of listening to the conference talks, which is usually the number 2 reason you’re at the conference aside from the free food,

b) you are causing yourself anxiety by rushing to finish slides,

c) you haven’t had time to script and practice your talk.

 

The first two are obvious downsides. If you’re focussing on creating your own slides, you aren’t concentrating on anyone else’s. And we all know that leaving things until the last minute can create unnecessary anxiety. But what about c? 

 

Writing a talk, especially a longer one (30+ minutes), let alone scripting and practicing it, can take significant time away from your research. This is probably the number one reason that talk writing tends to get pushed to the last minute. However, there are some good reasons not only to take the extra time to prepare your slides, but to script and practice your talk too.

 

Captive audience:

While research output and productivity are measured by the number of papers produced and citation metrics, you will never have a captive audience for your paper (except perhaps the reviewer). You can’t lock the people you want to see your work in a room and force them to dig through 30 pages of content. Well, you could lure people in with the promise of free barista coffee and then lock the door, but I think that counts as kidnapping.

 

On the other hand, a conference talk or colloquium gives you the perfect (legal) opportunity to lock people you want to see your work in a room. What’s more, you also now have the opportunity to ensure they take the correct (i.e. your) take home message away from the paper. 

 

For students especially, another thing to bear in mind is that your future employers are highly likely to be in this captive audience - and that not everyone in the captive audience is necessarily going to be captivated by your talk. Which brings me to the next point…

 

Your talk is a performance:

Not only is a conference talk an opportunity to showcase your research, it’s also an opportunity to showcase your skills as a communicator, and communication is a vital soft skill in almost all jobs these days. This is especially true of the short, one-minute presentations usually given by those presenting posters at a conference. A good short presentation describing your work, delivered confidently and in a way that invites people to find out more about your research, will take your poster from excess luggage to a genuinely worthwhile pursuit. 

 

Why you should script your talk

The idea of scripting talks is controversial. When scripting a talk, it’s very easy to fall into the trap of writing everything down and reading from a piece of paper (hint: do not do this at a conference). Most people avoid this pitfall by only noting down the bullet points they need to cover for each slide. However, this still relies on a certain degree of improvisation. 

 

It’s worth noting that even ‘improvised’ comedy shows on TV, and live broadcasts like “BBC Stargazing Live”, are largely scripted. In a past life, I tried my hand at stand-up comedy, and quickly realised the advantages of scripting (in detail) exactly what I planned to say. The goal with many of these shows, however, is to appear as unscripted as possible. So, is it possible to write a script for your talk and leave everybody none the wiser? Yes - for some it comes naturally, while for others it may take extra practice… 

 

Why you absolutely must practice your talk

A well-practiced script will, after a while, begin to resemble improvisation. The best conference talk I ever gave, a 20-minute invited talk, was scripted down to the last word. By the time I came to give the talk, I had thrown the paper script in the bin, along with all its stage directions: when to pause, when to walk around in front of the screen. Even if I was interrupted with a question, the script was so well practiced that I could simply return to it as though nothing had happened. I probably spent around 15 hours in total practicing, everywhere from the lecture theatre at my research school to the car on the way to work. 

 

OK, this is excessive, but it’s what I personally do: being able to give a good conference talk is part of my job. Regardless of how much detail you put into a script, you must practice, and you must practice until you are confident that you can deliver the talk in the environment of the conference. This means actually practicing the talk in a lecture theatre if you can, not sitting behind your computer mumbling to yourself. While this is awkward to begin with, you will quickly identify the parts of the presentation where you stumble and hesitate, and you can work to fix them. 

 

 

Another reason to practice your talk is to avoid the one cardinal sin of the conference presentation: running over time. A talk that runs over time only irritates people: you are the only thing standing between them and the buffet table. As soon as an astronomer hears the tell-tale clanking of the catering company putting out the donuts and coffee cups, it’s over. 

 

The shorter the talk, the worse it is if you run over time: I often think that if a 50 minute talk runs 5 minutes over, it’s ok. If a 10 minute talk runs 5 minutes over, you’ve already taken your time plus an extra half. Practicing and timing your talk is extremely important, but it only works if (as I said above) you’re practicing under the same conditions as you’re presenting. 

 

If you’re at a conference, you will have the session chair timing you and providing you countdown reminders (5 minutes to go, 2 minutes, finished, usually). Please, for the love of God, look at the damn session chair. As nervous as you may be, it’s a nightmare to be a session chair and to spend 5 minutes trying to get the speaker’s attention to tell them they’re finished. Different session chairs have different levels of aggression over this: some will stand up and take the microphone away (I’ve seen this done) and some will just let you go on forever. Whatever you do, keep the chair and their timing signs in your field of view, and try to acknowledge them no matter how nervous you are. 

 

Overall, I think giving talks is important, be it at conferences or elsewhere. However, a poorly planned, rehearsed and executed talk can actually do more harm than good. Being an academic isn’t just about research: preparing for talks and conferences is work, even though it isn’t your primary task. Devoting appropriate time to writing and rehearsing your talk can make all the difference to your career and opportunities. For the minutes or hour you are on stage, you have enormous power to communicate your science, so use it wisely. 

But first, coffee...

I think what sealed my fate in choosing an institution to call home for my PhD was the coffee machine. Mt Stromlo Observatory has three. Two sit in ‘Possum Hall’, a common area so named because a rogue possum got in while the concrete floor was being poured and left footprints (which remain to this day). 


 

I started drinking coffee around the age of 16, when I got my first job in a stationery store. Feeling like a proper grown-up, I decided that ordering a latte from the cafe down the road befitted my new grown-up status. Over the years, I went from drinking mostly hot milk to straight black coffee - partly because of a dairy intolerance I developed over the last four years, and partly because I prefer the taste. 

 

Coffee - or hot beverages in general, since not all scientists drink coffee - drives science. Offices may have water coolers, but we have the coffee machine. At Mt Stromlo, I was very taken with the fact that almost everyone - IT staff, senior scientists, postdocs and students - emerges from their offices at 10:30am and lines up for the coffee machine. Friendships are made (and on at least one occasion, broken) in the coffee line, and collaborations spring from casual questions. The line is longer than it needs to be, because everyone knows which coffee machine is better. 

Henry Zovaro forgets to catch his coffee in his mug, invoking a true scientific paradox: "what do I need to do if I need coffee to figure out how to get my coffee?"


 

My relationship with coffee machines in universities really began where I did my undergraduate degree. In the physics tearoom, the coffee machine was an annoyance. Every 10 minutes or so, it would loudly blow steam through its valves. The milk tubes would block up and have to be cleared by the weary atom-optics PhD students I spent my time with. Yet the coffee machine was the focal point for all my friendships - each morning I knew I could sit and discuss the research problems I was stuck on with the other students, and I knew I could wander down at 3pm and talk to the senior professors. The coffee machine was my first introduction to academic politics and collaboration. Ironically, I didn’t drink the coffee out of the machine, instead choosing to consume so much instant coffee that I gave myself chronic heartburn toward the end of my honours year.

 

I travel a lot - probably more than average, because my collaborators are dispersed around Europe. The most important thing when I arrive at an institution is finding out where the best coffee is. In Europe, this is a challenge. Australia is world-renowned for having exceptional coffee. A friend who recently moved to Turin for a postdoc has even informed me that Australian coffee is superior to Italian coffee; I hope, for his sake, that the Italian government hasn’t bugged his phone. Recently I landed in Belfast, and the first order of business for my host was to brief me on the coffee situation. 

 

Having lived in Australia for a time himself, he recognised the importance of the coffee not just being available, but being good. My campus tour featured several cafes, and a run-down of the coffee quality at each. I enjoy finding common ground with the people I work with, and often that common ground is coffee, rather than growing up under the watchful eye of dentist parents, emigrating multiple times and having a large portion of one’s family living in Aberdeenshire (all of which I share with my Belfast host). Being British by birth, it’s even better if that common ground is something you can complain about. If you’re at Queen’s University Belfast, go to Junction, in the law building. 

 

Not all Australian coffee is good. If it’s coming out of a machine in an astronomy department, it can be hit and miss. I’ll never forget the time I accidentally ordered a double espresso from the machine at the University of Western Australia node of ICRAR and was presented with something that looked like dirty engine oil, tasted like kerosene and, in the brief moments it took me to swallow it, enabled me to see through time. 

 

By the same token, not all European coffee is bad, but it takes some time to figure out what’s what. Watch the locals. If you’re ever at the Max Planck Institute for Extraterrestrial Physics in Garching, use the coffee machine in the very corner of the ground-floor coffee nook - it produces twice as much coffee as the one on the left. If you’re visiting MPA, next door, make the trip and spend the 40c on that machine, because whatever came out of the MPA urn in a sad, grey trickle has no business calling itself coffee. 

 

I’m also someone who will buy coffee for other people. One of my supervisors and I now have a tradition of taking turns buying one another coffee (it cuts down on queue time). However, I tend to do this at known and reliable cafes. Last year, on a visit to Germany, someone kindly offered to buy me a coffee from the brand-new barista coffee cart at the canteen (which has since disappeared). Somehow, my ‘black coffee’ request turned into a burnt shot of espresso. I’m one of those people who don’t really find bitter tastes unpleasant (evolutionarily speaking, I should be long dead - being repulsed by bitter tastes is protective), but this coffee was genuinely nasty, and honestly I’m glad the barista coffee cart disappeared from the IPP canteen. The saving grace, or so I thought, was that the espresso came with a piece of chocolate on the side. Biting into it to wash away the taste of the coffee - and not offend the person who bought it for me, who was also talking to me about job opportunities - I realised that it was in fact a chocolate-covered coffee bean. Not wanting to hurt my chances at a job in the research group, I ate it with a smile. 

 

I think sitting down with a hot drink (usually coffee) and a biscuit is what drives a lot of modern science. Even in this day and age when we mostly communicate via email, sitting face to face with our colleagues is so important. Taking half an hour for a genuine human connection over a cup of coffee isn’t a waste of time, especially for students, and should be encouraged. All great scientific ideas start somewhere, and I’d bet that the majority of them started over a cup of coffee. 

How to handle your PhD: Part the second

Welcome back to part two of this blog+YouTube video double-whammy. If you haven't read and watched Part One, make sure you go back and do so. And don't forget to watch part two of the video!

And without further ado, let's continue the information dump on PhD-ing!

Science communication and outreach

Communicating your science is a really important skill to learn. Both David and I recommend getting involved in science communication (scicomm for short). The skills you pick up communicating your science to a general audience will also help you communicate your science to other researchers. We also need to be able to explain to the public and funding agencies what we do, why we do it, why it’s important and most importantly why they should be excited about it.

Some PhD students want to do science communication and outreach, but their supervisors may dismiss this as a waste of their time. In this situation, you may want to have a discussion with your supervisor about the benefits of scicomm, and also to point out that you want to be a scientist who communicates science, not necessarily the next Neil DeGrasse Tyson or Brian Cox (who largely prioritise entertaining communication over their scientist role), although there’s nothing wrong with aiming to be the next Face of Science!

Some examples you could bring up include: 

  • Scicomm improves your communication skills for conferences and grant applications
  • It builds your profile as a researcher
  • It helps build your research group’s profile and publicises your research
  • Some scicomm opportunities include a monetary reward (David won $1000 through a science communication competition early in his PhD!)

Some words of caution, though: if you find your scicomm time is cutting into your research time, you may need to evaluate how effectively you are spending your time. If you are enjoying the scicomm far more than the research itself, consider looking into graduate programs that specialise in scicomm. It’s also a good idea to include scicomm experience in your CV, but make sure you are selective about the activities you choose. Adding every single thing you’ve done can look like excessive CV padding, so pick the particularly significant events you’ve been involved with. In David’s case, this might be the times he’s won prizes. In my case, I highlight things like my involvement in the Cassini mission grand finale rather than every podcast I’ve done.

Here's a gratuitous picture of Saturn from the Cassini mission. Science!


 

Those who can, do. Those who need money while doing it, teach.

Like scicomm, teaching helps build communication skills, as well as leadership skills, and it is a good way to attract new students to your research group. If you plan to stay in academia, the majority of academic roles will involve teaching in some capacity, whether that’s teaching lecture courses or supervising students. Building teaching experience as a student is a good idea, as this is likely to be one of the only times in your career you will have ‘spare time’ before it gets eaten up by writing grant applications, supervising students and dealing with a growing mountain of emails. And, as a student, teaching a class can be a welcome change from your usual desk or workbench!

Some jobs also require you to have some teaching experience and write what is known as a teaching statement, which will describe your teaching style and how it helps students learn. For this reason, it is a good idea to request student assessments of your teaching if your university offers this.

If you do want to pursue a research career, do consider trying to co-supervise an undergraduate or masters research student. Often, nobody will teach you how to be a good supervisor, but by working together with your own supervisor to mentor a younger student you can start to develop the skills you need.

There are some teaching qualifications you can obtain for university teaching. Individual universities often have courses you can take (they are frequently listed as ‘professional development’ courses and they are usually open to PhD students).

As with scicomm, you may sometimes find that teaching overwhelms your research. If this is the case, it may be a good time to re-evaluate your workload together with your supervisor. At the end of the day, it’s your research that gets you your PhD, not your teaching.

 

When it starts to go wrong - how to deal with problems

The kinds of problems that can impair the progress of your PhD are many and varied and there’s no way we can cover each one. They can range from your supervisor leaving, to being unable to access certain resources, to personal problems both inside and outside the institution. The key thing to know is what options and resources are available to you to deal with any problems that arise.

Your institution will often have all sorts of support structures for their staff and students: Counsellors, financial aid, a diversity and equity committee and many other services. It’s good to learn about these when you start your PhD, and certainly look for them if a problem does come up that you can’t solve yourself.

Your supervisor will often be the first person you can go to for help with a wide range of problems - this is why it’s important to have a good relationship with your supervisor. In general, you can help your supervisor to help you. Supervisors are often very busy people and are relying on you to be somewhat independent. You can make things more efficient by preparing as much as you can. If you are going to have a big meeting with them, put together an agenda. If you’re coming to them with a problem, try and have as much of the information they will need as you can already to hand.

Sometimes you can find yourself in a situation where the problem has been created by some kind of conflict with your supervisor. This may be due to something like not agreeing on whether or not a paper is ready for submission, but other issues can and do arise. This is why I recommend (and many universities require) having a panel of supervisors who can help mediate in these kind of situations. Another possibility is developing a relationship with a mentor who can step in to mediate if a disagreement occurs.

Choosing a mentor can be a bit tricky, but ideally you should look for

  • Someone in a similar area, but outside your immediate field and supervisory panel, who can provide some perspective on your progress
  • Someone who has a career trajectory you would like to emulate
  • Someone who may be able to provide a reference for you, and whose name will be familiar to hiring panels
  • Someone who can advocate for you if there is a disagreement with your supervisors

During David’s PhD, a compulsory mentor scheme was implemented for PhD students, who were required to meet at least once every six months with a mentor outside their immediate research team. He was offered a choice of mentors and picked someone who he thought would provide a very different opinion to that of his supervisors. This has helped him immensely, and it has been very valuable to have someone who provides a contrasting voice to his supervisors.

One very common problem for PhD students is managing stress. It’s very common for stress and imposter syndrome to spill over into bigger problems like anxiety and depression. You may be surprised to find that others around you struggle with their mental health at times too (and not only with anxiety and depression). The important thing is that if you are feeling overwhelmed, you ask for help: all universities have counselling facilities and general practitioners, and you can also access help outside the university through community programs. There are more and more online resources, too. Organisations like Livin and the Black Dog Institute (which has a host of excellent resources related to bipolar disorder in particular) are good places to start. Your friends can also be a port of call - in summary, don’t be afraid to reach out if you’re having a tough time!

 

Dealing with the inevitable: failure.

Failures happen, and will happen to everybody at some point. Sooner or later you are going to have to deal with a conference talk or a paper being rejected, not getting a grant you needed, or, in rare cases, the rejection of your thesis. For smaller failures, like the rejection of a paper, it’s important to remember that these things happen to everybody and do not reflect negatively on you as a researcher - move on to the next journal, grant or conference. For bigger things, like a major grant or even your PhD, take time to mourn, then move on. Give yourself 24 hours to feel sad about the situation - maybe even take the rest of the day off and eat ice cream for dinner - but after 24 hours, it’s time to sit back down and work out the next step: how you are going to fix the problem.

One common place people begin to feel “failure” is on receiving reviewer comments on their first paper. You will probably go through something like the five stages of grief: a mix of emotions, along with some anxiety or fear. This is quite normal - even experienced researchers don’t always enjoy reading feedback from a particularly harsh reviewer. Learning to deal with the feelings a reviewer response can provoke is part of the research process. I’ll be writing a blog post in the near future with my tips and advice on peer review!

 

Doing what you came here to do: writing your thesis

The most important thing you can do when it comes to your thesis is start. I don’t think it’s ever too early to open a document, and start writing down bullet points about what you are doing. One other good reason for doing this is so you don’t forget all of the things you have done for your thesis. I also think it’s a good thing to keep everything you write. Even if you think it’s trash, hold onto it in a Google doc - you may find pieces that can be used and rewritten at a later date.

The most important thing is to actually start writing, even if what you write isn’t that good. Your supervisors will help you edit and refine your thesis and papers, and will provide feedback on your writing to help you improve.

You should also schedule some semi-regular time for writing - a skill that often gets de-prioritized in a science undergraduate degree, and hence one you will likely have to work harder at developing. Starting a blog (or taking to a microblogging platform like Twitter) can actually help improve your scientific writing. Another way to improve is to read more scientific papers and theses in your field; this will help you ‘hear’ the kind of voice you need for publications and your thesis.

When it comes to writing, like many aspects of research, you will find that ‘perfect’ is often the enemy of ‘good’ or even ‘finished’. You can spend so long making things just so that you may never actually finish the thing you started. At the end of the day, your PhD thesis has to be finished in order to be submitted, and that doesn’t mean it will be perfect. Perfectionism is a good trait when handled correctly, but it can lead to problems including, in some cases, anxiety and depression. This podcast from the ABC is a great discussion of how to work with perfectionism. If you do find this getting in the way of your work (which has happened to me), you could even work through this resource together with your supervisor.

 

Bonus round: Twitter

This info didn’t end up in the video, but both David and I could talk for months about Twitter. I’ve been using Twitter for almost ten years now, and I’m a certified Twitter addict (I have to turn off notifications or else I spend all day replying to people and scrolling endlessly). David was more reluctant, but looking back wishes he’d started using it earlier.

One really great reason to get on Twitter is related to networking. More and more scientists are getting involved in Twitter, and it’s a fantastic way to communicate with scientists during, before and after conferences in particular. In fact, David and I met briefly at a conference in 2016, followed each other on Twitter, and David has been annoying me ever since (his words, not mine)!

My big recommendation if you do get a Twitter account is to be yourself. Personally, I mostly avoid making political statements, and I would recommend this approach for students. However, getting involved in politics seems to work very well for people like Katie Mack and Twitter user @skullsinthestars. While David and I share science, we also share some details about our lives both related to (and not related to) research, as well as the odd cute animal. One thing I like to try to achieve with Twitter is to communicate some of the science ideas I find interesting, as well as showing that I have a life outside of astronomy and work.

You can follow Fiona at @FiPanther and David at @DRG_physics. Both Twitter feeds are full of cool science, comments on the life of a researcher, and the occasional cute animal.

 

Overall, there can be a lot to consider when you decide to pursue a PhD, but both David and I would agree that it has been mostly a positive experience. While not every minute will be filled with fun, everything you encounter during your PhD can have a positive outcome. Even the hard bits can be viewed as a learning experience and a way to build the resilience you will need for a successful research career.

 

Wishing you luck, free food and kind, empathetic reviewers,

Fiona and David

How to handle your PhD: Part One

Welcome to something a little bit different - this is the first time I’ve worked on a blog post in collaboration with someone else (because I’m a real control freak about content), and what’s more, it’s the first time I’ve written a blog post in conjunction with a YouTube video.

I’d like to introduce you all to David Gozzard. David recently completed his PhD at the University of Western Australia. We met at a conference in 2016, and have stayed in touch (predominantly through Twitter) since then. You can follow David on Twitter here, and don’t forget to check out his blog and YouTube channel too! 

Somehow this is the only photo I have that has both of us in it. I really miss my 2016 haircut now.


 

Earlier this year, David and I recorded two videos’ worth of footage of us talking about some of the ways we’ve navigated our PhD experience, and collecting together some advice we often give students who are just starting out (or even further through) their own PhD. You can check out part one of the video here, and below are some notes based on the video that you can come back to later!

You can watch part one of the YouTube video here

Step One: Choosing your supervisor

I’ve always pointed out that your relationship with your supervisor is likely to be one of the most significant during your PhD (or maybe this is a sad reflection on my life). You will be working with your primary supervisor for at least three years, and during this time you will have successes, failures, agreements and disagreements.

 Consequently, picking a supervisor based on the fact they are just a big name in their field isn’t necessarily the best idea. As I point out in the video, a good litmus test for me was picking a person that I got along with well outside of a research context: someone I could sit down, have a cup of tea or coffee with, and talk about something other than research.

As David points out, choosing a good supervisor for you should play a part in the PhD project you ultimately pick. While it’s a good idea to have a rough idea of the field you’re interested in, a good supervisor can tailor a project to complement both of your strengths.  

Both David and I were lucky enough to have spent time with our respective supervisors prior to committing to a long term supervisor-student relationship. A summer project or internship can be a good opportunity to ‘try before you buy’ with regards to working with a particular supervisor. Failing this, it can be a good idea to talk to a prospective supervisor’s existing students. They will give you insight into what day to day life working with the supervisor is actually like. 

 

Welcome to the first day of the rest of your life...

So you've arrived for your first day as a PhD student. Don't worry, the shine will soon rub off and you'll be just as salty as that one student who's been here for like 7 years in no time. In the meantime, enjoy the abundance of enthusiasm while it lasts.

David strongly recommends checking out online resources from iThinkWell. Run by Hugh Kearns, from Flinders University, these resources have been compiled and developed over many years to help out new PhD students. Hugh has both a Facebook and Twitter page.

When you start your PhD, you may be moving to a new institution - this was my experience. However, the following still rings true if you are remaining at the institution where you got your undergrad degree, like David did: your first week or so should be spent acquainting yourself with other new and existing students, and researchers. The relationships you will develop with other researchers as a PhD student will be quite different to those you had as an undergraduate student. It is likely you will have to identify a supervisory panel of 2-5 people in addition to your primary supervisor, so this is a good time to make connections!

As David says, you need to not only get to know your supervisors, but your fellow students, other researchers and the admin staff too. Everyone around you has a different set of skills and knowledge and can help you. Don’t be afraid to ask questions of these people whether it’s about some technical aspect of research, or how to connect to the workplace printer or how to work the coffee machine. Most people are more than happy to help!

 

Reading papers: why you gotta have time fo' that

So you’ve started your PhD and chosen a supervisor. Chances are, you’re going to be spending a lot of time reading papers to get to know your area of research better. When you first start reading papers it can seem daunting and time consuming, and you will find that you won’t understand everything you read in the first weeks and months (and maybe years), so it’s important to go back to significant papers over the course of your PhD. As David says, you can make notes on what you understood, as he did, and then go back to them months later. This can be a great way to track progress! Another good idea is to seek out PhD theses from your institution and around the world, to get an idea of what a PhD thesis looks like and what might be expected of you.

Reading papers is the perfect way to stay up to date with what is happening in your field, as science never stands still. Getting involved in, or organizing something like, a journal club is a great way to keep up to date with the science that is happening around you. While these can seem like a waste of time, they are a good way to build reading papers into your day or week, and watching and listening to more senior students and academics can give you some tricks for understanding and evaluating papers, and direct you to the papers that are more important.

 

Your PhD is about more than research! Work-Life balance!

This section is definitely more of a case of “do as we say, not as we did”. Both David and I have been guilty of not taking adequate time away from work over the course of our PhDs. For many people, the idea of taking time off can make you feel more stressed, not relaxed like a holiday should. However, it’s also worth remembering that it usually takes five to seven days for your brain to switch into “holiday mode”. It’s very important to make the most of your vacation time and use this time to rest, relax and recuperate. Not only have you earned the rest, you need it!

In academia you will see people who appear to be working every hour God sends, but it’s important to remember that the key word here is ‘appear’. Neither of us has met a person who can sustain that level of work effectively and efficiently, and not taking adequate breaks or holidays ultimately leads to burnout and inefficient work practices. Often, spending more time at work simply leads to spending more time on Facebook or YouTube. Even senior professors don’t and can’t work all day every day! You’re better off spending 6-8 hours actually working at work, and then taking your evenings and weekends off to do things that you find relaxing and enjoyable.

Many students come from the undergrad mindset of assignments that have to be finished by a set time, and many people develop bad habits during this time of staying up until the early hours of the morning to finish work. As a PhD student, it’s more about being able to sustain effort in your work for a full three plus years, and you should view it more like a 9-5 job. You will probably have your own office space - use this to your advantage. Try and only work at work if possible, and if you create a work space at home, for God’s sake move it out of your bedroom! And one more thing: stop checking emails when you leave work for the day, and especially don’t check emails before bed!

But who’s going to remind you to take time off work? As well as holding yourself personally accountable, making friends both at work and outside are important for your mental health and wellbeing. If you’ve moved to a new place, however, this can feel especially daunting. Fortunately, there are now lots of apps and websites for meeting new people for social events, and you’ll soon realise you aren’t alone in this endeavour. A new student in David’s office (and myself, actually) have had a lot of success with Meetup. You can use it for things like sport (think ultimate frisbee, soccer, paddle boarding and rock climbing), but there are usually lots of other groups in your area who might meet for drinks, to go for a hike, play boardgames or gather around a common interest such as photography.

Most universities also have large numbers of student clubs and associations that you will be able to get involved in, and the variety of clubs is usually endless. If you can’t find one for your passion, start one! David also found his university’s Postgraduate Students Association really good - this is a good place to make friends and discuss PhD-related problems with other students who may be further along in their studies, and have a little more experience.

Making friends within your PhD program can seem quite daunting sometimes: you might be coming into a group where people already know each other quite well. I recommend going along to daily or weekly gatherings like morning teas, finding out when people usually gather for lunch and where they do so, as well as going to things like colloquia that might be occurring. When I first moved to ANU, I set myself the challenge of meeting one new person every week. It did take a bit of courage, but I’ve ended up with a fantastic group of PhD buddies. Oh, and don’t restrict yourself to other students! Some of my best friends at work are members of the IT department (always good to have onside in case of a computer meltdown) and other academics, right from emeritus professors down to undergrad students.

 

Time for a trip: Your first conference and how to make the most of it. 

I recently wrote a fairly comprehensive blog post about how to make the most of conferences. I really wrote it in response to a few people asking me how I ended up getting to go to a conference as an invited speaker.

Conferences are an ideal chance to grow your network beyond your institution. If you want to stay in academia, your network will be the support structure for your whole career, and in many ways will shape your career. Make sure you talk to and meet with not only other students, but more senior academics at the conference you go to. Don’t be afraid to approach senior, well known individuals in your field - however, if they are very busy it can be a good idea to contact them prior to the conference and nail down a time to chat for half an hour during the conference.

I would also always recommend giving a talk wherever you can, no matter how early you are in your PhD (I saw someone one month in give a talk at a recent conference). And most importantly, keep your talk to time. This means practicing your talk before the conference, in front of an audience. This last piece of advice was something we really should have thought about when creating these videos and blog posts, so this will conclude part one. You can access part 2 of the video here, and part 2 of the blog is coming soon!

 

Thunderbolts and lightning, very, very frightening...

From a conversation with Roland Crocker and Chris Lidman

 

The summer of 2014-2015 was an interesting one. I spent around two months in Canberra at the end of my honours degree, working on an eight-week summer project that somehow stretched into an entire PhD (the paper describing the work we started that summer, all five pages of it, was published this month in Monthly Notices of the Royal Astronomical Society Letters). It was also a summer where seemingly every night, intense thunderstorms would build up around Canberra, resulting in several young scientists nearly getting electrocuted while playing ultimate frisbee on more than one occasion. After two quieter summers, Canberra is again being treated to dramatic thunderstorms. The city is perfectly situated in a region where hot, dry air from the Australian interior slams into cooler, moister air driven up from the South, which, when combined with strong daytime heating, generates spectacular storm systems that tend to sweep across the city from South-West to North-East.

 

A typical thunderstorm contains a quantity of thermodynamic energy equivalent to around ten 15-kiloton nuclear bombs. A single bolt of lightning involves a potential difference on the order of a hundred million volts, with electrical currents peaking between 10,000 and 40,000 amps. A single lightning bolt heats the air to around 30,000 degrees Celsius, five times hotter than the surface of the sun. Particles in the air become instantly ionized, electrons ripped away by the immense electrical discharge, resulting in the bright flash of light. Heating of the air results in sudden expansion, which generates the sound we hear as thunder. The further away a thunderstorm is, the deeper the rumble can sound. This is because low pitched sounds tend to propagate further than high pitched sounds. If a thunderstorm is very far away, you may be aware of a very deep sound on the edge of your hearing - this is infrasound, vibrations so low pitched that we don’t perceive them as sound, almost more as a feeling. The feeling in your chest when you hear a nearby plane take off (especially something like a C-130, or even the Chinook helicopter) or at a loud concert is one way humans experience infrasound.
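As a rough illustration, these figures can be turned into numbers in a few lines of Python. This is a back-of-the-envelope sketch assuming the standard conversion of 1 kiloton of TNT equivalent to 4.184 trillion joules, and a nominal speed of sound of 343 m/s in warm air:

```python
# Back-of-the-envelope thunderstorm numbers, assuming:
#   - ten 15-kiloton bombs' worth of thermodynamic energy (figure quoted above)
#   - 1 kiloton of TNT equivalent = 4.184e12 joules
KILOTON_TNT_J = 4.184e12  # joules per kiloton of TNT

storm_energy_j = 10 * 15 * KILOTON_TNT_J
print(f"Thermodynamic energy: {storm_energy_j:.2e} J")  # ~6.3e14 J

# A related party trick: the flash arrives essentially instantly, while
# thunder travels at the speed of sound, so counting the seconds between
# flash and bang gives the distance to the strike.
SPEED_OF_SOUND_M_S = 343.0

def strike_distance_km(flash_to_bang_s: float) -> float:
    """Distance to a lightning strike from the flash-to-thunder delay."""
    return SPEED_OF_SOUND_M_S * flash_to_bang_s / 1000.0

print(f"9 s delay -> {strike_distance_km(9):.1f} km away")  # ~3.1 km
```

The flash-to-bang rule of thumb ("three seconds per kilometre") falls straight out of the speed of sound.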

 

Thunderstorms don’t just produce optical light and sound. One way of detecting lightning is through the radio emission associated with the electrical discharge of lightning. However, as we know from astronomy, often where there is emission of radio waves from a high energy process, we also see the emission of gamma rays. In the 1990s, NASA launched the Compton Gamma Ray Observatory to study the emission of gamma rays from the Milky Way galaxy and beyond. The BATSE instrument, which was designed to detect bright flashes of gamma rays from space, called “gamma ray bursts”, started to detect gamma ray flashes from Earth - a phenomenon called “Terrestrial Gamma Ray Flashes” or TGFs. In 1996, a Stanford University study connected the TGFs with intense thunderstorm activity and lightning flashes. More TGFs were subsequently discovered by the RHESSI satellite, named for pioneering gamma ray astronomer (and “father” of positron astrophysics) Reuven Ramaty.

 

The detection of TGFs by RHESSI is interesting in the context of some more recent work. One of RHESSI’s missions was to observe the production and annihilation of positrons (antimatter electrons) in the Solar atmosphere. However, it is RHESSI’s observation of TGFs that led scientists back to studying the production and annihilation of positrons here on Earth. For a long time, it was hypothesised that the bright bursts of gamma rays that make up TGFs may result in pair production: gamma rays interacting with the electromagnetic fields of atomic nuclei give rise to electron-positron pairs. These positrons of course don’t travel far - while a positron in space may live in excess of 10 million years before bumping into an electron and annihilating, positrons produced in the atmosphere are surrounded by far more electron-bearing things (atoms, for example) and annihilate very quickly. However, these annihilations also produce gamma rays, which can be detected by gamma ray satellites. For a long time, it was assumed that positrons produced by thunderstorms mostly came from pair production.
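The annihilation gamma rays have a telltale energy: when a positron and an electron annihilate at rest, each of the two photons produced carries away one electron rest mass of energy. A minimal sketch of that number, using standard values for the electron mass and the speed of light:

```python
# Rest-mass energy of the electron: the characteristic energy of each
# photon produced when an electron-positron pair annihilates at rest.
M_E_KG = 9.1093837e-31       # electron mass, kg
C_M_S = 2.99792458e8         # speed of light, m/s
J_PER_KEV = 1.602176634e-16  # joules per kiloelectronvolt

rest_energy_kev = M_E_KG * C_M_S**2 / J_PER_KEV  # E = m c^2
print(f"Annihilation line energy: {rest_energy_kev:.1f} keV")  # ~511.0 keV
```

This 511 keV line is what gamma ray satellites look for as the signature of positron annihilation, whether in the Galaxy or above a thunderstorm.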

 

Even when we look at extreme environments in outer space, like jets launched by black holes, we aren’t really sure how many positrons we can produce via pair production. In an astrophysical context, we actually think most positrons come from the radioactive decay of a variety of nuclei that are produced by stars and the explosions that occur when stars end their lives. A recent study shows that a lot of positrons created by thunderstorms may also be coming from radioactive decay. TGFs, rather than producing positrons through the pair production mechanism, actually interact with nitrogen atoms in the atmosphere. This “photonuclear” reaction converts a standard, stable nitrogen atom (14N) that has seven protons and seven neutrons in its nucleus into 13N (nitrogen-13). This nucleus has one fewer neutron and is unstable. It will tend to decay through beta+ decay, which produces a positron (and a neutrino), and the nucleus transmutes into that of 13C (carbon-13). Other similar photonuclear reactions involving oxygen can also occur. In this work, recently published in the journal Nature, the authors detect positron annihilation gamma rays associated with the positrons produced in these photonuclear reactions. It makes a fascinating connection between nuclear reactions, lightning and antimatter. What’s more, when you have a hammer that models positron transport and annihilation, it’s another problem that looks like a nail. I’ll keep you posted.
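The timing of this positron signal is set by the decay of 13N, which beta+ decays with a half-life of roughly ten minutes. A quick sketch of the decay curve (taking the half-life as approximately 9.97 minutes):

```python
# Exponential decay of nitrogen-13, the unstable nucleus produced by the
# 14N + gamma -> 13N + n photonuclear reaction described above.
# 13N beta+ decays to 13C with a half-life of roughly ten minutes.
HALF_LIFE_S = 9.97 * 60  # half-life of 13N, in seconds

def fraction_remaining(t_s: float) -> float:
    """Fraction of an initial 13N population still undecayed after t seconds."""
    return 0.5 ** (t_s / HALF_LIFE_S)

# After one half-life, half the nuclei (and hence half the positron
# production) remain; an hour after the flash, almost nothing does.
print(fraction_remaining(HALF_LIFE_S))    # 0.5
print(f"{fraction_remaining(3600):.3f}")  # 0.015
```

So unlike pair-production positrons, which appear and annihilate essentially instantaneously with the TGF, the photonuclear channel keeps producing positrons for minutes after the lightning flash.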

 

(As an end note, I was revisiting a wonderful book by Paul Simons called “Weird Weather”, where the author mentions a 1994 report in Atmospheric Environment about the emission of Krypton-85 by nuclear power stations. This chemically inert but radioactive gas apparently makes the atmosphere conduct electricity more easily, but no study had been done to see if thunderstorms were more common around nuclear power stations. An interesting tidbit to follow up on, perhaps)

Rosencrantz and Guildenstern aren't dead

Cloud chamber photograph of the first positron identified by Carl Anderson. The existence of such a particle (the anti-electron) was mentioned in passing by Dirac in his 1931 paper on quantization of electric charge.


 

2017 was a year of Big Science. The end of the Cassini mission, which was supported by hundreds of people over the years, and the remarkable coming-together of thousands of international scientists to study the first confirmed neutron star-neutron star merger, are two which readily spring to mind (possibly because I have been peripherally involved in both). The 2017 Nobel Prize for physics was awarded to Kip Thorne, Rainer Weiss and Barry Barish, but also recognises the work of the thousands of individuals who make up the LIGO collaboration, which has had phenomenal success detecting gravitational wave transients over the past couple of years. 

 

Advances in technology and ease of collaboration, as well as the collection of vast amounts of data, are making it possible to ask Big Science Questions in astronomy. Questions like

 

What is Dark Matter?

What is Dark Energy?

How did the Universe become reionized?

How do galaxies evolve across cosmic time?

 

need big resources and big teams to solve them. These are the current dominant narratives we see in popular science media, the protagonists of modern astronomy.

 

However, it’s worth remembering that often science happens in the wings of the main production. Sometimes, the science happening off-stage, in the background of the Big Discoveries, can have profound consequences. 

 

Paul Dirac is widely regarded as one of the fathers of modern physics. His wave equation, which marries together special relativity and quantum mechanics, is taught to hundreds of thousands of physics students. The same physics students often hear anecdotes about Dirac himself, who by many standards (although perhaps not by those of academia) was an unusual character. Dirac’s legacy and passion for precision and elegance can be seen in every young physicist who turns to astronomy, and is confronted for the first time with order of magnitude approximations and tildes.

 

The unusual thing about the initial publication that describes the relativistic wave equation (the Dirac equation) is that it contains a rather uncomfortable loose end. In the initial publication, Dirac points out the loose end: specifically, there are solutions which describe a particle with “negative energy”. Like Shakespeare sending Hamlet’s erstwhile childhood friends Rosencrantz and Guildenstern away on business (off stage, of course), it is a loose end that begs some kind of resolution.
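The loose end comes straight from the relativistic energy-momentum relation. Solving it for the energy gives two roots, and the Dirac equation admits solutions corresponding to both signs:

```latex
E^2 = p^2 c^2 + m^2 c^4
\quad\Longrightarrow\quad
E = \pm\sqrt{p^2 c^2 + m^2 c^4}
```

Classically the negative root can simply be discarded, but in quantum mechanics transitions between states are possible, so the negative-energy solutions cannot be waved away; this is the loose end that demanded resolution.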

 

This resolution only came several years later, in the rather philosophical introduction of Dirac’s 1931 paper “Quantised Singularities in the Electromagnetic Field”. As the name of the paper suggests, the search for a physical meaning of these “negative energy” solutions to the Dirac equation is not the main subject of the paper. In the introduction, Dirac muses on the recent developments in fundamental physics, noting that the new paradigm for advances in physics will be the development of abstract mathematical theories, which can then be matched to observed physical phenomena. This is still an avenue of research today, but there are also many physicists (myself included) trying to explain observations, as opposed to hunting down some fragment of evidence to match a cornucopia of mathematical theories.

 

The first description of antimatter* is almost a footnote to the main purpose of Dirac’s 1931 paper, reminiscent of the ambassador’s line that “Rosencrantz and Guildenstern are dead” in the final scene of Hamlet. It tidies up the loose end: the negative energy solutions to the relativistic wave equation describe the anti-electron (the term “positron” was coined a year later by Carl Anderson, the particle’s experimental discoverer). It is presented almost dismissively, with Dirac describing the annihilation probability of such a particle as so high that it probably would not be observed in nature until significant technological advancements were made. The whole idea is de-emphasised, and the writing quickly moves past the point. It’s an interesting treatment of a concept that has possibly raised more questions than it has answered in the past 80-odd years.

 

The concept has been given a life of its own by hundreds of physicists over the years, with questions about antimatter ranging from the existential (why does the universe contain more matter than antimatter?) to the ridiculous (I received an email recently about antimatter stars in the Milky Way). Like Tom Stoppard’s play, most of the science of antimatter has occurred in the wings of the Big Science Narratives, occasionally encroaching onto the main stage. Unlike Tom Stoppard’s play (in my opinion), the study of antimatter in the universe is actually interesting. Of course, I may be a little biased.

 

Lots of blog posts have appeared in recent days about the Big Science Stories that people are looking forward to this year. I’m also looking forward to many of them. But I’m also looking forward to the small science stories. The “huh, that’s weird” stories that make you think again and remind you that while asking the big questions may be important, it’s often the little ones that keep you awake at night. 

 

*The discoverer of antimatter, at least experimentally, is usually stated to be Carl Anderson. However, Soviet academic Dmitri Skobeltsyn, and, independently, Caltech graduate student Chung-Yao Chao found tracks in cloud chambers that behaved as an electron would, except they curved the opposite way. Both observations were dismissed as an error on the part of the experimentalists. I think these two individuals deserve acknowledgement.

Reference:

“Quantised Singularities in the Electromagnetic Field”, P. A. M. Dirac, Proceedings of the Royal Society of London A 133, 60 (1931)