Counting down to the SI redefinition: kelvin and degree Celsius

In case you missed it, the redefinition of the International System of Units (SI) is going into effect this World Metrology Day, 20 May 2019. Each month we are bringing you a blog post featuring one of the units of the SI. This month we are focusing on the kelvin, the SI base unit for thermodynamic temperature. It’s winter in the Northern hemisphere and outdoor temperatures have dropped, so let’s jump in!

Unit of the month – kelvin

Accurate temperature measurement is essential in a wide range of everyday processes, from controlling chemical reactions and food production, to the assessment of weather and climate change. And almost every engineering process depends on temperature – sometimes critically. Knowing the correct temperature is also essential, but much more difficult, in more extreme conditions, like the intensely hot temperatures required to produce steel or the very low temperatures required to use superconductors.

Measuring temperature has a long history. About 2,000 years ago, the ancient Greek engineer Philo of Byzantium came up with what may be the earliest design for a thermometer: a hollow sphere filled with air and water, connected by a tube to an open-air pitcher. The idea was that the air inside the sphere would expand or contract as it was heated or cooled, pushing or pulling water into the tube. Later, people noticed that the air contracted in volume by about one third as the sphere was cooled from the boiling temperature of water to the ice point. This caused people to speculate on what would happen if one could keep cooling the sphere. In the middle of the 19th century, British physicist William Thomson – later Lord Kelvin – also became interested in the idea of ‘infinite cold’, a state we now call the absolute zero of temperature. In 1848, he published a paper, ‘On an Absolute Thermometric Scale’, in which he estimated that absolute zero was approximately −273 °C. In honour of his investigations, we now name the unit of temperature, the kelvin, after him.

When Lord Kelvin carried out his investigations, it was not yet universally accepted that all substances were made out of molecules in ceaseless motion. We now know that temperature is a measure of the average energy of motion of these particles, and absolute zero – zero kelvin – corresponds to the lowest possible temperature, a state where the thermal motion of molecules has ceased.

In 1960, when the SI was established, the temperature of the triple point of water was defined to be 273.16 K exactly. This is the temperature at which (in the absence of air) liquid water, solid water (ice) and water vapour can all co-exist in equilibrium. This temperature was chosen as a standard temperature because it was convenient and highly reproducible. Accordingly, the kelvin was defined to be the fraction 1/273.16 of the temperature of the ‘triple point’ of water. We then measured the temperature of an object by comparing it against the standard temperature. Unusually in the SI, we also defined another unit of temperature, called the degree Celsius (°C). This is related to the kelvin by subtracting 273.15 from the numerical value of the temperature expressed in kelvin.

t(in °C) = T(in K) – 273.15

The reason for this is to make the unit easier to use in the wide variety of applications that had previously used the ‘centigrade’ scale. In our everyday life we are used to expressing temperature in degrees Celsius: on this scale water freezes at about 0 °C and boils at approximately 100 °C. Notice that the conversion from kelvin to degrees Celsius subtracts 273.15, so the triple point of water is 0.01 °C.
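The kelvin–Celsius relationship above is a simple offset, which makes it easy to sanity-check in code. Here is a minimal sketch (the function names are my own, for illustration):

```python
def kelvin_to_celsius(temp_k):
    """Convert a thermodynamic temperature in kelvin to degrees Celsius."""
    return temp_k - 273.15

def celsius_to_kelvin(temp_c):
    """Convert a temperature in degrees Celsius to kelvin."""
    return temp_c + 273.15

# The triple point of water is defined as 273.16 K, i.e. 0.01 degrees Celsius
print(round(kelvin_to_celsius(273.16), 2))  # → 0.01
# Water boils at approximately 100 degrees Celsius, i.e. about 373.15 K
print(round(celsius_to_kelvin(100.0), 2))   # → 373.15
```

Because the two scales differ only by an offset, a temperature *difference* of one kelvin and one degree Celsius are identical.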

With the redefinition, the kelvin will no longer be defined in terms of an arbitrarily chosen reference temperature. Instead, we will define temperatures in terms of the energy of molecular motion. We will do this by taking the value of the Boltzmann constant k to be 1.380 649 × 10⁻²³ exactly when expressed in units of joules per kelvin (J K⁻¹). One joule per kelvin is equal to one kg m² s⁻² K⁻¹, where the kilogram, metre and second are defined in terms of h, c and ∆νCs. So after this redefinition, we will effectively be measuring temperature in terms of the energy of molecular motion. The degree Celsius will be related to the kelvin in the same way as it was before May 2019.
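Once k is fixed, a temperature maps directly onto a molecular energy. As a rough illustration (an ideal-gas sketch of my own, not part of the formal definition), the mean translational kinetic energy of a gas particle is (3/2)kT:

```python
K_BOLTZMANN = 1.380649e-23  # J/K, exact in the revised SI

def mean_translational_energy(temp_kelvin):
    """Mean translational kinetic energy (3/2)kT of an ideal-gas particle, in joules."""
    return 1.5 * K_BOLTZMANN * temp_kelvin

# At the triple point of water, 273.16 K:
print(f"{mean_translational_energy(273.16):.3e} J")  # → 5.657e-21 J
```

Tiny as that number is, it is exactly the kind of quantity the redefined kelvin is anchored to.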

Why is the redefinition of the kelvin important?

For almost all users, the redefinition will pass unnoticed; water will still freeze at 0 °C, and thermometers calibrated before the change will continue to indicate the correct temperature. However, the redefinition opens up the possibility of using new technologies to measure temperature, something that is likely to be of benefit first at extremely high or low temperatures.

Range of temperatures

Coldest natural air temperature measured on Earth (Antarctic) -89.2 °C
Mercury Freezes -38.8 °C
Water freezes (1) 0 °C
Earth surface, average over year, land and ocean 1978 (2) 14.1 °C
Earth surface, average over year, land and ocean 2017 (2) 14.9 °C
Hottest natural air temperature measured on Earth, Furnace Creek, USA 56.7 °C
Water boils (1) 100 °C (actually 99.974 °C)
Molten Steel About 1 600 °C
Surface of the Sun About 5 500 °C
Centre of the Earth (estimate) About 7 000 °C
Centre of the Sun (estimate) About 15 million °C

(1) Changes with altitude; this value applies at sea level.
(2) Average for the year indicated, illustrating the general trend at that time.

Most people use the degree Celsius (°C), where t(in °C) = T(in K) − 273.15.


NTRK Fusion Testing for new therapies: Detecting & managing rare pediatric & adult cancers

Neurotrophic tyrosine receptor kinase (NTRK) genes can become abnormally fused to other genes, resulting in growth signals that can lead to cancer in many organs of the human body. TRK gene fusion-driven cancers are rare but present in both pediatric and adult cancers, including lung, thyroid and colon cancers (see, e.g., Figure 1). Anti-tumor drugs that target NTRK fusions have been shown to be largely effective across many tumor types regardless of patient age (adult or pediatric).

Figure 1: Estimated frequency of NTRK gene fusion in specific tumor types.[1]

New selective tyrosine kinase inhibitors of the tropomyosin receptor kinases TrkA, TrkB, and TrkC are either in development (entrectinib)[2] or recently approved (Vitrakvi®, larotrectinib)[3] for the treatment of locally advanced or metastatic solid tumors with NTRK fusions and no known resistance mutation.[4] However, many traditional solid-tumor NGS targeted assays lack NTRK fusion genes, which makes identifying patients who will benefit from these drugs by NGS testing a challenge. Because patient samples harboring NTRK fusions are also extremely rare, IVD development is hampered and analytical validation according to CAP and CLIA guidelines is compromised.

NGS IVD vendors such as Illumina, Thermo Fisher, Archer, and others are expanding their NGS assays to incorporate RNA fusion analysis for NTRK genes. These new NGS assays will require analytical and clinical validation to support patient testing and eligibility for these anti-tropomyosin TKIs. Highly multiplexed, patient-like reference samples containing NTRK fusion genes will be critical for developing, validating, and clinically running NGS assays on solid tissue biopsies (FFPE) from metastatic solid tumor patients potentially harboring NTRK gene fusions, both for clinical trial stratification and for targeted therapeutic treatment. The availability of designed NTRK quality control materials will immediately help overcome the lack of NTRK patient samples.

Today, the key need is designing and manufacturing solid tumor FFPE RNA NTRK fusion reference standards under ISO 13485 (cGMP) to support clinical testing laboratories looking to bring on board NTRK fusion testing assays as companion diagnostic or complementary tests for these classes of anti-tropomyosin TKIs.

SeraCare, in partnership with Bayer, has recently developed a panel of 15 RNA-based NTRK fusion genes in an FFPE format.[5] This reference standard contains NTRK1, NTRK2, and NTRK3 fusion genes with known actionable fusion partners in the TRK pathway.

Figure 2: List of NTRK fusions in the newly released Seraseq® FFPE NTRK RNA Fusion reference standard.[5]

In conclusion, anti-tropomyosin tyrosine kinase receptor drugs targeting NTRK gene fusions have moved expeditiously from development all the way to the market. This has opened up new opportunities for cancer patients harboring these fusions to access therapeutic drugs that may ultimately address their diseases. To facilitate this, labs require highly multiplexed FFPE NTRK RNA fusion reference standards for end-to-end evaluation of NGS assays, from development to validation and routine QC runs of patient samples. These reference standards provide readily available materials for rapid assay development and give regulators and clinicians confidence that an assay can detect the fusion pairs it claims to detect.

Genomic selection: methods in crop and animal breeding

Genomic selection: 6 factors to consider when choosing between targeted GBS and microarrays

Genomic selection through genotyping is more accurate than conventional breeding methods and promises to revolutionise crop and animal breeding. Gel-based technologies such as restriction fragment length polymorphism (RFLP) analysis and Sanger sequencing were used during the development of this field, followed by microarrays and PCR-based genotyping.

Next generation sequencing (NGS) is now powering the development of more targeted genotyping by sequencing (tGBS) methods, including capture-based enrichment followed by analysis using NGS. The question is, which genotyping solution is right for the challenges you face? Let’s compare the main contenders, arrays and targeted genotyping by sequencing (tGBS), by looking at some key factors that will affect the efficiency of your breeding program.

Can you implement the flexible and scalable marker strategy you need?

The number of markers you need to screen for genomic selection depends on the species and the stage in your breeding cycle. Single nucleotide polymorphism (SNP) discovery involves 10,000–100,000 markers on perhaps as few as 5 samples, whereas the sweet spot for genomic selection is around 1,000–25,000 assays run on approximately 1,000 samples (see Figure 1). Being able to apply different levels of multiplexing using the same technology adds efficiency and consistency to your breeding program.


Figure 1. A typical breeding program involves moving from high coverage of a few samples in SNP discovery to medium multiplex levels for genomic selection.

Certainly arrays of different densities can deliver high and medium capacity SNP analysis, but this technology is very rigid, making it difficult to adapt marker density and composition based on the stage in your breeding program. There are, on the other hand, tGBS methods that can be used to screen up to 100,000 markers per sample but also function efficiently in that mid-plex sweet spot of 500 to 25,000 markers. This gives you the flexibility you need for genomic selection, even when you are working with multiple populations that have different genetic backgrounds.



Can you be cost-effective?

The effective application of genomic selection means screening a large number of samples quickly and efficiently, which can reduce breeding cycles by years. This speeds up time to market for new varieties, giving you a competitive edge. Achieving this requires the right technology as well as cost efficiency. Array technology is lagging behind in terms of flexibility, and the high setup cost can also be daunting. On the other hand, the data output and efficiency of NGS platforms are continually being improved, dramatically reducing the cost of NGS (Figure 2). Already today we can multiplex thousands of samples for tGBS on a single flow cell of even a medium-throughput NGS system.

So basing sample selection on NGS analysis will inevitably drive up throughput while reducing costs. Added to that, highly efficient enrichment methods can reduce day-to-day operation costs even further.



Figure 2. The cost of NGS is falling rapidly. Source:

Can you stay on target?


Using NGS for whole genome sequencing will deliver a relatively low cost per data point, but there are strong arguments for limiting analysis to the specific genomic regions relevant to your study. For example, in most crop genomes, the exome corresponds to only 1–2% of the entire genome. Specifically targeting the regions of interest through capture and sequencing significantly reduces the cost of sequencing and data analysis (see reference 1).

Can you make the most of imputation?

One way to reduce genotyping cost is imputation: the statistical inference of unobserved alleles using known haplotypes drawn from database information, progenitors, and sequenced parental lines. Imputation cuts costs in breeding programs because fewer markers need to be screened directly. Accurate and informative imputation can therefore make breeding strategies much more cost-effective, but this can only be achieved with high-quality data from previously screened populations.
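As a toy illustration of the idea (my own sketch, not any particular breeding software), missing marker calls can be filled in by matching a sample against reference haplotypes from sequenced parental lines and copying alleles from the best match:

```python
def impute_missing(sample, reference_haplotypes):
    """Fill missing marker calls (None) by copying alleles from the
    reference haplotype that agrees best at the observed markers."""
    def agreement(hap):
        return sum(1 for s, h in zip(sample, hap) if s is not None and s == h)
    best = max(reference_haplotypes, key=agreement)
    return [h if s is None else s for s, h in zip(sample, best)]

# Haplotypes from previously screened parental lines (toy data)
reference = [
    ["A", "G", "T", "C"],
    ["A", "C", "T", "T"],
    ["G", "C", "A", "T"],
]
# A sample genotyped at a reduced marker set; two markers unobserved
print(impute_missing(["A", None, "T", None], reference))  # → ['A', 'G', 'T', 'C']
```

Real imputation tools use probabilistic models (e.g. hidden Markov models over haplotype frequencies); this sketch only shows why a dense, high-quality reference panel makes sparse genotyping informative.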

Imputation can be performed from both array and sequencing data. The trick is to select an optimized subset of existing markers. In the case of arrays, these design rounds can be very time-consuming and prohibitively expensive. Added to that, it may be impossible to replace these markers, since they are fixed on an array that may be the result of collaboration between many groups. In contrast, the lower setup costs and flexibility of tGBS make this approach much more attractive when developing imputation panels. With tGBS, any non-informative markers can be quickly and easily exchanged for others that may be more informative in further rounds of screening and imputation.

Does the technology fit into your breeding cycle?

The setup time for an array based on a new set of markers can be considerable, up to six months. In contrast, tGBS approaches can enable a turnaround time of less than 2 weeks, plus 4–6 weeks for the design of a new oligo library, which means you can fit it into a plant breeding cycle and improve selection of the accessions to be transplanted to the field and progressed. The result can be years of savings in development time.

Can you discover de novo variants?

Arrays discriminate targeted SNPs and are, by definition, fixed. Sequencing-based methods such as tGBS, on the other hand, enable the discovery of new SNPs and structural variants in the flanking sequences of targeted SNPs. This increases the amount of genetic information you have at your fingertips, increasing the power of genomic selection. For example, in a study of 500 markers using sequences previously tested on an array, 491 SNP sequences were common between the tGBS library and the array data, while tGBS additionally discovered 5,733 de novo SNPs (2).

How to find the sweet spot with tGBS

As we have seen, exploiting genomic selection will help you produce new varieties faster. But it means finding a sequence-based genotyping solution that can meet your needs in terms of flexibility and cost-efficiency, while enabling you to carry out de novo SNP discovery, imputation, and much more. We will look at one way of achieving this in the last article in this series.

Want to learn more? Download the white paper: SeqSNP tGBS as alternative for array genotyping in routine breeding programs.

About the author: Darshna ‘Dusty’ Vyas

Dusty has been with LGC for the last 6 years working as a plant genetics specialist.

Her career began at the James Hutton Institute, formerly the Scottish Crop Research Institute, developing molecular markers for disease resistance in raspberries. From there Dusty moved on to Biogemma UK Ltd for a period of 13 years, where she worked primarily with cereal crops such as wheat, maize and barley. Through her participation in the Artemisia Project, funded by the Bill and Melinda Gates Foundation, at York University, she gained a vast understanding of the requirements by breeders for varietal development using molecular markers in MAS.

Dusty’s goal is to further breeding programs for global agricultural sustainability using high throughput methods such as SeqSNP.


  1. Torkamaneh D et al. Efficient genome-wide genotyping strategies and data integration in crop plants. Theor Appl Genet 131(3):499–511 (2018)
  2. White paper: SeqSNP tGBS as alternative for array genotyping in routine breeding programs.


This blog post was originally published on the LGC, Biosearch Technologies blog.

Our hungry planet: new tools in agrigenomics are key to food security

Food security is a major global challenge, and traditional methods of plant and animal breeding will not be sufficient to increase production to the level needed to sustain the growing world population. Modern genomics-driven breeding, through analysis based on technologies such as next generation sequencing (NGS) and arrays, is revolutionizing agriculture and making genomic selection a viable approach throughout the industry. In this three-part blog series, find out how technology is changing global food security and what the newest tools bring to the table.

The power of genomic selection

Perhaps the biggest revolution in agriculture in the last decade is the emergence of agrigenomics to enhance traditional breeding programs. Molecular techniques, such as marker assisted selection and genomic selection, have enabled selection of improved varieties without having to rely on assessing visible characteristics. Genomic selection, in particular, addresses the key factors of the breeder’s equation (2) that increase the rate of genetic gain in plant and animal breeding:

  • Reduced breeding cycles – individuals can be progressed faster when selection is based on genotype rather than phenotype alone
  • Greater selection intensity – selecting individuals based on genotype is cheaper than selecting on phenotype, so more individuals can be evaluated (increasing ‘n’)
  • Improved accuracy – the genomic estimated breeding value (GEBV) enables prediction models to select with greater accuracy than selection based on phenotype and pedigree data alone
  • More efficient integration of new genetic material through the development of a training population, in which intensive phenotyping and genotyping can be assessed

Genomic selection has been instrumental in dairy cattle breeding, where it has essentially replaced progeny testing, enabling greater and faster improvements in terms of genetic gain (see, for example, reference 3). Genomic selection has, however, had a relatively slow uptake in plant breeding. Reasons include its relative complexity compared to traditional methods, the need for expensive investment, the complexity of plant genomes, and the bioinformatics capacity required to analyse big data. The divergence of plant and animal breeding has also hindered the translation of methods between these two fields, but this problem is being addressed, and hopefully both animal and plant breeding will in future gain from common insights into genomic selection (1).

Technological development powers the agrigenomics breakthrough

Genomic selection has been made more practical by a range of methods, including next generation sequencing (NGS) and microarrays for genotyping and single nucleotide polymorphism (SNP) analysis. Massive developments in NGS technology in particular have realized the potential of genotyping by sequencing (GBS) and promise to revolutionize the drive to develop crop varieties with desirable traits such as drought tolerance, disease resistance and higher yield.

Despite all these advances, there are still gaps to fill in the toolbox of technologies, and finding the optimal solution for genomic selection can be a demanding process. We will be looking into these issues in the next article in this series.

Make sure you don’t miss the rest of this series by subscribing to our blog!



  1. Hickey JM, Chiurugwi T, Mackay I, Powell W & Implementing Genomic Selection in CGIAR Breeding Programs Workshop Participants. Genomic prediction unifies animal and plant breeding programs to form platforms for biological discovery. Nature Genetics 49:1297–1303 (2017)
  2. Lush JL. Animal Breeding Plans, 2nd edn. The Iowa State College Press (1943)
  3. Thomasen JR et al. Genomic selection strategies in a small dairy cattle population evaluated for genetic gain and profit. J Dairy Sci 97:458–470 (2014)


This blog originally appeared on the LGC, Biosearch Technologies blog.

Keys to Better Liquid Biopsy Assay Sensitivity

“So as everyone here is aware, I’m sure, detection of circulating tumor DNA is challenging. There’s very little of it, to start with.” Hardly a revolutionary statement by Tony Godfrey, PhD, (Associate Chair, Surgical Research and Associate Professor of Surgery, Boston University School of Medicine) but an important acknowledgement from a leading expert of the difficulty faced by laboratorians in unlocking the full promise of liquid biopsy assays. Both he and Bob Daber, PhD, (President and CTO of Genosity) detailed how they overcame common ctDNA challenges in their labs during SeraCare’s AMP Corporate Workshop in San Antonio, Texas. Watch the video to see the practical advice they gave in their presentations.

Bob Daber, with whom we collaborated on our NGS-based assay validation eBook, discussed the challenges that are unique to, or more pronounced in, ctDNA assay validation. Chief among them is limited access to samples for the variety of studies needed to have true confidence in your assay. Drawing on his years of clinical genomics and bioinformatics expertise, including building the tumor sequencing lab at BioReference, Dr Daber talked about how ground-truth data from known-negative and known-positive materials are critical to determining your assay’s sensitivity and specificity, two attributes that take on even greater importance in liquid biopsy.

Tony Godfrey’s presentation highlighted how access to patient-like ctDNA reference standards allowed his team to refine their SiMSen-Seq technology by reducing background error rates, evaluating absolute copy numbers, and improving sensitivity. Dr Godfrey talked about how his team was able to confidently detect 0.1% mutant allele frequency, and how ground-truth reference materials helped them improve performance, something that would not have been possible with the inherent variability and scarcity of remnant specimens.

Both presentations are full of actionable information and instructive data from the speakers’ own labs. Watch the video for free today to learn how you can have more confidence in your liquid biopsy assay.


This blog post was originally published on the SeraCare blog.

Counting down to the SI redefinition: Amperes, amber and EKGs

Electric current, for most of us, seems to appear mysteriously from electrical sockets and gives life to our appliances. Just like the blood in our veins, an electric current is flowing through the “conductive arteries” in our homes, powering the everyday equipment we need. That’s why we consider our unit for the month of January to be a bit of a hero. In fact, in 2019, the ampere – the SI base unit of electric current – will be undergoing some “electrifying” changes.

Today, electricity is so common that most of us rarely give it a second thought. But if it suddenly disappeared, life would quickly become very hard indeed. Lights, television, radio, phones and computers, the washing machine, dishwasher, fridge – a seemingly limitless list of appliances that operate only when electric current is present would no longer be available to us. Try to think how many times you have made use of electricity already today. It was probably at least three before breakfast!

The earliest historical work on electricity comes from the ancient Greeks, who described static electricity. They observed that when amber was rubbed with a piece of fur it began to attract small objects such as hair or dust. In fact, the word “electricity” comes from the Greek word for amber – ηλεκτρον (“electron”). After the phenomenon was first noted, however, this “amber electricity” went unstudied for many years, remaining only a strange curiosity. Work on electricity resumed in the 17th century, but it was only in the 19th century that research in this field started in earnest.

The rapid developments in electricity seen during that time also brought advancements in metrology. Not only was the measurement of electrical quantities easy to implement, but electricity was extraordinarily useful for all kinds of scientific activities – yielding new research, technology and industries. As a result, electrical measurements quickly dominated every field of metrology.

To this day, most quantities are measured by electrical methods – even non-electrical ones like mechanical properties.

With ever-increasing knowledge and understanding of electrical science it also became possible to measure our hero – the ampere – with growing accuracy and precision. The first definition of the ampere was introduced during the International Electrical Congress held in Chicago in 1893, and confirmed by the International Conference in London in 1908. This “international ampere” was an early realization of the ampere we now know, defined as the current that would deposit 1.118 milligrams of silver per second from a silver nitrate solution. Measurements have since revealed that 1 international ampere equals 0.99985 A in today’s units.
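The silver-deposition figure can be checked with Faraday’s law of electrolysis. A quick sketch (the constant values below are modern reference values I am assuming, not part of the original definition):

```python
# Check that depositing 1.118 mg of silver per second corresponds to about 1 A.
FARADAY = 96485.332        # C/mol, Faraday constant
MOLAR_MASS_AG = 107.8682   # g/mol, molar mass of silver
DEPOSIT_RATE = 1.118e-3    # g/s, from the 1893 "international ampere"

# Each silver atom deposited (Ag+ + e- -> Ag) carries one electron's charge,
# so current = (mass rate / molar mass) * Faraday constant
current = DEPOSIT_RATE / MOLAR_MASS_AG * FARADAY
print(f"{current:.4f} A")  # → 1.0000 A
```

The arithmetic lands within a few hundredths of a percent of one ampere, consistent with the 0.99985 A figure quoted above.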

However, this definition had a major disadvantage: it strongly linked the unit to its practical realisation, making it difficult to be certain that measurements made at different times and places were exactly the same. Eventually, after constant development in generating and measuring electric currents, it became clear that there were much better methods to realise the ampere. For this reason, the 9th General Conference on Weights and Measures (CGPM) in 1948 approved a new definition of the ampere, which is still valid today.

Its official wording is as follows:

The ampere is that constant current which, if maintained in two straight parallel conductors of infinite length, of negligible circular cross-section, and placed 1 metre apart in vacuum, would produce between these conductors a force equal to 2 × 10⁻⁷ newton per metre of length.
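This 1948 definition can be checked against Ampère’s force law, F/L = μ₀I₁I₂/(2πd), because in the pre-2019 SI the magnetic constant μ₀ was fixed at exactly 4π × 10⁻⁷ N A⁻². A minimal sketch:

```python
import math

MU_0 = 4 * math.pi * 1e-7  # N/A^2, magnetic constant (exact before the redefinition)

def force_per_metre(i1_amps, i2_amps, distance_m):
    """Force per unit length between two long parallel conductors:
    F/L = mu_0 * I1 * I2 / (2 * pi * d)."""
    return MU_0 * i1_amps * i2_amps / (2 * math.pi * distance_m)

# Two conductors each carrying 1 A, placed 1 m apart:
print(force_per_metre(1.0, 1.0, 1.0))  # ≈ 2e-07 N/m, exactly as the definition states
```

The 2 × 10⁻⁷ figure in the definition is not a measured value: it follows directly from the fixed value of μ₀, which is what made the old ampere exact on paper but hard to realise in practice.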

With the new definition advancement in measurement science accelerated again. Having an accurate and precise way to define the ampere meant metrologists were able to examine different physical phenomena that could be used in construction of more and more accurate measuring instruments. Quantum physics, for example, has been especially fruitful in providing new and innovative solutions for measurement challenges.

Now, however, the practical realisation of the SI base unit of current is far removed from the official definition. The present practical realisation of the ampere is based on the relation of electric current to voltage and resistance. A device called a ‘Josephson junction’ is used to generate the voltage, and the ‘quantum Hall effect’ provides the resistance. Both methods use physical phenomena that are well understood and depend on physical constants – the Josephson constant and the von Klitzing constant respectively (both Josephson and von Klitzing are Nobel prize laureates) – which can in turn be expressed in terms of fundamental constants of nature: e, the elementary charge, and h, the Planck constant. What advantage is there in using these constants? Natural constants remain unchanged no matter what, so using them to define a unit makes that unit measurable throughout the whole universe (the Planck constant is just the same in Copenhagen, Rome or on Mars). Units that depend on constants of nature also assure the long-term stability of measurement standards.
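Because e and h are fixed exactly in the revised SI, the two constants behind these quantum electrical standards follow directly from them. A small sketch using the exact 2019 values:

```python
E_CHARGE = 1.602176634e-19  # C, elementary charge (exact from 20 May 2019)
H_PLANCK = 6.62607015e-34   # J s, Planck constant (exact from 20 May 2019)

# Josephson constant: frequency-to-voltage ratio of a Josephson junction
K_J = 2 * E_CHARGE / H_PLANCK   # Hz/V
# von Klitzing constant: resistance quantum of the quantum Hall effect
R_K = H_PLANCK / E_CHARGE**2    # ohm

print(f"K_J ≈ {K_J:.6e} Hz/V")  # ≈ 4.835978e+14 Hz/V
print(f"R_K ≈ {R_K:.3f} ohm")   # ≈ 25812.807 ohm
```

With these two constants exact, a voltage from a Josephson array and a resistance from a quantum Hall device give a current through Ohm’s law with no reference to force-between-wires experiments at all.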

From 20 May 2019, the official wording of the ampere definition will be:

The ampere, symbol A, is the SI unit of electric current. It is defined by taking the fixed numerical value of the elementary charge e to be 1.602 176 634 × 10⁻¹⁹ when expressed in the unit C, which is equal to A s, where the second is defined in terms of ∆νCs.

As you can hopefully see, the change in definition is radical, and will be a milestone in the field of measurement science. The redefinition will be ready to underpin the challenges of science and technology of the 21st century.

Did you know…

Each of us is a small power station. Our nervous system is an “electric circuit” that constantly sends millions of visual, tactile and auditory stimuli in the form of electrical impulses to our “central unit” – the brain. The brain then processes these electrical impulses so that we can see, hear, taste, smell and sense heat, cold and pain. In return, the brain transmits appropriate electrical impulses back to our body to control it. Thanks to electrical impulses our heart beats! We can consciously walk, run, paint and jump. Whether we like it or not, our January hero – the ampere – is always within us!

The fact that each of us is “charged with current” can be particularly useful in medicine, especially in diagnosing diseases or administering lifesaving treatments. When a heart stops, for example, we use a defibrillator – a piece of equipment that delivers a dose of electric current to the heart. In heart diagnostics we commonly use the electrocardiograph (EKG), which is no more than a very sensitive current measuring instrument. EKG records, analysed by specialists in cardiology, are a powerful tool that can reveal a lot about the condition of a heart.

Christmas, candles and the countdown to the SI redefinition

Following the recent decision, taken by measurement scientists from around the world, to redefine the International System of Units (SI), on the 20th of each month we will be looking at one of the seven SI base units. You’ll be able to find out where it’s used in everyday life, how it’s defined now, and the changes that will come into force on 20 May 2019.


In a first for theatre, the Swan United Electric Light Company was commissioned to create miniature lights which twinkled from wreaths worn by the lead fairies. At the time electric lighting was still cutting edge and the tiny lights – powered by battery packs hidden in costumes – amazed audiences. The term ‘fairy lights’ was born. A year later Edward Johnson, a colleague of Thomas Edison, put fairy lights on a Christmas tree for the first time.


Which brings us to our SI unit of the month: the candela. The light, or luminous intensity, from a single clear indoor fairy light is approximately one candela, regardless of whether you have traditional tungsten filament fairy lamps or modern LED versions.

The candela is the only SI base unit linked to human perception. As the eye cannot see all light colours equally well, being most sensitive to yellow-green light, luminous intensity measures light adjusted for our human sensitivity to different frequencies.

Although the candela will effectively stay the same from 2019, as it is already defined in relation to other base units, its accuracy will be improved by updates to the second (find out more on 20 March) and the metre (see November’s update). The new definition will be:

The candela is defined by taking the fixed numerical value of the luminous efficacy of monochromatic radiation of frequency 540 × 10¹² Hz, K_cd, to be 683 when expressed in the unit lm W⁻¹, which is equal to cd sr W⁻¹, or cd sr kg⁻¹ m⁻² s³, where the kilogram, metre and second are defined in terms of h, c and Δν_Cs.

With rapid innovation in energy efficient lighting, the need for reliable ways to compare the brightness of different light sources has become ever more important. This includes our fairy lights. Clear tungsten fairy lights use about ten times the electricity of modern LED lights, and coloured tungsten fairy lights are even less energy efficient, as most of their light is absorbed by the coloured coating. LEDs, by contrast, use different semiconductor materials to create colours with more visible light per electrical watt. Accurately measuring their luminous intensity allows us to compare the visual appearance of each.
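To get a feel for the fixed constant in the definition above, here is a minimal sketch of the conversion from radiant power to luminous flux at exactly 540 THz (the 10 mW source and the helper function are illustrative assumptions, not measured lamp data – real lamps emit a mix of wavelengths and so have a lower overall efficacy):

```python
# The candela definition fixes the luminous efficacy of monochromatic
# 540 THz (green) radiation at exactly 683 lumens per watt.
K_CD = 683.0  # lm/W, exact by definition

def luminous_flux_540thz(radiant_power_w: float) -> float:
    """Luminous flux (lumens) of a monochromatic 540 THz source."""
    return K_CD * radiant_power_w

# A hypothetical source radiating 10 mW at exactly 540 THz:
flux = luminous_flux_540thz(0.010)
print(f"{flux:.2f} lm")  # 6.83 lm
```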

The lights most people hang on their trees this Christmas will be LEDs. As with most modern lighting, the tiny twinkling bulbs that amazed 19th Century opera fans have been superseded by energy efficient alternatives. And for that we have the candela to thank.

Merry Christmas!

Countdown to the SI redefinition

Throughout history, measurement has been a fundamental part of human advancement. The oldest systems of weights and measures discovered date back over 4,000 years. Early systems were tied to physical objects, like hands, feet, stones and seeds, and were used, as we still do now, for agriculture, construction and trade. Yet, with new inventions and global trade, ever more accurate and unified systems were needed. In Europe, it wasn’t until the 19th century that a universally agreed measurement system began to be adopted, eventually giving rise to the International System of Units (the SI).

Now, after years of hard work and scientific progress, we are ready once again to update and improve the SI units. The redefinition of the International System of Units, enacted on 16 November 2018 at the General Conference on Weights and Measures, will mean that the SI units will no longer be based on any physical objects, but will instead be derived from fundamental properties of nature. Creating a system centred on stable and universal natural laws will ensure the long-term stability and reliability of measurements, and act as a springboard for the future of science and innovation.

The redefinition of the SI units will come into force on 20 May 2019, the anniversary of the signing of the Metre Convention in 1875, an international treaty establishing worldwide cooperation in metrology. To celebrate, we’ll be counting down each of the SI units – the metre, second, kilogram, kelvin, mole, candela, and ampere. Join us on the 20th of every month to find out where units are commonly used, how they’re defined, and the changes that will take place!


“You’ve never heard of the Millennium Falcon? … It’s the ship that made the Kessel run in less than 12 parsecs!” Han Solo’s description of the Millennium Falcon in Star Wars is impressive, but something’s not quite right. Do you know why? The unit he uses to illustrate the prowess of the Falcon – a parsec – isn’t actually a measure of time, but of length! It probably won’t surprise anyone that Han Solo isn’t very precise when it comes to the physics of his ship, but in fact he isn’t too far from the truth. This is because we use time to define length.

metre facts

What does this mean? Well, in the case of Han Solo, one parsec is about 3.26 light-years, and a light-year is the distance light travels in one year. Back down on Earth, we have the same method for defining length. In the International System of Units (SI), the base unit of length is the metre, and it can be understood as:

A metre is the distance travelled by light in 1/299792458 of a second.

The reason we use the distance travelled by light in a certain amount of time is because light is the fastest thing in the universe (that we know of) and it always travels at exactly the same speed in a vacuum. This means that if you measure how far light has travelled in a vacuum in 1/299792458 of a second in France, Canada, Brazil or India, you will always get exactly the same answer no matter where you are!
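The arithmetic behind this can be sketched in a few lines (the light-year and parsec figures are rounded illustrations; only the speed of light is exact):

```python
from fractions import Fraction

# The speed of light in vacuum is fixed exactly by the SI:
C = 299_792_458  # m/s, exact by definition

# Distance light travels in 1/299 792 458 of a second is exactly one metre.
# Exact rational arithmetic avoids any floating-point rounding:
t = Fraction(1, C)
distance = C * t
print(distance)  # 1

# A light-year (using a Julian year of 365.25 days) in metres:
light_year_m = C * 365.25 * 24 * 3600  # ~9.46e15 m
# One parsec is about 3.26 light-years -- hence Han Solo's unit is length:
parsec_m = 3.26 * light_year_m
```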

On 20 May next year the official definition of the metre will change to:

The metre is defined by taking the fixed numerical value of the speed of light in vacuum c to be 299 792 458 when expressed in the unit m s⁻¹, where the second is defined in terms of the caesium frequency Δν_Cs.

We’ll be returning to the definition of the second on 20 March, so join us again then to find out more.

So, what’s the difference? Actually, there’s no big change coming for the metre. Although the wording has been rearranged, the physical concepts remain the same.

Making a mountain out of a Mole Day

Today is Mole Day, chemists’ #1 holiday! Mole Day occurs every year on October 23 from 6:02am to 6:02pm to commemorate Avogadro’s Number and the basic measuring unit of chemistry, the mole.

What is Avogadro’s Number?

Avogadro’s Number is currently defined as the number of atoms in 12 grams of carbon-12, which comes to about 6.02 × 10²³.

Amedeo Avogadro was a 19th century Italian scientist who first proposed, in 1811, that equal volumes of gases contain equal numbers of molecules (known as Avogadro’s Law). Nearly one hundred years later, in 1909, chemists decided to adopt the mole as a unit of measure for chemistry. At the time, the scientists decided to define the mole based on the number of atoms in 12 grams of carbon-12. Jean Baptiste Perrin suggested this number should be named after Avogadro, in honour of his contributions to molecular theory.

Molecules and atoms are very tiny and numerous, which makes counting them particularly difficult. To put it into perspective, an atom is one million times smaller than the width of the thickest human hair. It’s useful to know the precise amount of certain substances in a chemical reaction, but calculating the number of molecules would get very messy if we had to use numbers like 602,214,129,270,000,000,000,000 every time.

Enter Avogadro’s number! Using the mole simplifies complex calculations. Before the mole was adopted, other units were inadequate for measuring such minuscule amounts. After all, one millilitre of water still has 33,456,340,515,000,000,000,000 H₂O molecules!

This doesn’t mean that one mole of different substances is equal in mass or size; a mole simply refers to a number of things, while size and mass vary by object. For example, a mole of water molecules occupies about 18 millilitres, while a mole of aluminium atoms weighs about 26 grams. A mole of pennies, on the other hand, would cover the Earth at a depth of over 400 metres. And a mole of moles would weigh more than half as much as the Moon!
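The figures above are simple back-of-envelope multiplications with the Avogadro number; here is a sketch of the water example (the molar masses are rounded textbook values):

```python
N_A = 6.02214076e23  # Avogadro constant, entities per mole

# Molar mass gives the mass (in grams) of one mole of a substance:
M_WATER = 18.015      # g/mol -> one mole of water is ~18 g (~18 mL)
M_ALUMINIUM = 26.98   # g/mol -> one mole of aluminium atoms is ~27 g

# Number of molecules in 1 mL (~1 g) of water:
molecules_per_ml = (1.0 / M_WATER) * N_A
print(f"{molecules_per_ml:.3e}")  # ~3.3e22, matching the figure above
```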

Why Mole Day?

Schools around the U.S. and other places use the day as a chance to cultivate an interest in chemistry among students. Mole Day goes back to the 1980s, when an article in The Science Teacher magazine proposed celebrating the day. This inspired other teachers to get involved, and a high school chemistry teacher in Wisconsin founded the National Mole Day Foundation in 1991. The American Chemical Society then planned National Chemistry Week so that it falls on the same week as Mole Day every year.

Every year, chemistry teachers use this as an opportunity to perform fun experiments, bake mole-shaped desserts, and teach random facts about Avogadro’s number to students, with the aim of increasing science engagement.

What about the revised SI?

In a previous blog post, we outlined how several of the units of the International System of Units are undergoing a change. For example, the kilogram will no longer be based on a physical artefact, but on a constant. In the case of the mole, the current definition defines one mole as containing as many elementary entities as there are “atoms in 12 grams of carbon-12”. The new definition, which will come into effect next May, simply defines the mole as containing exactly 6.02214076 × 10²³ elementary entities. This eliminates any reference to mass and fixes the exact number of entities as the Avogadro constant, so the mole will no longer depend on any substance’s mass.
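Under the revised definition, converting between an amount of substance and a count of entities is pure arithmetic with the fixed constant; a minimal sketch (the helper names are illustrative):

```python
# The revised SI defines the mole by a fixed count, with no reference to mass:
N_A = 6.02214076e23  # elementary entities per mole, exact

def entities(moles: float) -> float:
    """Number of elementary entities in a given amount of substance."""
    return moles * N_A

def amount(n_entities: float) -> float:
    """Amount of substance, in moles, for a given entity count."""
    return n_entities / N_A

# Half a mole of anything -- atoms, molecules, doughnuts -- is the same count:
print(f"{entities(0.5):.4e}")  # 3.0111e+23
```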

More Mole Facts

A mole of doughnuts would cover the earth in a layer five miles deep!

All of the living cells in a human body make up just over half a mole.

A mole of rice grains would cover all of the land area on Earth at a depth of 75 metres.

A mole of turkeys could form sixteen earths.

Head on over to our Twitter page to tell us what you think about Mole Day (or share more great facts), and to see what everyone is talking about!

Analysis for Innovators: Supporting industry

The Coconut Collaborative Ltd (CCL) manufactures coconut yogurt for the UK and a wide international market. Based on its innovative products and strong market presence, it has become the market-leading coconut brand in the UK.

Quality checks are required to ensure CCL maintains the high quality of product expected by its growing consumer base. Using a barrel of coconut cream tainted by rancidity in the manufacture of coconut yogurt renders the product unsuitable for sale and consumption, leading to complete batches of coconut yogurt being rejected. Checks for rancidity are currently performed manually, with batches of coconut cream being tasted ahead of their use in production. With the growth of the business this is becoming increasingly impractical, but there are currently no automated methods available to test for rancidity.

Through the Analysis for Innovators (A4I) partnership, CCL had access to innovative and advanced measurement and analytical technologies at both the National Measurement Laboratory (NML) and the Science and Technology Facilities Council (STFC) to assess the feasibility of developing a rapid and robust screening approach to detect rancidity in coconut cream.


Supply specialists, engineers and scientists from CCL, the NML and STFC assessed the feasibility of using multispectral imaging (MSI) and Raman spectroscopy to detect traces of rancid coconut cream ahead of its use in the production of coconut yogurt.

Multispectral imaging (MSI) methods showed sufficient sensitivity and repeatability to screen for and detect rancid coconut cream, with a non-destructive test taking no more than 20 seconds. MSI has also been shown to have the potential to be used as a quantitative screening approach to determine the level of rancidity in a sample of coconut cream.

These encouraging results have demonstrated proof of principle for using MSI as the basis for an enhanced level of quality control and screening in CCL’s manufacturing plants. This screening approach will help avoid annual costs in excess of £500k through reduced production and material charges. With further optimisation, MSI could also be used as a predictive tool upstream in the production process, prior to the onset of full rancidity, making further efficiency and cost savings for the industry in general.

In addition, the method has been “future proofed” so that it can also be extended to understand variations in coconut cream consistency between batches, suppliers and even geographic origin, as well as screening for the presence of other undesirable materials which could affect the quality of coconut cream.

This project has allowed CCL to continue to support the growth of its business whilst benefiting from the expertise brought by the collaboration with the NML and STFC.