SI Units

Welcome to our glossary dedicated to the exploration of the seven fundamental SI units. This platform is designed to provide a thorough understanding of each of these units, their history, their significance, and their application in the world of science and beyond.
The International System of Units, commonly known as SI units, is a globally accepted system that standardizes measurements for physical quantities. This system is composed of seven fundamental units:

| Name | Symbol | Quantity |
| --- | --- | --- |
| Meter | m | Length |
| Kilogram | kg | Mass |
| Mole | mol | Amount of substance |
| Ampere | A | Electric current |
| Second | s | Time |
| Kelvin | K | Thermodynamic temperature |
| Candela | cd | Luminous intensity |

As you delve into our glossary, you'll discover the fascinating history of each unit, learn about the scientists who played key roles in their establishment, and understand how these units have shaped and continue to shape our understanding of the world around us. Each section of this glossary will not only detail the science behind each unit but will also contextualize its place within the broader history of scientific discovery. Whether you are a student, a teacher, a scientist, or simply someone with an inquisitive mind, this glossary is a valuable resource for your journey into the world of SI units.

The History and Impact of SI Units

The history of units of measurement spans thousands of years and diverse cultures, reflecting humanity's innate need to quantify and understand the world. From the ancient Egyptians measuring land with the cubit to the Greeks using the stadion for distances, the concept of measurement has been fundamental to human progress.

However, a major problem with these early systems was the lack of standardization. Units varied widely from region to region, and even within the same region, different systems might be used for different purposes. This lack of uniformity made trade, scientific study, and technological advancement difficult.

The creation of the metric system in the late 18th century was a major breakthrough. Sparked by the French Revolution's spirit of rationality and equality, the metric system sought to standardize measurements based on nature's constants. For example, the meter was defined as one ten-millionth of the distance from the North Pole to the equator.

Today, all seven SI base units are instead defined by fixing the exact values of seven constants of nature:

| Symbol | Defining Constant | Exact Value | Used to Define |
| --- | --- | --- | --- |
| ΔνCs | Hyperfine transition frequency of caesium | 9192631770 Hz | Second |
| c | Speed of light | 299792458 m/s | Meter |
| h | Planck constant | 6.62607015×10⁻³⁴ J⋅s | Kilogram |
| e | Elementary charge | 1.602176634×10⁻¹⁹ C | Ampere |
| k | Boltzmann constant | 1.380649×10⁻²³ J/K | Kelvin |
| NA | Avogadro constant | 6.02214076×10²³ mol⁻¹ | Mole |
| Kcd | Luminous efficacy of 540 THz radiation | 683 lm/W | Candela |

The birth of the metric system laid the groundwork for the establishment of the International System of Units, or SI, in the mid-20th century. The SI system took the principles of the metric system and expanded them, creating a coherent and universally applicable system. The SI units were carefully defined based on fundamental properties of the universe and were designed to be precise, reproducible, and easy to use.

Furthermore, the SI system has had a significant influence on trade and commerce, making transactions more straightforward and fair. It has also played a role in education, allowing students around the world to learn about physical quantities in a standardized way.

Today, the SI system continues to evolve to meet the needs of modern science and technology. In 2019, the definitions of the kilogram, ampere, kelvin, and mole were updated based on fundamental constants of nature, marking a significant step towards even greater precision and reliability.

Since its creation, the SI system has had a profound impact on society, particularly in the fields of science and technology. It has facilitated international collaboration and communication, enabling scientists from different countries to share their findings and build upon each other's work. It has also underpinned technological advancements, allowing engineers to design and build complex systems with precision and reliability.

The Kilogram - A Fundamental Unit of Mass

Definition

The kilogram is a unit of mass in the International System of Units (SI), and it is defined in terms of three fundamental physical constants. The first is a specific atomic transition frequency, denoted ΔνCs, which defines the duration of the second. The second is the speed of light, represented as 'c', which, combined with the second, defines the length of the meter. The third constant is the Planck constant, denoted 'h', which, combined with the meter and the second, defines the mass of the kilogram. This redefinition, based on fundamental constants of nature, ensures that the kilogram remains a stable and universally applicable unit of mass. It is no longer defined by a physical artifact, as it was prior to 2019. Instead, it is rooted in constants that are intrinsic to the nature of our universe, ensuring a higher degree of accuracy and stability in measurements. The kilogram is widely used in scientific contexts, such as physics and chemistry, as well as in everyday life.

Scientific History of the Kilogram

The kilogram was originally defined as the mass of one liter (a cube of water with sides of 0.1 meters in length) of pure water at the temperature of melting ice (0°C). This was the definition provided in 1795 when the metric system was first introduced by the French.

In 1875, the Treaty of the Metre was signed, creating the International Bureau of Weights and Measures (BIPM), whose task was to provide international control of measurements. Following this, a new standard for the kilogram, the International Prototype of the Kilogram (IPK), was created in 1889. This was a cylinder made of a platinum-iridium alloy, with height and diameter both 39.17 millimeters, stored at the BIPM in Sèvres, France.

Platinum-iridium alloy cylinder that served as the physical definition of the kilogram from 1889 until 2019

For over a century, this physical artifact was the definition of the kilogram, and highly precise replicas were distributed around the world. However, over time, scientists noticed that the masses of the IPK and its replicas were slowly drifting apart.

Given these discrepancies, and to make the definition independent of a physical object, the General Conference on Weights and Measures (CGPM) voted in 2018 to redefine the kilogram based on fundamental physical constants.

Physical Constants Behind the Kilogram

The redefinition of the kilogram, which took effect on May 20, 2019, is based on the Planck constant (h). The Planck constant is a fundamental constant of nature that plays a crucial role in quantum mechanics, relating the energy of a photon to its frequency. Its value is exactly 6.62607015×10⁻³⁴ joule-seconds.

The new definition sets an exact value for the Planck constant in terms of SI units:

h = 6.62607015×10⁻³⁴ kg⋅m²/s

Using this fixed value, the kilogram can be defined indirectly through precise measurements of frequency, voltage, and current in a device called a Kibble balance. This redefinition ensures that the value of the kilogram will remain stable over time, and can be reproduced in different laboratories by following the same method.
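To make the Kibble balance idea concrete, here is a minimal numerical sketch of its balancing relation, in which the electrical power measured in one mode equals the mechanical power in the other (U·I = m·g·v). The voltage, current, and velocity values below are made-up illustrative numbers, not real measurement data; only the structure of the calculation reflects how the balance works.

```python
# Toy illustration of the Kibble-balance relation U * I = m * g * v.
g = 9.80665    # standard gravitational acceleration, m/s^2
v = 0.002      # coil velocity in velocity mode, m/s (assumed value)
U = 0.0196133  # voltage induced across the coil, V (assumed value)
I = 1.0        # balancing current in weighing mode, A (assumed value)

# Electrical power equals mechanical power, so m = U * I / (g * v).
m = U * I / (g * v)
print(f"mass = {m:.6f} kg")  # -> 1.000000 kg with these illustrative numbers
```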

Units Based on the Kilogram

The kilogram, as a fundamental unit of mass in the International System of Units (SI), is the basis for many derived units used in engineering. Here are 20 such units:

  1. Newton (N): SI unit of force. One newton is the force needed to accelerate one kilogram of mass at the rate of one metre per second squared.
  2. Joule (J): SI unit of energy, work, or amount of heat. One joule is equal to the energy transferred when one newton of force moves an object one meter.
  3. Pascal (Pa): SI unit of pressure. One pascal is the pressure exerted by a one-newton force acting on an area of one square meter.
  4. Watt (W): SI unit of power. One watt is equal to one joule per second.
  5. Kilowatt-hour (kWh): A unit of energy. One kilowatt-hour is the energy consumed by one kilowatt of power over one hour.
  6. Pound-force (lbf): A unit of force commonly used in the US customary system of units. It is the force exerted by one pound of mass in a standard gravitational field of 9.80665 meters per second squared.
  7. Kilopascal (kPa): A unit of pressure. One kilopascal is equivalent to a thousand pascals.
  8. Gram (g): A unit of mass. One gram is one-thousandth of a kilogram.
  9. Metric ton (t): A unit of mass. One metric ton is equivalent to 1,000 kilograms.
  10. Kilogram-force (kgf): A unit of force, defined as the force exerted by one kilogram of mass in a 9.80665 m/s² gravitational field.
  11. Milligram (mg): A unit of mass. One milligram is one-millionth of a kilogram.
  12. Kilogram per second (kg/s): A unit of mass flow rate.
  13. Kilogram per mole (kg/mol): A unit of molar mass.
  14. Kilogram-force meter (kgf·m): A gravitational unit of torque.
  15. Kilogram meter per second (kg·m/s): A unit of linear momentum.
  16. Kilogram meter squared (kg·m²): A unit of moment of inertia.
  17. Kilogram meter squared per second (kg·m²/s): A unit of angular momentum.
  18. Kilogram per cubic meter (kg/m³): A unit of mass density or volumetric mass.
  19. Kilogram per liter (kg/L): A unit of density commonly used for liquids.
  20. Kilogram per square meter (kg/m²): A unit of area density, used to characterize thin materials like fabric, foil, or sheet metal.

But this list is far from complete. All units of mass, from the picogram to the imperial ton, are based on the kilogram. Looking to convert between different units of mass? Use our handy unit converter tool.

Kilogram vs Imperial Weight Units

The kilogram is part of the metric system, which is decimal-based and used worldwide for most scientific work. However, the Imperial system, which is used for everyday measurements in a few countries including the United States, uses different units for mass. In the Imperial system, the basic unit of weight is the pound. One kilogram is approximately equal to 2.20462 pounds. Other Imperial units such as ounces, stones, and tons are also sometimes used, and these units do not have a decimal relationship with each other like the metric units.
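As a quick illustration of the conversion factor quoted above, here is a small Python sketch; the function names are our own, and the exact definition 1 lb = 0.45359237 kg is what yields the familiar 2.20462 figure.

```python
# Kilogram <-> avoirdupois pound conversion.
KG_PER_LB = 0.45359237  # one pound is defined as exactly 0.45359237 kg

def kg_to_lb(kg: float) -> float:
    return kg / KG_PER_LB

def lb_to_kg(lb: float) -> float:
    return lb * KG_PER_LB

print(f"{kg_to_lb(1.0):.5f} lb")   # 2.20462 lb
print(f"{lb_to_kg(14.0):.3f} kg")  # one stone = 6.350 kg
```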

The kilogram and pound exemplify the principal difference between the metric and Imperial systems. The metric system is decimal-based, meaning units are scaled by powers of ten. This system is logically consistent and thus is easier to understand, compute, and convert. This is one of the reasons the metric system is used almost universally in scientific contexts and is the official system of measurement for most countries in the world.

In contrast, the Imperial system uses different bases for different units. For instance, there are 16 ounces in a pound, 14 pounds in a stone, and 2,000 pounds in a short ton. This lack of consistency can make the Imperial system more difficult to use for complex calculations and conversions.

That being said, the Imperial system is deeply ingrained in the few countries that still primarily use it, like the United States, and many people in these places have an intuitive understanding of these units from daily use.

The Future of the Kilogram

The recent redefinition of the kilogram is a significant step forward in metrology, the science of measurement. By linking the kilogram to the Planck constant, we have ensured that the definition of this essential unit of mass will remain stable for the foreseeable future, and that it can be realized in any well-equipped laboratory in the world.

Yet, science is never static. Just as our understanding of mass has evolved over centuries, so too will it continue to evolve. As new technologies, materials, and scientific theories emerge, our methods of defining and measuring mass may change again. But for now, the kilogram as defined by the Planck constant represents the pinnacle of our quest to measure mass with ever greater accuracy and universality.

Conclusion

Understanding the kilogram, its history, its definition, and its place in the world of measurement is an integral part of both scientific education and daily life. This unit, which once depended on a physical artifact, now resides in the realm of fundamental constants, reflecting our deepening understanding of the universe. The development of the kilogram has been a journey through the history of science itself, and it will continue to evolve as our knowledge expands. Whether you're dealing with scientific calculations or everyday measures, the humble kilogram is at the heart of our quantifiable world.

The Meter - A Fundamental Unit of Length

Definition

A meter, often spelled "metre" outside of the United States, is the fundamental unit of length in the International System of Units (SI). It is defined as the distance traveled by light in a vacuum over a time interval of 1/299,792,458 of a second. This standard unit is used globally for a wide range of measurements in scientific, educational, and everyday contexts. The meter was initially established in the late 18th century by the French Academy of Sciences. Its original definition was as one ten-millionth of the distance from the equator to the North Pole along a meridian through Paris. Over time, the definition has been refined to its current state to ensure greater precision. In engineering and other technical fields, the meter is crucial for specifying dimensions, calculating forces, and modeling systems. Various derived units, like the kilometer, centimeter, and millimeter, further extend its utility. Conversion factors are used to relate the meter to non-SI units, such as inches, feet, and yards.

The Scientific History of the Meter

The history of the meter as a unit of length is an interesting journey that's deeply intertwined with the progress of science over the past few centuries. It all started in the 18th century during the French Revolution when the French Academy of Sciences was commissioned to create a unified system of measurement. In 1793, the first definition of the meter was established as one ten-millionth of the distance from the equator to the North Pole along a meridian through Paris. This distance was estimated using a survey of the Paris Meridian conducted by Pierre Méchain and Jean-Baptiste Delambre.

The International Prototype Meter, which was used until 1960.

However, this geodetic method of defining the meter proved to be imprecise and difficult to replicate. Therefore, in 1889, the definition was revised and the meter was redefined in terms of a prototype meter bar. This bar, made of a platinum-iridium alloy, was kept at the International Bureau of Weights and Measures (BIPM) in France.

In the early 20th century, with the advent of new scientific methods and understanding, the meter was again redefined. In 1960, the meter was redefined in terms of the wavelength of light emitted by a certain type of krypton atom (krypton-86) when it changes energy states.

The Physical Constant behind the Meter

But the journey didn't stop there. The most recent definition of the meter, established in 1983, is based on the universal constant of the speed of light. According to this definition, a meter is the distance that light travels in a vacuum in 1/299,792,458 of a second. This definition is highly precise and universal, as the speed of light in a vacuum is a fundamental constant in physics.
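The definition turns directly into arithmetic. A short sketch of what it implies:

```python
# The meter from the speed of light: c is fixed at exactly 299,792,458 m/s,
# so one meter is the distance light travels in 1/299,792,458 of a second.
c = 299_792_458  # m/s, exact by definition since 1983

time_per_meter = 1 / c
print(f"light crosses 1 m in {time_per_meter * 1e9:.4f} ns")  # ~3.3356 ns

# Conversely, the distance light covers in one second:
print(f"light travels {c:,} m in 1 s")
```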

The evolution of the meter over the centuries is a testament to the progress of scientific understanding and technological capabilities. It also underscores the importance of consistent and precise units of measurement in facilitating global scientific communication and collaboration.

Other Units based on the Meter

Units derived from the meter cover a broad range of physical quantities and scales. These include measurements of length across different magnitudes, from the tiny scales of nanometers and picometers used in quantum physics and molecular biology, to the larger scales of kilometers used in geography and transportation. Additionally, meters are also used in the definition of units for area and volume, such as the square meter and cubic meter, which measure two-dimensional and three-dimensional space, respectively.

Furthermore, the meter forms an essential part of many compound units used to measure other physical quantities. For instance, the meter per second is the unit of speed or velocity, while the meter per second squared is used for acceleration. Other examples include units of electric and magnetic fields, such as the ampere per meter and the tesla, as well as units for illuminance, energy transfer, and resistivity, among others. These units enable precise measurements and calculations in fields as diverse as physics, engineering, and meteorology, highlighting the versatility and importance of the meter in the SI system.

  1. Kilometer (km): Used to measure longer distances, such as the distance between cities. 1 km equals 1,000 meters.
  2. Centimeter (cm): Used to measure shorter lengths. 1 cm equals 0.01 meters.
  3. Millimeter (mm): Commonly used to measure small lengths like the thickness of a credit card. 1 mm equals 0.001 meters.
  4. Micrometer (μm): Used in scientific applications to measure objects much smaller than a millimeter, such as cells or particles. 1 μm equals 1×10⁻⁶ meters.
  5. Nanometer (nm): Used to measure extremely small dimensions, such as the wavelength of light or the size of molecules. 1 nm equals 1×10⁻⁹ meters.
  6. Square meter (m²): The SI unit of area, used to measure the size of a surface or a flat space.
  7. Cubic meter (m³): The SI unit of volume, used to measure the capacity or size of a three-dimensional space.
  8. Hectare (ha): Used to measure large areas of land. 1 ha equals 10,000 square meters.
  9. Meter per second (m/s): The SI unit of speed or velocity, expressing the distance traveled per unit of time.
  10. Meter per second squared (m/s²): The SI unit of acceleration, expressing the change in velocity per unit of time.
  11. Meter per hour (m/hr): A unit of speed typically used in specific industries, such as drilling or mining, to measure progress over time.
  12. Pascal (Pa): The SI unit of pressure, defined as one newton per square meter.
  13. Watt (W): The SI unit of power, defined as one joule per second, which may be expressed in terms of meters, kilograms, and seconds.
  14. Joule (J): The SI unit of energy, defined as the energy transferred to (or work done on) an object when a force of one newton acts on that object in the direction of its motion through a distance of one meter.
  15. Newton (N): The SI unit of force, defined as the force that will accelerate a one-kilogram mass by one meter per second squared.
  16. Ampere per meter (A/m): This is the SI unit for magnetic field strength, often used in electromagnetism.
  17. Coulomb per square meter (C/m²): The SI unit of electric charge density or electric flux density.
  18. Lux (lx): The SI unit of illuminance, measuring luminous flux per unit area. It is equal to one lumen per square meter.
  19. Watt per square meter (W/m²): This unit is used in physics to measure the rate of energy transfer or flux.
  20. Volt per meter (V/m): This is the SI derived unit for electric field strength.
  21. Ohm meter (Ω·m): Used in electrical engineering, this is the SI unit for resistivity.
  22. Tesla (T): The SI unit of magnetic flux density, defined as one weber per square meter.
  23. Henry per meter (H/m): The SI unit of permeability, used in electromagnetism.
  24. Farad per meter (F/m): The SI unit of permittivity, used in electromagnetism.
  25. Kilogram per cubic meter (kg/m³): The SI unit of density, expressing mass per unit volume.

Want to convert between different units of length? Use our simple conversion tool.
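In lieu of the interactive tool, here is a minimal sketch of how such a converter can work, using the meter as the pivot unit; the unit table and function name are our own illustration, not the tool's actual code.

```python
# Table-driven length conversion: express every unit in meters,
# then any-to-any conversion is one multiply and one divide.
TO_METERS = {
    "mm": 0.001, "cm": 0.01, "m": 1.0, "km": 1000.0,
    "in": 0.0254, "ft": 0.3048, "yd": 0.9144, "mi": 1609.344,
}

def convert_length(value: float, src: str, dst: str) -> float:
    return value * TO_METERS[src] / TO_METERS[dst]

print(convert_length(1, "mi", "km"))   # 1.609344
print(convert_length(100, "m", "yd"))  # ~109.36
```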

The Meter vs. Imperial Units of Length


The meter is the base unit of length in the International System of Units (SI), which is the most widely used system of measurement around the world. It provides a standard, universally recognized unit for measuring length, from tiny subatomic particles to the vast distances between stars. The SI system, including the meter, is decimal-based, meaning it uses multiples of 10. This makes conversions within the system straightforward, as one simply needs to move the decimal point to change between units such as millimeters, centimeters, meters, and kilometers.

On the other hand, the imperial system, which includes units such as the inch, foot, yard, and mile, is used primarily in the United States and for certain specific purposes in a few other countries. The imperial system has its roots in historical measurements, many of which were based on everyday objects or parts of the human body. However, conversions in the imperial system can be more complex because it doesn't use a consistent base like the SI system. For example, there are 12 inches in a foot, 3 feet in a yard, and 1,760 yards in a mile. This inconsistency can lead to more room for errors in calculations and conversions.

While the meter and the imperial units serve the same basic purpose of measuring length, the systems they belong to have notable differences. The SI system, with the meter as its base unit of length, offers a more universally recognized and easier-to-use system due to its decimal structure. Conversely, the imperial system, while still in use in some parts of the world, offers less consistency and can be more difficult to work with due to its irregular conversion factors. Despite these differences, understanding both systems can be valuable, especially in fields such as engineering, manufacturing, and international commerce where both systems might be in use.

How the Meter may change

The meter has evolved in its definition over time, from a fraction of the Earth's circumference to the length of a specific metal bar, and most recently, to a distance light travels in a specific fraction of a second. This evolution reflects our increasing ability to measure length more accurately and consistently, driven by advances in technology and scientific understanding. The current definition, based on the speed of light, is rooted in universal physical constants, and thus is inherently more stable and reproducible than the previous definitions.

Looking to the future, it seems likely that the current definition of the meter will remain relevant for a long time. The reason is that it is based on the speed of light, a fundamental constant of nature, which is not expected to change. Moreover, this definition allows for extremely precise measurements, with accuracy sufficient for the most demanding scientific and engineering applications we know today. The definition is also technology-independent, meaning it doesn't rely on any particular measurement device or method, which makes it resilient against future technological changes.

However, it's important to note that our understanding of the universe is still evolving, and there may be future scientific discoveries or technological advances that necessitate a revision of the meter's definition. For instance, new insights from quantum physics or the discovery of new fundamental physical constants could potentially influence the definition of the meter. However, any change would likely be driven by the need for even greater precision or consistency in measurements, rather than a fundamental flaw with the current definition. Therefore, while it's impossible to say with certainty that the current definition will remain relevant forever, it seems highly likely that it will continue to serve us well for the foreseeable future.

Conclusion

From its origins in the French Revolution to its current definition based on the immutable speed of light, the journey of the meter is a reflection of the evolution of human understanding of the physical world. As our scientific knowledge and technological capabilities have advanced, so too has our ability to measure length with ever-increasing precision. The meter, in its many forms, has been at the center of this journey, providing a consistent, universal standard for measuring length. Its influence extends beyond pure science, playing a crucial role in a wide array of practical applications, from engineering and manufacturing to navigation and mapping.

In an increasingly interconnected and technologically advanced world, the importance of a universally accepted and highly precise system of measurement cannot be overstated. The meter, as the cornerstone of this system, will continue to be integral to scientific discovery, technological innovation, and everyday life. Its journey is far from over, and future advancements in science and technology will only further cement its relevance. As we move forward into an exciting future, the meter will no doubt continue to evolve, adapt, and serve as a testament to human ingenuity and the quest for understanding the universe around us.

The Mole - The Fundamental Unit of Substance

Definition

The mole, denoted by the symbol "mol", is the fundamental unit of amount of substance in the International System of Units (SI). It is defined as exactly 6.02214076×10²³ elementary entities, where the elementary entity may be an atom, a molecule, an ion, an ion pair, or a subatomic particle such as a proton, depending on the substance. This standard unit is used globally for a broad range of calculations in scientific, educational, and everyday contexts. The mole was initially established as the number of atoms in 12 grams of carbon-12, but this definition has been refined for greater precision. In chemistry and other scientific fields, the mole is crucial for specifying amounts of reactants and products in chemical reactions, calculating concentrations of solutions, and determining the molar mass of substances. Various derived units, like the millimole and micromole, further extend its utility. Conversion factors are used to relate the mole to non-SI units, such as the number of atoms or molecules.

The Scientific History of the Mole

The concept of the mole has been in use since the 19th century, but its precise definition has evolved over time. Originally, the term gram-molecule was used to mean one mole of molecules, and gram-atom for one mole of atoms. For instance, 1 mole of MgBr₂ is 1 gram-molecule of MgBr₂ but contains 3 gram-atoms: one of magnesium and two of bromine.

Historically, the mole was defined based on the number of elementary entities in 12 grams of carbon-12. However, this definition changed in 2019 when the International System of Units redefined the mole. The mole is now defined as exactly 6.02214076×10²³ elementary entities, a figure known as the Avogadro number. This redefinition was adopted to increase the precision and consistency of scientific measurements.

The Avogadro number, or Avogadro constant, denoted NA, is the number of particles in one mole. This number is approximately the number of nucleons (protons and neutrons) in one gram of ordinary matter. The Avogadro constant was chosen so that the mass of one mole of a chemical compound, expressed in grams, is numerically equal, to a good approximation, to the average mass of one molecule (or atom) of the compound in daltons.

Portrait of Amedeo Avogadro. © De Agostini Editore/age fotostock

Avogadro's law, proposed in 1811 by Amedeo Avogadro, played a significant role in the history of the mole. The law states that under the same conditions of temperature and pressure, equal volumes of different gases contain an equal number of molecules. This law is approximately valid for real gases at sufficiently low pressures and high temperatures. The number of molecules in one gram-mole of a substance (its molecular weight in grams) is the Avogadro number. The volume occupied by one gram-mole of gas is about 22.4 liters at standard temperature and pressure, and is the same for all gases according to Avogadro's law.

The mole is a fundamental concept in chemistry, used to express amounts of reactants and products in chemical reactions, and the concentration of solutions. The molar mass of a substance is equal to its relative atomic (or molecular) mass multiplied by the molar mass constant, which is almost exactly 1 g/mol. Equivalently, the molar mass of a substance is the ratio of the mass of a sample of that substance to its amount of substance, expressed as the number of moles in the sample. With the definition of the mole tied to the Avogadro constant, the mass of one mole of any substance is NA times the average mass of one of its constituent particles, a physical quantity whose precise value has to be determined experimentally for each substance.
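A short worked example of these relationships, using water; the molar mass is approximate and the variable names are our own.

```python
# Grams -> moles -> particles for water.
N_A = 6.02214076e23  # Avogadro constant, exact (1/mol)
M_H2O = 18.015       # molar mass of water, g/mol (approximate)

mass_g = 36.0
moles = mass_g / M_H2O   # ~2.0 mol
molecules = moles * N_A  # ~1.2e24 molecules

print(f"{moles:.3f} mol of water = {molecules:.3e} molecules")
```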

Other Units based on the Mole

The mole, a fundamental unit of measure in the field of chemistry, is widely used in a variety of contexts and can be combined with other units to provide meaningful and practical measures. The utility of the mole is expanded through its use in a range of derived units, which are applied in various scientific and technical scenarios. Below is a list of 20 such units that incorporate the mole, highlighting its versatility and essential role in quantifying chemical substances:

  1. Mole per litre (mol/L) or molar: This is a unit of concentration used in chemistry for solutions. It represents the number of moles of a substance present in one litre of solution.
  2. Mole per cubic meter (mol/m³): Another unit of concentration, often used in the context of gases.
  3. Mole per kilogram (mol/kg): This is a unit of molality, which measures the number of moles of solute per kilogram of solvent.
  4. Mole percent (mol %): This is a unit of mole fraction, the ratio of the number of moles of a component in a mixture to the total number of moles in the mixture, multiplied by 100.
  5. Mole per cubic decimetre (mol/dm³): This is another unit of concentration, equivalent to mol/L.
  6. Millimole (mmol): This is a submultiple of the mole, where 1 mmol = 0.001 mol.
  7. Micromole (μmol): This is a submultiple of the mole, where 1 μmol = 0.000001 mol.
  8. Nanomole (nmol): This is a submultiple of the mole, where 1 nmol = 0.000000001 mol.
  9. Picomole (pmol): This is a submultiple of the mole, where 1 pmol = 0.000000000001 mol.
  10. Femtomole (fmol): This is a submultiple of the mole, where 1 fmol = 0.000000000000001 mol.
  11. Mole per second (mol/s): This is a unit of reaction rate, representing the number of moles of a substance that reacts or is produced per second.
  12. Mole per hour (mol/h): Similar to mol/s, but over an hour timescale.
  13. Kilomole (kmol): This is a multiple of the mole, where 1 kmol = 1000 mol.
  14. Mole per gram (mol/g): This is a unit of specific amount, representing the number of moles of a substance per gram.
  15. Mole per millilitre (mol/mL): This is another unit of concentration.
  16. Mole per gallon (mol/gal): This is a unit of concentration used in some systems.
  17. Mole per kilomole (mol/kmol): This is a unit of mole fraction.
  18. Mole per pound (mol/lb): This is a unit of specific amount in some systems.
  19. Mole per cubic foot (mol/ft³): This is a unit of concentration, often used in the context of gases.
  20. Mole per cubic inch (mol/in³): This is another unit of concentration, often used in the context of gases.

Beyond units, the mole of course also appears in various engineering equations which explain the world around us. Here is a quick overview of the 10 most important ones, with a short worked example after the list:

  1. Ideal Gas Law: PV = nRT - This equation is used to describe the behavior of an ideal gas, where P is pressure, V is volume, n is number of moles, R is the ideal gas constant, and T is temperature.
  2. Molarity Equation: M = n/V - Used in solution chemistry, this equation calculates the molarity (M) of a solution, given the number of moles of solute (n) and the volume (V) of the solution in liters.
  3. Beer-Lambert Law: A = εbc - Used in spectroscopy, this equation relates the absorbance of light by a solution (A) to the path length of the light (b), the molar absorptivity (ε), and the concentration in moles per liter (c).
  4. Van der Waals Equation of State: (P + a·n²/V²)·(V − n·b) = nRT - A refinement of the ideal gas law for real gases, where n is again the number of moles and the constants a and b correct for intermolecular attraction and molecular volume, respectively.
  5. Raoult's Law: P = P⁰·X - Raoult's law is used to predict the vapor pressure of a solution, where P is the partial vapor pressure of a component, P⁰ is the vapor pressure of that component in its pure form, and X is its mole fraction in the solution.
  6. Dalton's Law of Partial Pressures: Ptotal = ΣPi - Used to calculate the total pressure of a gas mixture, where Ptotal is the total pressure and Pi is the partial pressure of each gas (calculated by nRT/V for each gas, where n is number of moles of each gas).
  7. Henry's Law: P = kH * C - This law describes the solubility of a gas in a liquid under a given pressure, where P is the partial pressure of the gas, kH is the Henry's law constant, and C is the concentration of the gas in moles per liter.
  8. Arrhenius Equation: k = A·e^(−Ea/RT) - This equation is used in chemistry and engineering to model the temperature dependence of reaction rates, where k is the rate constant, A is the pre-exponential factor, Ea is the activation energy, R is the gas constant, and T is the temperature.
  9. Van't Hoff Equation: ln(K₂/K₁) = −(ΔH/R)·(1/T₂ − 1/T₁) - This equation is used to estimate the change in the equilibrium constant of a reaction with temperature, where K₁ and K₂ are the equilibrium constants at temperatures T₁ and T₂ respectively, ΔH is the enthalpy change of the reaction, and R is the gas constant.
  10. Nernst Equation: E = E° − (RT/nF)·ln(Q) - This equation is used in electrochemistry to relate the reduction potential of a half-cell at any point in time to the standard electrode potential, temperature, and reaction quotient, where E is the cell potential, E° is the standard cell potential, R is the gas constant, T is the temperature, n is the number of moles of electrons transferred in the half-cell reaction, F is the Faraday constant, and Q is the reaction quotient.
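As promised above, here is a quick numerical check of the first equation, the ideal gas law, using the roughly 22.4 L molar volume mentioned earlier; the values are rounded and purely illustrative.

```python
# Ideal gas law PV = nRT, solved for n: how many moles of gas occupy
# 22.4 L at standard temperature and pressure (0 degrees C, 1 atm)?
R = 8.314462618  # molar gas constant, J/(mol*K)

P = 101_325  # pressure, Pa (1 atm)
V = 0.0224   # volume, m^3 (22.4 L)
T = 273.15   # temperature, K (0 degrees C)

n = P * V / (R * T)
print(f"n = {n:.3f} mol")  # ~1 mol, matching the molar volume quoted above
```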

What the Mole is used for

The mole serves as a cornerstone in the realm of stoichiometry, which is the study of quantitative relationships in chemical reactions. When chemists carry out reactions, they can use the mole to accurately determine the amount of reactants needed and predict the amount of products that will be produced. This is vital in industries where chemical reactions are regularly carried out, such as pharmaceuticals and manufacturing, to ensure efficiency and avoid wastage of resources. For example, when the balanced chemical equation for a reaction is known, chemists can use the mole ratio of reactants and products to guide the quantities required for the reaction and the expected yield.
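For instance, a mole-ratio calculation for the combustion of hydrogen, 2 H2 + O2 -> 2 H2O, might look like the following sketch; the molar masses are approximate and the scenario is our own illustration.

```python
# How much water can 4.0 g of hydrogen produce via 2 H2 + O2 -> 2 H2O?
M_H2, M_H2O = 2.016, 18.015  # molar masses, g/mol (approximate)

moles_h2 = 4.0 / M_H2         # ~1.98 mol of H2
moles_h2o = moles_h2 * 2 / 2  # mole ratio of H2O to H2 is 2:2
mass_h2o = moles_h2o * M_H2O  # ~35.7 g

print(f"{mass_h2o:.1f} g of water")
```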

Moreover, the mole is instrumental in expressing concentrations of solutions, which is fundamental in fields like biochemistry and environmental science. The molar concentration, defined as the amount of dissolved substance per unit volume of solution, is commonly used and is typically expressed in moles per litre (mol/L). For instance, in assessing the health of a water body, scientists might measure the molar concentration of pollutants, or in a medical lab, the concentration of a specific protein in a patient's blood sample may be determined in mol/L.

Additionally, the concept of the mole is used in determining molecular and atomic masses. By definition, the molar mass of a substance is the mass of a sample of that substance divided by its amount in moles. This concept allows scientists to easily convert between the mass of a substance and the number of particles it contains, which is crucial in many applications in chemistry and physics. For example, knowing the molar mass of a substance can help in identifying unknown substances in a sample by comparing measured molar masses.

The mole also plays a significant role in gas laws, which govern how gases behave under various conditions. One such law, Avogadro's law, states that equal volumes of gases, at the same temperature and pressure, contain an equal number of moles. This understanding allows scientists and engineers to predict and control the behavior of gases in a variety of applications, such as in the design of engines or in the study of the Earth's atmosphere.

Lastly, the mole aids in the definition and understanding of the Avogadro constant, a fundamental constant of nature that relates the number of particles in a system to the amount of substance in moles. This relationship is pivotal in the field of quantum physics, where it is used to calculate quantities at the atomic and subatomic level. The Avogadro constant has also been used in experiments to realize the kilogram, linking the macroscopic world of everyday objects to the microscopic world of atoms and molecules.

The Mole in Engineering and Thermodynamics

In the realm of engineering and thermodynamics, the mole plays a fundamental role. It is an essential unit in the study and application of thermodynamics, specifically in the quantification of the amount of heat and work involved in different processes. The mole is used as a bridge to connect macroscopic properties (like heat and temperature) with microscopic properties (like the kinetic energy of individual particles). This allows engineers and scientists to make sense of phenomena on a scale that is directly observable and relevant to practical applications.

The use of the mole is not confined to pure sciences; it extends to various branches of engineering. Chemical engineering, in particular, heavily relies on the mole concept for the design, operation, and optimization of chemical processes. In the context of chemical reactions, the mole is used to balance equations and determine stoichiometry, thereby enabling engineers to predict the quantities of reactants needed and products formed.

In thermodynamics, the mole is a key player in the concept of molar concentration, also known as molarity. This term is used to express the concentration of a solution by specifying the number of moles of solute present in a liter of solution. Accurate determination of molar concentration is crucial in many engineering applications, such as the design of chemical reactors and the control of process variables in various industries.

Furthermore, the mole is also central to the understanding and application of ideal gas laws, which are foundational in thermodynamics. Engineers often use these laws to model and predict the behavior of gases under varying conditions of temperature, volume, and pressure. Through the mole, they can link these macroscopic properties to the number of gas particles, providing a deeper understanding of gas behavior.

Conclusion

In summary, the mole is an essential unit in chemistry, physics, and engineering that provides a bridge between the macroscopic world we can see and the microscopic world of atoms and molecules. It enables precise measurement and prediction of chemical reactions, as well as the characterization and control of gases. In turn, it allows us to understand and manipulate the natural world, from the design of new medicines to the prediction of atmospheric behavior.

Moreover, the mole concept, alongside Avogadro's constant, enables a profound link between the everyday objects we interact with and the atomic and subatomic particles that constitute them. Through this connection, we can better understand the fundamental structures and behaviors of the universe. In this way, the humble mole serves as a crucial tool in the quest to unlock the secrets of the natural world.

As we continue to refine our scientific understanding and technological capabilities, the mole will no doubt remain an indispensable tool. It's a testament to the power of scientific thinking, how a simple concept can provide such remarkable insight and utility. In the study of both the infinitely small and the infinitely large, the mole will continue to be a fundamental player, shaping our exploration and understanding of the universe around us.

The Ampere - The Fundamental Unit of Current

Definition

The Ampere, often symbolized by the letter "A", is the fundamental unit of electric current in the International System of Units (SI). It is defined as one coulomb of electric charge per second. More precisely, as per the 2019 redefinition of the SI units, it is defined by taking the fixed numerical value of the elementary charge e to be 1.602176634×10⁻¹⁹ when expressed in the unit C, which is equal to A⋅s, where the second is defined in terms of the caesium frequency ΔνCs. More on this below.

This universal unit is applied in numerous calculations across scientific, educational, and everyday contexts. The Ampere forms the basis for understanding electric current flow in circuits and is critical to fields such as electronics, electrodynamics, and electrical engineering. It helps in specifying and controlling the flow of electric charge in devices ranging from high-power machinery to minute electronic components.

Submultiples of the Ampere, such as the milliampere (mA) and microampere (μA), are frequently employed to describe current levels in different contexts. The Ampere, alongside other SI units like the Volt and Ohm, forms the foundation of Ohm's Law and other essential principles in electrodynamics and circuit theory. Various conversion factors are used to correlate the Ampere with non-SI units of electric current.
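As a tiny illustration of how the Ampere and its submultiples appear in circuit calculations via Ohm's Law, consider the following sketch; the voltage and resistance values are arbitrary.

```python
# Ohm's law: V = I * R, so the current is I = V / R.
V = 5.0     # volts
R = 1000.0  # ohms

I = V / R   # amperes
print(f"{I} A = {I * 1e3} mA = {I * 1e6} uA")  # 0.005 A = 5 mA = 5000 uA
```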

The Scientific History of the Ampere

The story of the Ampere as an SI unit traces its roots back to the dawn of the electrical age in the 18th and 19th centuries, during which electricity was an area of fervent exploration. Early scientists had begun to perceive the interplay between electricity and magnetism, as demonstrated by Hans Christian Ørsted’s discovery in 1820 that a compass needle could be deflected by an electric current. This observation was the first real proof of a link between electric currents and magnetic fields, laying the groundwork for the idea of quantifying electric current.

André-Marie Ampère, a French physicist and mathematician, built upon Ørsted's observations and carried out significant investigations in the field of electrodynamics. Within a week of hearing about Ørsted's discovery, Ampère had formulated a mathematical and physical theory to explain the mutual action of electric currents, establishing the foundation of electrodynamics. Ampère proposed that just as the electric current produced a magnetic field, two passing currents would affect each other. This was a groundbreaking discovery and the unit of electric current was named 'Ampere' in his honor.

A key aspect of the Ampere's history is the construction of the first practical method for measuring current—the galvanometer. Invented in 1820 by Johann Schweigger, this device was initially used to detect and measure small electric currents. This rudimentary apparatus, which incorporated a needle to show the strength and direction of a current, provided the practical means necessary for scientists to investigate the properties of electric current and make more precise observations.

As electrical technology progressed and became more central to industrial development in the 19th century, there arose a need for standardizing electrical units. This led to the formation of the International Electrical Congress in the late 19th century, which aimed to create international standards for electrical units. In 1881, the Congress adopted the Ampere as the international unit of electric current. This decision cemented the Ampere's position as a fundamental unit in science and technology, paving the way for international consistency in electrical measurements.

In the early 20th century, as the field of quantum physics was beginning to evolve, the link between electricity and atomic phenomena became clearer. Scientists realized that the flow of electric current was essentially the movement of electrons, subatomic particles carrying a negative elementary charge. This understanding led to the redefinition of the Ampere in terms of elementary charge per second, providing a connection between macroscopic electrical phenomena and the underlying quantum world.

The Ampere's definition underwent a significant change in 2019 with the redefinition of the SI units. Using the exact value of the elementary charge (e), the Ampere was redefined as the electric current corresponding to the flow of 1/(1.602176634×10⁻¹⁹), or roughly 6.24×10¹⁸, elementary charges per second. This definition brought the Ampere into alignment with the other SI units, which are now all defined in terms of physical constants. This marked the latest chapter in the Ampere's history, and this definition remains in use today, continuing to support scientific advancement and technological innovation around the world.

The Physical Constant behind the Ampere

The physical constant behind the Ampere, the elementary charge (denoted as "e"), is a fundamental concept in the realm of physics and chemistry. This constant pertains to the electric charge carried by a single proton, or equivalently, the negative of the electric charge carried by a single electron. The precise value of the elementary charge has been determined through multiple innovative and intricate experiments over the last century, with each revision improving upon the accuracy of this vital physical constant.

One of the key experiments that led to the determination of the elementary charge is the oil-drop experiment conducted by Robert A. Millikan and Harvey Fletcher in the early 20th century. Millikan's experiment was designed to measure the electric charge of tiny oil droplets suspended in an electric field. The outcome of this experiment allowed Millikan to calculate the smallest charge that any droplet carried, leading to the quantification of the elementary charge.

The establishment of the elementary charge as a physical constant was a groundbreaking achievement, providing the basis for our understanding of electric charge at a quantum level. This constant underpins a wide range of phenomena and equations in quantum mechanics and electrodynamics. For instance, it is used in the calculation of the electromagnetic force between charges, given by Coulomb's law.

The elementary charge also plays a critical role in the field of quantum electrodynamics (QED), the theory describing how light and matter interact. This theory, considered one of the most successful in the history of physics, relies heavily on the elementary charge as it helps define the coupling constant, which measures the strength of the electromagnetic interaction. In the context of particle physics, the elementary charge is the basic unit of charge, and particles' electric charges are often given as multiples of this constant.

For decades, the elementary charge was a measured quantity. However, in the 2019 redefinition of the SI units, its value was fixed by definition at exactly 1.602176634×10⁻¹⁹ coulombs. This change was part of a wider shift towards defining all SI units in terms of physical constants, resulting in more robust and future-proof definitions.

The decision to fix the elementary charge's value had significant implications for the definition of the Ampere. Under the new definitions, one Ampere corresponds to the flow of 1/(1.602176634×10⁻¹⁹), or roughly 6.24×10¹⁸, elementary charges per second. This redefinition, while subtle, underscores the intimate relationship between the Ampere and the elementary charge and how deeply interconnected they are in our understanding of the universe.
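That figure makes for a simple back-of-the-envelope calculation; a sketch:

```python
# Number of elementary charges passing per second at a current of 1 A.
e = 1.602176634e-19  # elementary charge in coulombs, exact since 2019

current_a = 1.0  # 1 A = 1 C/s
charges_per_second = current_a / e
print(f"{charges_per_second:.4e} elementary charges per second")  # ~6.2415e18
```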

Other Units based on the Ampere

What the Ampere is used for

Conclusion

The Second - The Fundamental Unit of Time

Definition

The Scientific History of the Second

The Physical Constant behind the Second

Other Units based on the Second

The Second vs. Other units of Time

What the Second is used for

Conclusion

The Kelvin - The Fundamental Unit of Temperature

Definition

The Scientific History of the Kelvin

The Physical Constant behind the Kelvin

Other Units based on the Kelvin

The Kelvin vs. Imperial units of Temperature

What the Kelvin is used for

Conclusion

The Candela - The Fundamental Unit of Luminous Intensity

Definition

The Scientific History of the Candela

The Physical Constant behind the Candela

Other Units based on the Candela

What the Candela is used for

Conclusion