Welcome to our glossary dedicated to the exploration of the seven fundamental SI units. This platform is designed to provide a thorough understanding of each of these units, their history, their significance, and their application in the world of science and beyond.
The International System of Units, commonly known as SI units, is a globally accepted system that standardizes measurements for physical quantities. This system is composed of seven fundamental units: the second (time), the metre (length), the kilogram (mass), the ampere (electric current), the kelvin (thermodynamic temperature), the mole (amount of substance), and the candela (luminous intensity).
As you delve into our glossary, you'll discover the fascinating history of each unit, learn about the scientists who played key roles in their establishment, and understand how these units have shaped and continue to shape our understanding of the world around us. Each section of this glossary will not only detail the science behind each unit but will also contextualize its place within the broader history of scientific discovery. Whether you are a student, a teacher, a scientist, or simply someone with an inquisitive mind, this glossary is a valuable resource for your journey into the world of SI units.
The history of units of measurement spans thousands of years and diverse cultures, reflecting humanity's innate need to quantify and understand the world. From the ancient Egyptians measuring land with the cubit to the Greeks using the stadion for distances, the concept of measurement has been fundamental to human progress.
However, a major problem with these early systems was the lack of standardization. Units varied widely from region to region, and even within the same region, different systems might be used for different purposes. This lack of uniformity made trade, scientific study, and technological advancement difficult.
The creation of the metric system in the late 18th century was a major breakthrough. Sparked by the French Revolution's spirit of rationality and equality, the metric system sought to standardize measurements based on nature's constants. For example, the meter was defined as one ten-millionth of the distance from the North Pole to the equator.
The birth of the metric system laid the groundwork for the establishment of the International System of Units, or SI, in the mid-20th century. The SI system took the principles of the metric system and expanded them, creating a coherent and universally applicable system. The SI units were carefully defined based on fundamental properties of the universe and were designed to be precise, reproducible, and easy to use.
Furthermore, the SI system has had a significant influence on trade and commerce, making transactions more straightforward and fair. It has also played a role in education, allowing students around the world to learn about physical quantities in a standardized way.
Today, the SI system continues to evolve to meet the needs of modern science and technology. In 2019, the definitions of the kilogram, ampere, kelvin, and mole were updated based on fundamental constants of nature, marking a significant step towards even greater precision and reliability.
Since its creation, the SI system has had a profound impact on society, particularly in the fields of science and technology. It has facilitated international collaboration and communication, enabling scientists from different countries to share their findings and build upon each other's work. It has also underpinned technological advancements, allowing engineers to design and build complex systems with precision and reliability.
The kilogram is a unit of mass in the International System of Units (SI), and it is defined in terms of three fundamental physical constants. The first is the caesium hyperfine transition frequency, denoted ΔνCs, which defines the duration of the second. The second is the speed of light, represented as 'c', which, combined with the second, defines the length of the metre. The third is the Planck constant, denoted 'h', which, combined with the metre and the second, defines the mass of the kilogram. This redefinition, based on fundamental constants of nature, ensures that the kilogram remains a stable and universally applicable unit of mass. It is no longer defined by a physical artifact, as it was prior to 2019. Instead, it is rooted in constants that are intrinsic to the nature of our universe, ensuring a higher degree of accuracy and stability in measurements. The kilogram is widely used in scientific contexts, such as physics and chemistry, as well as in everyday life.
The kilogram was originally defined as the mass of one liter (a cube of water with sides of 0.1 meters in length) of pure water at the temperature of melting ice (0°C). This was the definition provided in 1795 when the metric system was first introduced by the French.
In 1875, the Treaty of the Metre was signed, creating the International Bureau of Weights and Measures (BIPM), whose task was to provide international control of measurements. Following this, a new standard for the kilogram, the International Prototype of the Kilogram (IPK), was created. This was a cylinder made of a platinum-iridium alloy, with height and diameter both 39.17 millimeters, stored at the BIPM in Sèvres, France.
For over a century, this physical artifact was the definition of the kilogram, and highly precise replicas were distributed around the world. However, over time, scientists noticed that the masses of the IPK and its replicas were slowly drifting apart.
Given these discrepancies, and to make the definition independent of a physical object, the General Conference on Weights and Measures (CGPM) voted in 2018 to redefine the kilogram based on fundamental physical constants.
The redefinition of the kilogram, which took effect on May 20, 2019, is based on the Planck constant (h). The Planck constant is a fundamental constant of nature that plays a crucial role in quantum mechanics, relating the energy of a photon to its frequency. Its value is now fixed at exactly 6.62607015×10⁻³⁴ joule-seconds.
The new definition sets an exact value for the Planck constant in terms of SI units:
h = 6.62607015×10⁻³⁴ kg⋅m²/s
Using this fixed value, the kilogram can be defined indirectly through precise measurements of frequency, voltage, and current in a device called a Kibble balance. This redefinition ensures that the value of the kilogram will remain stable over time, and can be reproduced in different laboratories by following the same method.
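As a numerical sketch of this principle (not a model of the Kibble balance itself), fixing h and c exactly ties any mass to an equivalent frequency through the relation m = h·ν/c², which is the conceptual chain the new definition relies on:

```python
# Minimal sketch: with h and c fixed exactly (2019 SI), mass and
# frequency are linked by m = h * nu / c**2.
H = 6.62607015e-34   # Planck constant, kg*m^2/s (exact)
C = 299_792_458      # speed of light, m/s (exact)

def mass_from_frequency(nu_hz: float) -> float:
    """Mass (kg) whose energy equivalent E = h*nu equals m*c^2."""
    return H * nu_hz / C**2

def frequency_from_mass(m_kg: float) -> float:
    """Equivalent frequency (Hz) for a given mass."""
    return m_kg * C**2 / H

# One kilogram corresponds to an enormous equivalent frequency (~1.36e50 Hz):
nu_1kg = frequency_from_mass(1.0)
```

Because both constants are exact by definition, any laboratory evaluating these relations obtains identical values — exactly the reproducibility the redefinition was designed to deliver.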
The kilogram, as a fundamental unit of mass in the International System of Units (SI), is the basis for many derived units used in engineering. Here are 20 such units:
But this list is far from complete. All units of mass, from the picogram to the imperial ton, are based on the kilogram. Looking to convert between different units of mass? Use our handy unit converter tool:
The kilogram is part of the metric system, which is decimal-based and used worldwide for most scientific work. However, the Imperial system, which is used for everyday measurements in a few countries including the United States, uses different units for mass. In the Imperial system, the basic unit of weight is the pound. One kilogram is approximately equal to 2.20462 pounds. Other Imperial units such as ounces, stones, and tons are also sometimes used, and these units do not have a decimal relationship with each other like the metric units.
The kilogram and pound exemplify the principal difference between the metric and Imperial systems. The metric system is decimal-based, meaning units are scaled by powers of ten. This system is logically consistent and thus is easier to understand, compute, and convert. This is one of the reasons the metric system is used almost universally in scientific contexts and is the official system of measurement for most countries in the world.
In contrast, the Imperial system uses different bases for different units. For instance, there are 16 ounces in a pound, 14 pounds in a stone, and 2,000 pounds in a short ton. This lack of consistency can make the Imperial system more difficult to use for complex calculations and conversions.
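A short sketch makes the contrast concrete; the conversion factors below are the ones quoted above (the pound-kilogram factor is rounded):

```python
# Metric: pure powers of ten.
def kg_to_g(kg: float) -> float:
    return kg * 1000

# Imperial: a different base at every step.
LB_PER_KG = 2.20462        # approximate: 1 kg ~ 2.20462 lb

def kg_to_lb(kg: float) -> float:
    return kg * LB_PER_KG

def lb_to_oz(lb: float) -> float:
    return lb * 16          # 16 ounces per pound

def lb_to_stone(lb: float) -> float:
    return lb / 14          # 14 pounds per stone

def lb_to_short_ton(lb: float) -> float:
    return lb / 2000        # 2,000 pounds per short ton
```

The metric side needs only decimal-point shifts, while each Imperial step carries its own factor — the practical difference between the two systems in miniature.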
That being said, the Imperial system is deeply ingrained in the few countries that still primarily use it, like the United States, and many people in these places have an intuitive understanding of these units from daily use.
The recent redefinition of the kilogram is a significant step forward in metrology, the science of measurement. By linking the kilogram to the Planck constant, we have ensured that the definition of this essential unit of mass will remain stable for the foreseeable future, and that it can be realized in any well-equipped laboratory in the world.
Yet, science is never static. Just as our understanding of mass has evolved over centuries, so too will it continue to evolve. As new technologies, materials, and scientific theories emerge, our methods of defining and measuring mass may change again. But for now, the kilogram as defined by the Planck constant represents the pinnacle of our quest to measure mass with ever greater accuracy and universality.
Understanding the kilogram, its history, its definition, and its place in the world of measurement is an integral part of both scientific education and daily life. This unit, which once depended on a physical artifact, now resides in the realm of fundamental constants, reflecting our deepening understanding of the universe. The development of the kilogram has been a journey through the history of science itself, and it will continue to evolve as our knowledge expands. Whether you're dealing with scientific calculations or everyday measures, the humble kilogram is at the heart of our quantifiable world.
A meter, often spelled "metre" outside of the United States, is the fundamental unit of length in the International System of Units (SI). It is defined as the distance traveled by light in a vacuum over a time interval of 1/299,792,458 of a second. This standard unit is used globally for a wide range of measurements in scientific, educational, and everyday contexts. The meter was initially established in the late 18th century by the French Academy of Sciences. Its original definition was as one ten-millionth of the distance from the equator to the North Pole along a meridian through Paris. Over time, the definition has been refined to its current state to ensure greater precision. In engineering and other technical fields, the meter is crucial for specifying dimensions, calculating forces, and modeling systems. Various derived units, like the kilometer, centimeter, and millimeter, further extend its utility. Conversion factors are used to relate the meter to non-SI units, such as inches, feet, and yards.
The history of the meter as a unit of length is an interesting journey that's deeply intertwined with the progress of science over the past few centuries. It all started in the 18th century during the French Revolution when the French Academy of Sciences was commissioned to create a unified system of measurement. In 1793, the first definition of the meter was established as one ten-millionth of the distance from the equator to the North Pole along a meridian through Paris. This distance was estimated using a survey of the Paris Meridian conducted by Pierre Méchain and Jean-Baptiste Delambre.
However, this geodetic survey method of defining the meter proved to be imprecise and difficult to replicate. Therefore, in 1889, the definition was revised and the meter was redefined in terms of a prototype meter bar. This bar, made of a platinum-iridium alloy, was kept at the International Bureau of Weights and Measures (BIPM) in France.
In the early 20th century, with the advent of new scientific methods and understanding, the meter was again redefined. In 1960, the meter was redefined in terms of the wavelength of light emitted by a certain type of krypton atom (krypton-86) when it changes energy states.
But the journey didn't stop there. The most recent definition of the meter, established in 1983, is based on the universal constant of the speed of light. According to this definition, a meter is the distance that light travels in a vacuum in 1/299,792,458 of a second. This definition is highly precise and universal, as the speed of light in a vacuum is a fundamental constant in physics.
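In practice, this definition turns a time measurement into a length; a minimal sketch of that arithmetic:

```python
# Sketch: since 1983 the speed of light is exact by definition, so
# length follows directly from a time-of-flight measurement: d = c * t.
C = 299_792_458  # m/s, exact

def distance_m(travel_time_s: float) -> float:
    """Distance light covers in a vacuum in the given time."""
    return C * travel_time_s

# The metre itself, by construction:
one_meter = distance_m(1 / 299_792_458)
```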
The evolution of the meter over the centuries is a testament to the progress of scientific understanding and technological capabilities. It also underscores the importance of consistent and precise units of measurement in facilitating global scientific communication and collaboration.
Units derived from the meter cover a broad range of physical quantities and scales. These include measurements of length across different magnitudes, from the tiny scales of nanometers and picometers used in quantum physics and molecular biology, to the larger scales of kilometers used in geography and transportation. Additionally, meters are also used in the definition of units for area and volume, such as the square meter and cubic meter, which measure two-dimensional and three-dimensional space, respectively.
Furthermore, the meter forms an essential part of many compound units used to measure other physical quantities. For instance, the meter per second is the unit of speed or velocity, while the meter per second squared is used for acceleration. Other examples include units of electric and magnetic fields, such as the ampere per meter and the tesla, as well as units for illuminance, energy transfer, and resistivity, among others. These units enable precise measurements and calculations in fields as diverse as physics, engineering, and meteorology, highlighting the versatility and importance of the meter in the SI system.
Want to convert between different units of length? Use our simple conversion tool.
The meter is the base unit of length in the International System of Units (SI), which is the most widely used system of measurement around the world. It provides a standard, universally recognized unit for measuring length, from tiny subatomic particles to the vast distances between stars. The SI system, including the meter, is decimal-based, meaning it uses multiples of 10. This makes conversions within the system straightforward, as one simply needs to move the decimal point to change between units such as millimeters, centimeters, meters, and kilometers.
On the other hand, the imperial system, which includes units such as the inch, foot, yard, and mile, is used primarily in the United States and for certain specific purposes in a few other countries. The imperial system has its roots in historical measurements, many of which were based on everyday objects or parts of the human body. However, conversions in the imperial system can be more complex because it doesn't use a consistent base like the SI system. For example, there are 12 inches in a foot, 3 feet in a yard, and 1,760 yards in a mile. This inconsistency can lead to more room for errors in calculations and conversions.
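The irregular chain described above can be walked in code; the inch-metre factor used here is the internationally agreed exact value (1 in = 0.0254 m):

```python
# Imperial length chain: 12 in/ft, 3 ft/yd, 1,760 yd/mi.
M_PER_INCH = 0.0254  # exact by international agreement

def feet_to_inches(ft: float) -> float:
    return ft * 12

def yards_to_feet(yd: float) -> float:
    return yd * 3

def miles_to_yards(mi: float) -> float:
    return mi * 1760

def inches_to_m(inches: float) -> float:
    return inches * M_PER_INCH

# One mile, converted step by step down the whole chain:
mile_m = inches_to_m(feet_to_inches(yards_to_feet(miles_to_yards(1))))  # ~1609.344 m
```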
While the meter and the imperial units serve the same basic purpose of measuring length, the systems they belong to have notable differences. The SI system, with the meter as its base unit of length, offers a more universally recognized and easier-to-use system due to its decimal structure. Conversely, the imperial system, while still in use in some parts of the world, offers less consistency and can be more difficult to work with due to its irregular conversion factors. Despite these differences, understanding both systems can be valuable, especially in fields such as engineering, manufacturing, and international commerce where both systems might be in use.
The meter has evolved in its definition over time, from a fraction of the Earth's circumference to the length of a specific metal bar, and most recently, to a distance light travels in a specific fraction of a second. This evolution reflects our increasing ability to measure length more accurately and consistently, driven by advances in technology and scientific understanding. The current definition, based on the speed of light, is rooted in universal physical constants, and thus is inherently more stable and reproducible than the previous definitions.
Looking to the future, it seems likely that the current definition of the meter will remain relevant for a long time. The reason is that it is based on the speed of light, a fundamental constant of nature, which is not expected to change. Moreover, this definition allows for extremely precise measurements, with accuracy sufficient for the most demanding scientific and engineering applications we know today. The definition is also technology-independent, meaning it doesn't rely on any particular measurement device or method, which makes it resilient against future technological changes.
However, it's important to note that our understanding of the universe is still evolving, and there may be future scientific discoveries or technological advances that necessitate a revision of the meter's definition. For instance, new insights from quantum physics or the discovery of new fundamental physical constants could potentially influence the definition of the meter. However, any change would likely be driven by the need for even greater precision or consistency in measurements, rather than a fundamental flaw with the current definition. Therefore, while it's impossible to say with certainty that the current definition will remain relevant forever, it seems highly likely that it will continue to serve us well for the foreseeable future.
From its origins in the French Revolution to its current definition based on the immutable speed of light, the journey of the meter is a reflection of the evolution of human understanding of the physical world. As our scientific knowledge and technological capabilities have advanced, so too has our ability to measure length with ever-increasing precision. The meter, in its many forms, has been at the center of this journey, providing a consistent, universal standard for measuring length. Its influence extends beyond pure science, playing a crucial role in a wide array of practical applications, from engineering and manufacturing to navigation and mapping.
In an increasingly interconnected and technologically advanced world, the importance of a universally accepted and highly precise system of measurement cannot be overstated. The meter, as the cornerstone of this system, will continue to be integral to scientific discovery, technological innovation, and everyday life. Its journey is far from over, and future advancements in science and technology will only further cement its relevance. As we move forward into an exciting future, the meter will no doubt continue to evolve, adapt, and serve as a testament to human ingenuity and the quest for understanding the universe around us.
The mole, often denoted by the symbol "mol", is the fundamental unit of amount of substance in the International System of Units (SI). It is defined as exactly 6.02214076×10²³ elementary entities, where the elementary entity may be an atom, a molecule, an ion, an ion pair, or a subatomic particle such as a proton, depending on the substance. This standard unit is used globally for a broad range of calculations in scientific, educational, and everyday contexts. The mole was initially established as the number of atoms in 12 grams of carbon-12, but this definition has been refined for greater precision. In chemistry and other scientific fields, the mole is crucial for specifying amounts of reactants and products in chemical reactions, calculating concentrations of solutions, and in determining the molar mass of substances. Various derived units, like the millimole and micromole, further extend its utility. Conversion factors are used to relate the mole to non-SI units, such as the number of atoms or molecules.
The concept of the mole has been in use since the 19th century, but its precise definition has evolved over time. Originally, the term gram-molecule was used to mean one mole of molecules, and gram-atom for one mole of atoms. For instance, 1 mole of MgBr2 is 1 gram-molecule of MgBr2, but it contains 3 gram-atoms (one of magnesium and two of bromine).
Historically, the mole was defined based on the number of elementary entities in 12 grams of carbon-12. However, this definition changed in 2019 when the International System of Units redefined the mole. The mole is now defined as exactly 6.02214076×10²³ elementary entities; this value is known as the Avogadro number. This redefinition was adopted to increase the precision and consistency of scientific measurements.
The Avogadro number, or Avogadro constant, denoted N_A (or L), is the number of particles in one mole. This number is approximately the number of nucleons (protons and neutrons) in one gram of ordinary matter. The Avogadro constant was chosen so that the mass of one mole of a chemical compound, expressed in grams, is approximately the number of nucleons in one constituent particle of the substance. As a result, the molar mass of a compound in grams per mole is numerically equal to the average mass of one of its molecules (or atoms) in daltons.
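Converting between moles and particle counts is a single multiplication or division by this constant; a minimal sketch:

```python
# The Avogadro constant is exact in the 2019 SI.
N_A = 6.02214076e23  # entities per mole

def entities_from_moles(n_mol: float) -> float:
    return n_mol * N_A

def moles_from_entities(count: float) -> float:
    return count / N_A

# Half a mole of any elementary entity:
half_mole = entities_from_moles(0.5)  # ~3.011e23 entities
```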
Avogadro's law, proposed in 1811 by Amedeo Avogadro, played a significant role in the history of the mole. The law states that under the same conditions of temperature and pressure, equal volumes of different gases contain an equal number of molecules. This law is approximately valid for real gases at sufficiently low pressures and high temperatures. The number of molecules in one gram-mole of a substance (its molecular weight expressed in grams) is the Avogadro number. The volume occupied by one gram-mole of gas is about 22.4 liters at standard temperature and pressure, and is the same for all gases according to Avogadro's law.
The mole is a fundamental concept in chemistry, used to express amounts of reactants and products in chemical reactions, and the concentration of solutions. The molar mass of a substance is equal to its relative atomic (or molecular) mass multiplied by the molar mass constant, which is almost exactly 1 g/mol; equivalently, it is the ratio of the mass of a sample of that substance to its amount of substance, expressed as the number of moles in the sample. With the definition of the mole tied to the Avogadro constant, the mass of one mole of any substance is N_A times the average mass of one of its constituent particles – a physical quantity whose precise value has to be determined experimentally for each substance.
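The mass–amount relation above can be sketched with a tiny example; the atomic weights used here are approximate standard values, included only for illustration:

```python
# Approximate standard atomic weights in g/mol (illustrative values).
ATOMIC_WEIGHT = {"H": 1.008, "O": 15.999}

# Molar mass of water, H2O: two hydrogens plus one oxygen.
M_WATER = 2 * ATOMIC_WEIGHT["H"] + ATOMIC_WEIGHT["O"]  # ~18.015 g/mol

def mass_g(n_mol: float, molar_mass_g_per_mol: float) -> float:
    """Mass of a sample = amount of substance * molar mass."""
    return n_mol * molar_mass_g_per_mol

water_sample = mass_g(2.0, M_WATER)  # ~36.03 g for 2 mol of water
```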
The mole, a fundamental unit of measure in the field of chemistry, is widely used in a variety of contexts and can be combined with other units to provide meaningful and practical measures. The utility of the mole is expanded through its use in a range of derived units, which are applied in various scientific and technical scenarios. Below is a list of 20 such units that incorporate the mole, highlighting its versatility and essential role in quantifying chemical substances:
Beyond derived units, the mole is of course also used in various engineering equations that describe the world around us. Here is a quick overview of the 10 most important ones:
The mole serves as a cornerstone in the realm of stoichiometry, which is the study of quantitative relationships in chemical reactions. When chemists carry out reactions, they can use the mole to accurately determine the amount of reactants needed and predict the amount of products that will be produced. This is vital in industries where chemical reactions are regularly carried out, such as pharmaceuticals and manufacturing, to ensure efficiency and avoid wastage of resources. For example, when the balanced chemical equation for a reaction is known, chemists can use the mole ratio of reactants and products to guide the quantities required for the reaction and the expected yield.
Moreover, the mole is instrumental in expressing concentrations of solutions, which is fundamental in fields like biochemistry and environmental science. The molar concentration, defined as the amount of dissolved substance per unit volume of solution, is commonly used and is typically expressed in moles per litre (mol/L). For instance, in assessing the health of a water body, scientists might measure the molar concentration of pollutants, or in a medical lab, the concentration of a specific protein in a patient's blood sample may be determined in mol/L.
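Molar concentration is a simple ratio; a minimal sketch with a hypothetical sample:

```python
def molarity_mol_per_l(n_mol: float, volume_l: float) -> float:
    """Molar concentration: c = n / V, in mol/L."""
    return n_mol / volume_l

# Hypothetical example: 0.5 mol of solute dissolved to 2.0 L of solution.
c = molarity_mol_per_l(0.5, 2.0)  # 0.25 mol/L
```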
Additionally, the concept of the mole is used in determining molecular and atomic masses. By definition, the molar mass of a substance is the mass of a sample of that substance divided by its amount in moles. This concept allows scientists to easily convert between the mass of a substance and the number of particles it contains, which is crucial in many applications in chemistry and physics. For example, knowing the molar mass of a substance can help in identifying unknown substances in a sample by comparing measured molar masses.
The mole also plays a significant role in gas laws, which govern how gases behave under various conditions. One such law, Avogadro's law, states that equal volumes of gases, at the same temperature and pressure, contain an equal number of moles. This understanding allows scientists and engineers to predict and control the behavior of gases in a variety of applications, such as in the design of engines or in the study of the Earth's atmosphere.
Lastly, the mole aids in the definition and understanding of the Avogadro constant, a fundamental constant of nature that relates the number of particles in a system to the amount of substance in moles. This relationship is pivotal in quantum physics, where it is used to calculate quantities at the atomic and subatomic level. The Avogadro constant also played a role in the 2019 redefinition of the kilogram, through the silicon-sphere experiments that helped fix the Planck constant, linking the macroscopic world of everyday objects to the microscopic world of atoms and molecules.
In the realm of engineering and thermodynamics, the mole plays a fundamental role. It is an essential unit in the study and application of thermodynamics, specifically in the quantification of the amount of heat and work involved in different processes. The mole is used as a bridge to connect macroscopic properties (like heat and temperature) with microscopic properties (like the kinetic energy of individual particles). This allows engineers and scientists to make sense of phenomena on a scale that is directly observable and relevant to practical applications.
The use of the mole is not confined to pure sciences; it extends to various branches of engineering. Chemical engineering, in particular, heavily relies on the mole concept for the design, operation, and optimization of chemical processes. In the context of chemical reactions, the mole is used to balance equations and determine stoichiometry, thereby enabling engineers to predict the quantities of reactants needed and products formed.
In thermodynamics, the mole is a key player in the concept of molar concentration, also known as molarity. This term is used to express the concentration of a solution by specifying the number of moles of solute present in a liter of solution. Accurate determination of molar concentration is crucial in many engineering applications, such as the design of chemical reactors and the control of process variables in various industries.
Furthermore, the mole is also central to the understanding and application of ideal gas laws, which are foundational in thermodynamics. Engineers often use these laws to model and predict the behavior of gases under varying conditions of temperature, volume, and pressure. Through the mole, they can link these macroscopic properties to the number of gas particles, providing a deeper understanding of gas behavior.
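The link between moles and macroscopic gas behavior is the ideal gas law, PV = nRT; a minimal sketch reproducing the roughly 22.4 L molar volume quoted earlier:

```python
# Ideal gas law: P*V = n*R*T  =>  V = n*R*T / P
R = 8.314462618  # molar gas constant, J/(mol*K)

def volume_l(n_mol: float, temp_k: float, pressure_pa: float) -> float:
    """Volume in litres of n moles of an ideal gas."""
    return n_mol * R * temp_k / pressure_pa * 1000  # m^3 -> L

# One mole at standard temperature and pressure (0 degC, 1 atm):
v_stp = volume_l(1.0, 273.15, 101_325)  # ~22.4 L
```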
In summary, the mole is an essential unit in chemistry, physics, and engineering that provides a bridge between the macroscopic world we can see and the microscopic world of atoms and molecules. It enables precise measurement and prediction of chemical reactions, as well as the characterization and control of gases. In turn, it allows us to understand and manipulate the natural world, from the design of new medicines to the prediction of atmospheric behavior.
Moreover, the mole concept, alongside Avogadro's constant, enables a profound link between the everyday objects we interact with and the atomic and subatomic particles that constitute them. Through this connection, we can better understand the fundamental structures and behaviors of the universe. In this way, the humble mole serves as a crucial tool in the quest to unlock the secrets of the natural world.
As we continue to refine our scientific understanding and technological capabilities, the mole will no doubt remain an indispensable tool. It is a testament to the power of scientific thinking that such a simple concept can provide such remarkable insight and utility. In the study of both the infinitely small and the infinitely large, the mole will continue to be a fundamental player, shaping our exploration and understanding of the universe around us.
The Ampere, often symbolized by the letter "A", is the fundamental unit of electric current in the International System of Units (SI). It corresponds to one coulomb of electric charge per second. More precisely, as per the 2019 redefinition of the SI units, it is defined by taking the fixed numerical value of the elementary charge e to be 1.602176634×10⁻¹⁹ when expressed in the unit C, which is equal to A⋅s, where the second is defined in terms of the caesium frequency ΔνCs. More on this below.
This universal unit is applied in numerous calculations across scientific, educational, and everyday contexts. Ampere forms the basis for understanding electric current flow in circuits and is critical to fields such as electronics, electrodynamics, and electrical engineering. It helps in specifying and controlling the flow of electric charge in devices ranging from high power machinery to minute electronic components.
Submultiples of the Ampere, such as the milliampere (mA) and microampere (μA), are frequently employed to describe current levels in different contexts. The Ampere, alongside other SI units like the Volt and Ohm, forms the foundation of Ohm's Law and other essential principles in electrodynamics and circuit theory. Various conversion factors are used to correlate the Ampere with non-SI units of electric current.
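The definitions above reduce to simple arithmetic: current is charge per unit time, and with the elementary charge fixed exactly, a current also corresponds to a count of charge carriers per second. A minimal sketch:

```python
E = 1.602176634e-19  # elementary charge in coulombs, exact since 2019

def current_a(charge_c: float, time_s: float) -> float:
    """Electric current: 1 A = 1 C/s."""
    return charge_c / time_s

def carriers_per_second(current: float) -> float:
    """Number of elementary charges flowing per second."""
    return current / E

# A 1 A current moves ~6.24e18 elementary charges each second:
n = carriers_per_second(current_a(1.0, 1.0))
```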
The story of the Ampere as an SI unit traces its roots back to the dawn of the electrical age in the 18th and 19th centuries, during which electricity was an area of fervent exploration. Early scientists had begun to perceive the interplay between electricity and magnetism, as demonstrated by Hans Christian Ørsted’s discovery in 1820 that a compass needle could be deflected by an electric current. This observation was the first real proof of a link between electric currents and magnetic fields, laying the groundwork for the idea of quantifying electric current.
André-Marie Ampère, a French physicist and mathematician, built upon Ørsted's observations and carried out significant investigations in the field of electrodynamics. Within a week of hearing about Ørsted's discovery, Ampère had formulated a mathematical and physical theory to explain the mutual action of electric currents, establishing the foundation of electrodynamics. Ampère proposed that just as the electric current produced a magnetic field, two passing currents would affect each other. This was a groundbreaking discovery and the unit of electric current was named 'Ampere' in his honor.
A key aspect of the Ampere's history is the construction of the first practical method for measuring current—the galvanometer. Invented in 1820 by Johann Schweigger, this device was initially used to detect and measure small electric currents. This rudimentary apparatus, which incorporated a needle to show the strength and direction of a current, provided the practical means necessary for scientists to investigate the properties of electric current and make more precise observations.
As electrical technology progressed and became more central to industrial development in the 19th century, the need to standardize electrical units grew. This led to the first International Electrical Congress in 1881, which set out to create international standards for electrical units and adopted the Ampere as the unit of electric current. That status was later carried into the International System of Units, cementing the Ampere's position as a fundamental unit in science and technology and paving the way for international consistency in electrical measurements.
In the early 20th century, as the field of quantum physics was beginning to evolve, the link between electricity and atomic phenomena became clearer. Scientists realized that the flow of electric current was essentially the movement of electrons, subatomic particles carrying a negative elementary charge. This understanding would, much later, motivate redefining the Ampere in terms of the elementary charge, providing a connection between macroscopic electrical phenomena and the underlying quantum world.
The Ampere's definition underwent a significant change with the 2019 redefinition of the SI units. Using the exact value of the elementary charge (e = 1.602176634×10⁻¹⁹ C), the Ampere was redefined as the current corresponding to the flow of 1/(1.602176634×10⁻¹⁹), or about 6.241×10¹⁸, elementary charges per second. This definition brought the Ampere into alignment with the other SI units, which are now all defined in terms of physical constants. It marked the latest chapter in the Ampere's history, and this definition remains in use today, continuing to support scientific advancement and technological innovation around the world.
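The arithmetic behind the 2019 definition is straightforward: with e fixed exactly, one Ampere is one coulomb per second, i.e. 1/e elementary charges passing a point each second. A short sketch:

```python
# Elementary charge in coulombs, fixed exactly by the 2019 SI redefinition.
E_CHARGE = 1.602176634e-19

def charges_per_second(current_a: float) -> float:
    """Number of elementary charges per second carried by a given current."""
    return current_a / E_CHARGE

print(f"{charges_per_second(1.0):.4e}")  # ~6.2415e+18 charges/s for 1 A
```

This is why the text describes the Ampere as flowing "per second": the unit is anchored to a counted quantity of charge rather than to a mechanical force between conductors, as in the pre-2019 definition.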
The physical constant behind the Ampere, the elementary charge (denoted as "e"), is a fundamental concept in the realm of physics and chemistry. This constant pertains to the electric charge carried by a single proton, or equivalently, the negative of the electric charge carried by a single electron. The precise value of the elementary charge has been determined through multiple innovative and intricate experiments over the last century, with each revision improving upon the accuracy of this vital physical constant.
One of the key experiments that led to the determination of the elementary charge is the oil-drop experiment conducted by Robert A. Millikan and Harvey Fletcher in the early 20th century. Millikan's experiment was designed to measure the electric charge of tiny oil droplets suspended in an electric field. The results showed that the charges on the droplets were always integer multiples of a smallest value, allowing Millikan to quantify the elementary charge itself.
The establishment of the elementary charge as a physical constant was a groundbreaking achievement, providing the basis for our understanding of electric charge at a quantum level. This constant underpins a wide range of phenomena and equations in quantum mechanics and electrodynamics. For instance, it is used in the calculation of the electromagnetic force between charges, given by Coulomb's law.
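As a concrete instance of the Coulomb's law calculation mentioned above, the sketch below computes the electrostatic force between two elementary charges, F = q₁q₂ / (4πε₀r²). The 1 nm separation is an illustrative choice, not a value from the text:

```python
import math

EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m (CODATA)
E_CHARGE = 1.602176634e-19  # elementary charge, C (exact since 2019)

def coulomb_force(q1: float, q2: float, r: float) -> float:
    """Magnitude in newtons of the electrostatic force between two point charges."""
    return q1 * q2 / (4.0 * math.pi * EPS0 * r**2)

# Force between two elementary charges 1 nm apart:
f = coulomb_force(E_CHARGE, E_CHARGE, 1e-9)
print(f"{f:.3e} N")  # on the order of 1e-10 N
```

Tiny as this force looks in newtons, at atomic scales it dwarfs gravity, which is the point the paragraph makes about the constant underpinning electromagnetic phenomena.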
The elementary charge also plays a critical role in the field of quantum electrodynamics (QED), the theory describing how light and matter interact. This theory, considered one of the most successful in the history of physics, relies heavily on the elementary charge as it helps define the coupling constant, which measures the strength of the electromagnetic interaction. In the context of particle physics, the elementary charge is the basic unit of charge, and particles' electric charges are often given as multiples of this constant.
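The coupling constant referred to here is the fine-structure constant, α = e² / (4πε₀ℏc) ≈ 1/137, which can be checked directly from the constants (a sketch using CODATA values; ℏ and c are exact in the current SI, ε₀ is measured):

```python
import math

E_CHARGE = 1.602176634e-19  # elementary charge, C (exact)
EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m (measured)
HBAR = 1.054571817e-34      # reduced Planck constant, J*s (from exact h)
C = 299792458.0             # speed of light, m/s (exact)

# Fine-structure constant: the dimensionless strength of the
# electromagnetic interaction in QED.
alpha = E_CHARGE**2 / (4.0 * math.pi * EPS0 * HBAR * C)
print(f"alpha = {alpha:.10f}, 1/alpha = {1/alpha:.3f}")
```

Because α is dimensionless, its value is the same in any unit system, which is part of why it serves as the natural measure of electromagnetic coupling strength in particle physics.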
For decades, the elementary charge was a measured quantity. However, in the 2019 redefinition of the SI units, its value was fixed by definition at exactly 1.602176634×10⁻¹⁹ coulombs. This change was part of a wider shift towards defining all SI units in terms of physical constants, resulting in more robust and future-proof definitions.
The decision to fix the elementary charge's value had significant implications for the definition of the Ampere. Under the new definitions, the Ampere remains a base unit but is realized through the elementary charge: one Ampere corresponds to the flow of 1/(1.602176634×10⁻¹⁹), or about 6.241×10¹⁸, elementary charges per second. This redefinition, while subtle, underscores the intimate relationship between the Ampere and the elementary charge and how deeply interconnected they are in our understanding of the universe.