Glossary of Terms and Definitions

This section is provided for those who wish to explore a particular topic in depth. The first section covers the basic laws of heat and the development of the modern science of heat, known as thermodynamics. The second section covers a broad range of topics and includes many definitions of terms found in other areas of the site.

Section One

The First Law of Thermodynamics
The findings of Joule and others led Rudolf Clausius, a German physicist, to state in 1850: "In any process, energy can be changed from one form to another (including heat and work), but it is never created or destroyed." This is now known as the first law of thermodynamics. An adequate mathematical statement of this first law is delta E = q - w, where delta E is the change (delta) in internal energy (E) of the system, q is the heat added to the system (a negative value if heat is taken away), and w is work done by the system. In thermodynamic terms, a system is defined as a part of the total universe that is isolated from the rest of the universe by definite boundaries, such as the coffee in a covered Styrofoam cup; a closed room; a cylinder in an engine; or the human body. The internal energy, E, of such a system is called a state function. This means that E is dependent only on the state of the system at a given time, not on how the state was achieved.
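
As a simple numerical illustration of the first law (the values below are invented for the example, not taken from any measurement), a short Python sketch:

    # First law of thermodynamics: delta E = q - w
    q = 500.0            # heat added to the system, in joules
    w = 120.0            # work done BY the system, in joules
    delta_E = q - w      # change in internal energy
    print(delta_E)       # 380.0 J: the system's internal energy rises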

If the system considered is a chemical system of fixed volume--for example, a substance in a sealed bulb--the system cannot do work (w) in the traditional sense, as could a piston expanding against an external pressure. If no other type of work--such as electrical work--is done on or by the system, then the increase in internal energy is equal to the amount of heat absorbed at constant volume: delta E = qv, the subscript v indicating that the volume of the system remains constant throughout the process.

If the heat is absorbed at constant pressure instead of constant volume, as can occur in any unenclosed system, the increase in the energy of the system is instead represented by the state function H, which is closely related to the internal energy (H = E + PV, where P is the pressure and V the volume of the system). Changes in H (heat content) are called changes in enthalpy.

In 1840, before Joule had made his determinations of the mechanical equivalent of heat, Swiss chemist Germain Henri Hess reported the results of experiments that indicated that the heat evolved or absorbed in a given chemical reaction (delta H) is independent of the particular manner in which the reaction takes place or the path the reaction follows. This generalization is now one of the basic postulates of thermochemistry.
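
Hess's law can be checked arithmetically. The sketch below uses common textbook values for the combustion of carbon; the point is only that the two-step path sums to the same delta H as the direct one:

    # Hess's law: delta H for C + O2 -> CO2 is the same whether the
    # reaction runs directly or by way of carbon monoxide.
    # Standard enthalpies of reaction, in kJ/mol (textbook values).
    dH_C_to_CO = -110.5     # C + 1/2 O2 -> CO
    dH_CO_to_CO2 = -283.0   # CO + 1/2 O2 -> CO2
    dH_direct = -393.5      # C + O2 -> CO2
    assert abs((dH_C_to_CO + dH_CO_to_CO2) - dH_direct) < 0.1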

 

The Second Law of Thermodynamics

The second law of thermodynamics concerns the direction of natural processes. In Clausius's formulation, heat cannot of itself flow from a colder body to a hotter one; equivalently, no cyclic process can convert heat entirely into useful work. The law introduces the state function entropy, S, a measure of disorder: in any spontaneous process, the total entropy of a system and its surroundings increases.

The Third Law of Thermodynamics
Entropy, as a measure of disorder, is a function of temperature. Increasing temperature results in an increase in entropy (positive delta S). The third law of thermodynamics considers perfect order. It states that the entropy of a perfect crystal is zero at absolute zero. This reference point allows absolute entropy values to be expressed for compounds at temperatures above absolute zero--a temperature that is itself impossible to attain.

Heat/Overview

Heat is a form of energy. According to the kinetic theory of matter, heat is the result of the continuous motion and vibration of the atoms and molecules that constitute all matter. The transfer of heat between objects of different temperatures by thermal flow processes involves a reduction in the average motion of the particles of the hotter object and an increase in the average motion of the particles of the cooler object. The branch of physics comprising the comprehensive study of the transfer of heat, and of the conversion of heat into work and work into heat in physical and chemical processes, is known as thermodynamics.

Cold is the absence of heat. The coldest possible temperature is absolute zero, -273.15 degrees C. At this temperature, all molecular motion would cease. Even though temperatures within a few millionths of a degree of absolute zero have been achieved, it is impossible in any real process to attain absolute zero itself. This impossibility is a consequence of the third law of thermodynamics.

 

Development of the Concept of Heat

Until the growth of classical physics in the 18th century, scientists did not comprehend the true nature of heat. Even though earlier investigators such as English scientist Robert Boyle had considered heat to be some form of manifestation of the movement of small particles in objects, throughout most of the 18th century heat was still mainly thought of in terms dating back to the days of ancient Greek philosophy. That is, heat was considered to be a kind of basic "element," a form of actual substance. This substance, while all-pervasive, was conceived of as an invisible, weightless material--a fluid called caloric by French chemist Antoine Lavoisier. If a hot object were to be placed in contact with a cooler object, for example, this invisible fluid would enter the cooler object to make it hotter. The notion was sometimes expanded to a two-fluid form--the other, cold fluid being called frigoric.

In the mid-18th century the Scottish chemist Joseph Black, while adhering to the caloric idea, was able to advance scientific understanding of heat's true nature by developing the concepts of heat capacity and latent heat. He also made clear the difference between heat and temperature.

By the end of the 18th century the notion of heat as a fluid, while still prevalent among scientists, was on the way out. It was helped along in particular by the work of Count Rumford and English chemist Humphry Davy. Experiments that they conducted gave strong support to Boyle's concept of heat as a result of motions. After that, in the course of the 19th century, the basic concepts of thermodynamics were worked out by a number of noted physicists, including British scientists James Prescott Joule and Lord Kelvin. From their work arose the modern understanding of heat as a form of energy in transit.

 

Thermodynamics/Overview

Thermodynamics is the branch of the physical sciences that studies the transfer of heat and the interconversion of heat and work in various physical and chemical processes. The term is derived from the Greek words therme (heat) and dynamis (power). The study of thermodynamics is central to both chemistry and physics and is becoming increasingly important in understanding biological and geological processes.

There are several subdisciplines within this blend of chemistry and physics. Classical thermodynamics considers the transfer of energy and work in macroscopic systems--that is, without any consideration of the nature of the forces and interactions between microscopic individual particles. Statistical thermodynamics, on the other hand, links the atomic nature of matter on a microscopic level with the observed behavior of materials on the macroscopic level. (In a further subdivision, statistical thermodynamics proper is concerned with macroscopic processes that are independent of time, while statistical mechanics is concerned with time-dependent processes.) Statistical thermodynamics describes energy relationships based on the statistical behavior of large groups of individual atoms or molecules, and it relies heavily on the mathematical implications of Quantum Mechanics. Chemical thermodynamics focuses on energy transfer during chemical reactions, and on the work done by chemical systems.

Thermodynamics is limited in its scope. It emphasizes the initial and final states of a system--a given system being all of the components that interact in the process under study--and the path, or manner, by which the change takes place. It provides no information concerning either the speed of the change or what occurs at the atomic and molecular levels during the course of the change.

 

 

Development of Thermodynamics

The early studies of thermodynamics were motivated by the desire to derive useful work from heat energy. The first reaction turbine was described by Hero of Alexandria in the 1st century AD. It consisted of a pivoted copper sphere fitted with two bent nozzles and partially filled with water. When the sphere was heated over a fire, steam would escape from the nozzles and the sphere would rotate. The device was not designed to do useful work. It was instead a curiosity, and the nature of heat and heat transfer at that time remained mere speculation. The changes that occur when substances burn were initially accounted for, in the late 17th century, by proposing the existence of an invisible material substance called phlogiston, which was supposedly lost when combustion took place.

In 1789, Antoine Lavoisier prepared oxygen from mercuric oxide. In doing so, he demonstrated the law of conservation of mass and thus overthrew the phlogiston theory. Lavoisier proposed that heat, which he called caloric, was an element, probably a weightless fluid surrounding the atoms of substances, and that this fluid could be removed during the course of a reaction. The observation that heat flowed from warmer to colder bodies when such bodies were placed in thermal contact was explained by proposing that particles of caloric repelled one another.

Roughly simultaneous to these advances, the actual conversion of heat to useful work was progressing as well. At the end of the 17th century Thomas Savery invented a machine to pump water from a well, using steam and a system of tanks and hand-operated valves. Savery's pump is generally hailed as the first practical application of steam power. Thomas Newcomen developed Savery's invention into the first piston engine in 1712. The design of the steam-powered piston engine was further refined by James Watt later in the 18th century.


Section Two

Active Systems

Active solar heating systems commonly consist of several hundred square meters of solar collector panels, plus a storage medium to hold the heat collected during the day, and a set of automatic controls that monitor and regulate both heat collection and delivery between the storage medium and the living space. Active systems use either a liquid (most commonly a mixture of water and an antifreeze such as propylene glycol) or air as the heat-transfer medium. Insulated pipes or ducts carry the heat-transfer medium, called the working fluid, to the collector panels--where the fluid absorbs heat--and then back to storage, which in liquid-based (hydronic) systems is an insulated tank and in air systems is an insulated bin of fist-sized rocks. (Alternatively, phase-change materials may be used to store heat.) The absorbed heat is transferred to the storage medium, and the cooled working fluid is then returned to the collectors to pick up more heat. Heat is removed from storage and delivered to the living space as needed. Most types of active systems require an auxiliary heating system to provide extra heat during extended periods of cloudiness or extreme cold. A typical active heating system might cost between $2,000 and $5,000 per thousand square feet of living space in the northeastern United States, depending on the type of system, its efficiency, and so forth.

A large portion of a building's annual domestic hot water (DHW) needs can be supplied by a relatively inexpensive (between $2,000 and $2,500) active hydronic system using about 9 sq m (100 sq ft) of collectors for a typical residence. A heat exchanger, usually in the hot-water tank, keeps the working fluid separate from the potable water supply. Such systems, although they require a backup energy source, may pay for themselves in energy savings in less than 10 years.
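
The payback claim can be made concrete with a simple-payback sketch in Python. The system cost comes from the entry above; the annual fuel savings are an assumed figure for illustration only:

    # Simple payback for the DHW system described above.
    system_cost = 2250.0     # dollars, midpoint of the $2,000-$2,500 range
    annual_savings = 300.0   # dollars per year of avoided fuel -- assumed
    payback_years = system_cost / annual_savings
    print(payback_years)     # 7.5 years, consistent with "less than 10"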

High-temperature solar collector panels may be used to power absorption-chiller air conditioning. Such systems are relatively expensive but may be cost-effective in climates where plentiful sunshine and a substantial need for air conditioning exist. Also, heat pumps may be used in conjunction with solar panels; solar heat boosts the heat pump's source during the winter, and during the summer the heat pump can discharge heat to the outdoors at night through the collectors.

American Generating Plant Fuel Usage

The central-station generating plants built throughout the United States were generally designed to use the most accessible and economical fuels. Hydroelectric plants were built at locations where dams could be built to impound the water needed to supply the hydraulic energy for the turbogenerators. Power plants near coalfields were likely to have coal-fired furnaces, whereas others were more likely to utilize oil or natural gas as the primary fuel. In time the price of fuel became an important factor in the generating process as fuel transportation systems developed. Many coal-burning plants were converted to use either oil or gas as competition between the fuels and fuel suppliers increased.

Air pollution from coal-fired plants became a major issue in some parts of the country in the 1960s and '70s, leading more utilities to switch to gas or oil. Later, however, shortages of oil and gas required some plants to be converted back to coal, either because high costs or uncertain supplies of the desired fuel meant that it was not available or because of governmental regulatory action. To reduce pollutants to within new statutory limits, some utilities shifted to new--and frequently distant--sources of coal, and some installed sophisticated and expensive devices to cleanse pollutants from plant emissions.

The locations for power plants run by nuclear energy are usually determined not by the source of their fuel but by land availability, access to suitable sources of cooling water, and other physical considerations. At one time the breeder reactor, a nuclear reactor that makes more fuel than it uses, was looked upon as a major potential source of energy for electric power production in the United States, but recent concern about the spread of plutonium, together with sufficient supplies of uranium, has resulted in a decreased interest in breeder-reactor power plants. Sharp differences of opinion exist concerning the safety of nuclear plants, and safety has become a growing public concern, particularly since the reactor accident (1979) at Three Mile Island in Pennsylvania and the nuclear disaster (1986) at Chernobyl in the Soviet Union. As a result, the future of nuclear power in the United States appears uncertain. Increasing concern about the contribution of the burning of fossil fuels to the greenhouse effect has led to a reevaluation of nuclear power, however. Nuclear power plants do not emit "greenhouse gases" such as carbon dioxide.

In 1987, production of electric energy by utilities in the United States totaled 2,570 billion kilowatt-hours (kWh). Of this, 56.9% was produced by coal-burning plants, 4.6% by oil, 10.6% by gas, 9.7% by hydroelectric plants, and 17.7% by nuclear plants. The remainder came from geothermal, wood, waste, and solar plants. This distribution of sources represents a significant decrease in oil use by utilities and an increase in coal and uranium use.

The highly industrialized nature of the United States, together with its population, economic development, and overall size, has made it the world's largest user of electric energy. In 1988, about 35% of the total electricity sold in the United States was used for residential purposes, 35% for industrial activities, and 27% for commercial purposes. The remainder went to farm and miscellaneous uses, and some was lost in the generation, transmission, and distribution system.

Battery

In experimenting with what he called atmospheric electricity, Galvani found that a frog muscle would twitch when hung by a brass hook on an iron lattice. Another Italian, Alessandro Volta, a professor at the University of Pavia, argued that the brass and iron, separated by the moist tissue of the frog, were generating electricity, and that the frog's leg was simply a detector. In 1800, Volta succeeded in amplifying the effect by stacking alternating disks of copper and zinc separated by moistened pasteboard, and in so doing he invented the battery.

A battery separates electrical charge by chemical means. If the charge is removed in some way, the battery separates more charge, thus transforming chemical energy into electrical energy. A battery can move charges--for instance, by forcing them through the filament of a light bulb. Its ability to do work by electrical means is measured by the volt, named for Volta. A volt is equal to 1 joule of work or energy (1 joule = 2.78 x 10^-7 kilowatt-hours) for each coulomb of charge. The electrical ability of a battery to do work is called the electromotive force, or emf.
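
A short Python sketch of these definitions (the 1.5-volt cell is an assumed example):

    # One volt delivers one joule of energy per coulomb of charge moved.
    charge = 1.0              # coulombs
    emf = 1.5                 # volts -- a typical dry cell, assumed
    energy_J = charge * emf   # joules of work done on the charge
    print(energy_J / 3.6e6)   # in kilowatt-hours: 1 kWh = 3.6e6 J,
                              # so 1 J = 2.78e-7 kWh, as quoted above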

Btu

The British thermal unit (Btu) is a quantity of energy usually associated with the production or transfer of heat. Before 1929 it was defined as the amount of heat required to raise the temperature of 1 pound of water 1 degree Fahrenheit (from 59.5 deg F to 60.5 deg F). In 1929 it was redefined in terms of electrical units; it is equivalent to 251.996 calories, 778.26 ft-lb, or approximately one-third of a watt-hour.

Calorie

A calorie is a unit of heat energy, originally defined as the amount of energy, as heat (calor in Latin means heat), required to raise the temperature of 1 g of liquid water from 14.5 deg to 15.5 deg C. Today a calorie is defined in mechanical rather than thermal terms. In this system, 1 calorie (cal) equals 4.184 watt-seconds (W-s), or joules (J).

The energy required to melt 1 g of ice is 80 cal; to boil 1 g of water, 540 cal must be expended. The Earth receives from the Sun approximately 2 cal/min/sq cm of surface area.

The combustion of 1 g of carbon liberates 7,830 cal, or 7.830 kcal (kilocalorie--nutritionists write Cal for kcal). Metabolism of carbohydrates liberates about 4 kcal/g. Fats yield about 9 kcal/g. The caloric requirement of an average adult is about 2,000 kcal/d (day), or about 1.4 kcal/min. Vigorous running expends about 15 kcal/min.

A 1-ft free-fall of a 1-lb mass at sea level (defined as 1 foot-pound of force, or ft lbf) produces 0.324 cal. One horsepower (hp) is equal to 550 ft lbf/s (second), which is equal to 10.7 kcal/min.
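
These conversions can be verified with a few lines of Python (the joule values of the calorie and the foot-pound are standard constants):

    # Checking the figures quoted above.
    CAL = 4.184                 # joules per calorie
    FT_LBF = 1.3558             # joules per foot-pound of force
    print(FT_LBF / CAL)         # ~0.324 cal per ft-lbf, as stated
    hp_watts = 550 * FT_LBF     # 1 hp = 550 ft-lbf/s, ~745.7 watts
    print(hp_watts * 60 / (1000 * CAL))   # ~10.7 kcal/min, as stated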

Capacitor

Another device capable of electrical work is the capacitor, a descendant of the Leyden jar, which is used to store charge. If a charge Q is placed on the metal plates, the voltage rises to an amount V. The measure of a capacitor's ability to store charge is the capacitance C, where C = Q/V. Charge flows from a capacitor just as it flows from a battery, but with one significant difference: when the charge leaves a capacitor's plates, no more can be obtained without recharging. This happens because the electrical force is conservative; the energy released cannot exceed the energy stored. This ability to do work is called electric potential.
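
A Python sketch of the defining relation C = Q/V, together with the standard stored-energy formula E = CV^2/2 (the formula and the numbers are supplied here for illustration; only C = Q/V appears in the entry above):

    # Capacitance and stored energy, with illustrative values.
    Q = 1e-4                 # charge placed on the plates, coulombs
    V = 10.0                 # resulting voltage, volts
    C = Q / V                # capacitance: 1e-5 farads (10 microfarads)
    energy = 0.5 * C * V**2  # energy stored in the field, joules
    print(C, energy)         # 1e-05 F, 0.0005 J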

This type of conservation of energy is also associated with emf. The electrical energy obtainable from a battery is limited by the energy stored in chemical molecular bonds. Both emf and electric potential are measured in volts, and, unfortunately, the terms voltage, potential, and emf are used rather loosely. For example, the term battery potential is often used instead of emf.

Heat Capacity /Specific Heat

Heat can be measured quantitatively. The units of measurement are typically either the calorie or the British thermal unit (Btu). Both of these units were at one time defined in terms of the amount of heat energy required to raise a given amount of liquid water by a given thermometric degree (within a certain degree range). Now the units are defined in mechanical terms--by the amount of work they can do--which can be expressed in electrical-unit equivalents as well. Temperature is a measurement of the degree of heat of an object, not the total quantity of heat energy that the object possesses. Thus an object that is at a higher temperature than another does not necessarily have a greater total heat content than the object at a lower temperature. The sizes and types of material of the objects, as well as their temperatures, determine the total quantities of heat energy they contain.

In thermodynamics, the heat capacity of an object or substance is the amount of heat energy required to raise the temperature of the object or substance by one degree Celsius. Specific heat is a closely related concept. It is the amount of heat necessary to raise a unit mass of matter by one degree. Common units of heat, such as the calorie and British thermal unit, have been defined in terms of the specific heat of water at a standard temperature. The specific heat of copper is 0.093 cal/(g C degree) at room temperature, and the heat capacity of a 100-g copper bar is therefore 9.3 cal per Celsius degree. The specific heats of most materials remain essentially constant over the common range of temperatures. At extremely low temperatures, however, specific heats become considerably smaller.
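
In Python, the relation heat = mass x specific heat x temperature change, using the copper figures above (the 5-degree rise is an assumed example):

    # Heat needed to warm the 100-g copper bar of the text by 5 C degrees.
    m = 100.0    # mass, grams
    c = 0.093    # specific heat of copper, cal/(g*C degree)
    dT = 5.0     # temperature rise, Celsius degrees -- assumed
    q = m * c * dT
    print(q)     # 46.5 cal (the 9.3 cal-per-degree heat capacity, times 5)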

A volume of gas will accept more heat energy per degree of temperature rise if it is allowed to expand freely than if it is confined. Thus a gas has two distinct values of specific heat: one value at constant pressure, and another, smaller value at constant volume. The ratio of these values is different for different gases and is of importance in describing the behavior of a gas undergoing a thermodynamic process.
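
For an ideal gas the two values differ by the gas constant R (Cp = Cv + R), a standard result that is not stated in the entry above; a Python sketch of the resulting ratios:

    # Heat-capacity ratio gamma = Cp/Cv for ideal gases.
    R = 8.314                 # gas constant, J/(mol*K)
    Cv_monatomic = 1.5 * R    # e.g., helium or argon
    Cv_diatomic = 2.5 * R     # e.g., nitrogen or oxygen near room temperature
    print((Cv_monatomic + R) / Cv_monatomic)   # ~1.67
    print((Cv_diatomic + R) / Cv_diatomic)     # ~1.40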

Combustion Turbines

More recently, combustion turbine generators have become popular as peaking units, not only because of their quick-start and intermittent-operation capabilities, but often because of the short time they require for installation. In recent years many U.S. utilities have found themselves deficient in generating capacity when the installation of new facilities has been delayed by problems in procurement, licensing, or construction. Lengthy delays have occurred in many planned nuclear, fossil-fueled, and hydroelectric facilities. Procurement and construction of a large steam-electric station takes from 5 to 10 years even after advance procedural requirements have been met. Thus a combustion turbine unit of 30,000 kW or more, which can be installed and operated within a year or two after procurement, is an attractive alternative. Such units can also provide emergency service during power outages and are often valuable as sources of start-up power for conventional generating plants following a blackout. Capital costs of combustion turbines are lower than those of conventional steam units, but fuel efficiency is usually not as high and maintenance is more expensive.

Conduction

Conduction heat transfer is the flow of thermal energy in matter as a result of molecular collisions. For example, if one end of a metal bar is held in a flame, heat is conducted along the bar. This conduction is initiated by the excitation, or increased vibration, of metal molecules at the hot end of the bar. The excited molecules then collide with other molecules, exciting them also. This process passes thermal energy along the length of the bar. It continues as long as a temperature difference is maintained between the two ends.
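
The rate of conduction is described by Fourier's law, the standard rate law for this process (the law and the copper figures below are supplied for illustration; they are not part of the entry above):

    # Fourier's law: heat flow = k * A * (T_hot - T_cold) / L for a bar.
    k = 401.0     # thermal conductivity of copper, W/(m*K)
    A = 1e-4      # cross-section of a 1 cm x 1 cm bar, m^2
    L = 0.5       # length of the bar, m
    dT = 80.0     # temperature difference between the ends, K
    q = k * A * dT / L
    print(q)      # ~6.4 watts conducted along the bar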

Convection

While conduction involves energy transfer on a microscopic, or atomic, scale, convective heat transfer results from the motion of large-scale quantities of matter. Convection is important in gases and liquids, which are able to expand significantly when they accept thermal energy and can develop currents of material flow. For example, convective heat transfer occurs in a pan of water being heated on a stove. The water at the bottom of the pan accepts heat energy from the pan by conduction. The water in this region then undergoes thermal expansion and is buoyed upward by the surrounding, denser water. The lighter water carries thermal energy throughout the pan by this convection process. That is, the convection current that has been established travels throughout the body of the water, transferring heat and causing a temperature redistribution. Convection currents permit buildings to be heated without the use of circulatory devices; the heated air circulates solely by gravity-driven buoyancy.

Electric Current

An electric charge in motion is called electric current. The strength of a current is the amount of charge passing a given point (as in a wire) per second, or I = Q/t, where Q coulombs of charge pass in t seconds. The unit for measuring current is the ampere or amp, which equals 1 coulomb/sec. Because it is the source of magnetism as well, current is the link between electricity and magnetism. In 1819 the Danish physicist Hans Christian Oersted found that a compass needle was affected by a current-carrying wire. Almost immediately, Andre Ampere in France discovered the magnetic force law. Michael Faraday in England and Joseph Henry in the United States added the idea of magnetic induction, whereby a changing magnetic field produces an electric field. The stage was then set for the encompassing electromagnetic theory of James Clerk Maxwell.

The variation of actual currents is enormous. A modern electrometer can detect currents as low as 1/100,000,000,000,000,000 amp, which is a mere 63 electrons per second. The current in a nerve impulse is approximately 1/100,000 amp; a 100-watt light bulb carries 1 amp; a lightning bolt peaks at about 20,000 amps; and a 1,200-megawatt nuclear power plant can deliver 10,000,000 amps at 115 V.
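
The electrometer figure can be checked in Python using the charge of a single electron:

    # How many electrons per second make up a current of 1e-17 ampere?
    e = 1.602e-19    # charge of one electron, coulombs
    I = 1e-17        # amperes, i.e., coulombs per second
    print(I / e)     # ~62 electrons per second, the "mere 63" quoted above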

Most materials are insulators. In them, all electrons are bound in individual atoms and do not permit a flow of charge unless the electric field acting on the material is so high that breakdown occurs. Then, in a process called ionization, the most loosely bound electrons are torn from the atoms, allowing current flow. This condition exists during a lightning storm. The separation of charge between the clouds and the ground creates a large electric field that ionizes the air atoms, thereby forming a conducting path from cloud to ground.

Electricity

Electricity is a form of energy, a phenomenon that is a result of the existence of electrical charge. The theory of electricity and its inseparable effect, magnetism, is probably the most accurate and complete of all scientific theories. The understanding of electricity has led to the invention of motors, generators, telephones, radio and television, X-ray devices, computers, and nuclear energy systems. Electricity is a necessity to modern civilization.

Electrical History

Amber is a yellowish, translucent fossil resin. As early as 600 BC the Greeks were aware of its peculiar property: when rubbed with a piece of fur, amber develops the ability to attract small pieces of material such as feathers. For centuries this strange, inexplicable property was thought to be unique to amber.

Two thousand years later, in the 16th century, William Gilbert proved that many other substances are electric (from the Greek word for amber, elektron) and that they have two electrical effects. When rubbed with fur, amber acquires resinous electricity; glass, however, when rubbed with silk, acquires vitreous electricity. Electricity repels the same kind and attracts the opposite kind of electricity. Scientists thought that the friction actually created the electricity (their word for charge). They did not realize that an equal amount of opposite electricity remained on the fur or silk. In 1747, Benjamin Franklin in America and William Watson (1715-87) in England independently reached the same conclusion: all materials possess a single kind of electrical "fluid" that can penetrate matter freely but that can be neither created nor destroyed. The action of rubbing merely transfers the fluid from one body to another, electrifying both. Franklin and Watson originated the principle of conservation of charge: the total quantity of electricity in an insulated system is constant.

Franklin defined the fluid, which corresponded to vitreous electricity, as positive and the lack of fluid as negative. Therefore, according to Franklin, the direction of flow was from positive to negative--the opposite of what is now known to be true. A subsequent two-fluid theory was developed, according to which samples of the same type repel, whereas those of opposite types attract.

Franklin was acquainted with the Leyden jar, a glass jar coated inside and outside with tinfoil. It was the first capacitor, a device used to store charge. The Leyden jar could be discharged by touching the inner and outer foil layers simultaneously, causing an electrical shock to a person. If a metal conductor was used, a spark could be seen and heard. Franklin wondered whether lightning and thunder were also a result of electrical discharge. During a thunderstorm in 1752, Franklin flew a kite that had a metal tip. At the end of the wet, conducting hemp line on which the kite flew he attached a metal key, to which he tied a nonconducting silk string that he held in his hand. The experiment was extremely hazardous, but the results were unmistakable: when he held his knuckles near the key, he could draw sparks from it. The next two who tried this extremely dangerous experiment were killed.

It was known as early as 1600 that the attractive or repulsive force diminishes as the charges are separated. This relationship was first placed on a numerically accurate, or quantitative, foundation by Joseph Priestley, a friend of Benjamin Franklin. In 1767, Priestley indirectly deduced that when the distance between two small, charged bodies is increased by some factor, the force between the bodies is reduced by the square of the factor. For example, if the distance between charges is tripled, the force decreases to one-ninth its former value. Although rigorous, Priestley's proof was so simple that he did not strongly advocate it. The matter was not considered settled until 18 years later, when John Robison of Scotland made more direct measurements of the electrical force involved.

The French physicist Charles A. de Coulomb, whose name is used as the unit of electrical charge, later performed a series of experiments that added important details, as well as precision, to Priestley's proof. He also promoted the two-fluid theory of electrical charges, rejecting both the idea of the creation of electricity by friction and Franklin's single-fluid model.

Today the electrostatic force law, also known as Coulomb's law, is expressed as follows: if two small objects, a distance r apart, have charges p and q and are at rest, the magnitude of the force (F) on either is given by F = kpq/r^2, where k is a constant. According to the International System of Units, the force is measured in newtons (1 newton = 0.225 lb), the distance in meters, and the charges in coulombs. The constant k then becomes 8.988 billion. Charges of opposite sign attract, whereas those of the same sign repel. A coulomb (C) is a large amount of charge. To hold a positive coulomb (+C) 1 meter away from a negative coulomb (-C) would require a force of 9 billion newtons (2 billion pounds). A typical charged cloud about to give rise to a lightning bolt has a charge of about 30 coulombs.
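
The 9-billion-newton figure follows directly from the formula; in Python:

    # Coulomb's law: force between +1 C and -1 C, one meter apart.
    k = 8.988e9               # N*m^2/C^2
    p, q, r = 1.0, 1.0, 1.0   # charge magnitudes (C) and separation (m)
    F = k * p * q / r**2
    print(F)                  # ~9e9 newtons, as quoted above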

Because of an accident, the 18th-century Italian scientist Luigi Galvani started a chain of events that culminated in the development of the concept of voltage and the invention of the battery. In 1780 one of Galvani's assistants noticed that a dissected frog leg twitched when he touched its nerve with a scalpel. Another assistant thought that he had seen a spark from a nearby charged electric generator at the same time. Galvani reasoned that the electricity was the cause of the muscle contractions. He mistakenly thought, however, that the effect was due to the transfer of a special fluid, or "animal electricity," rather than to conventional electricity.

Electricity/Overview

Electric power has become an indispensable form of energy throughout much of the world. Even systems that use forms of energy other than electricity are likely to contain controls or equipment that run on electric power. For example, modern home heating systems may burn natural gas, oil, or coal, but most systems have combustion and temperature controls that require electricity in order to operate. Similarly, most industrial and manufacturing processes require electric power, and the computers and business machines of many offices and commercial establishments are paralyzed if electric service is interrupted.

During the first part of the 20th century, only about 10% of the total energy generated in the United States was converted to electricity. By 1990 electric power accounted for about 40% of the total. Developing countries are usually not as dependent on electricity as are the more industrialized nations, but the growth rate of electricity use in some of those countries is comparable to the rate of growth in the early years of electricity availability in the United States.

Electrical Puzzles

In spite of many spectacular successes, important unanswered questions remain within the field of electricity. One basic question remains unanswered: how does the force get from here to there? Perhaps it is by the exchange, between charged particles, of quanta of electromagnetic radiation--hypothetical quanta that are small, chargeless, massless particles in a so-called virtual state. This idea is part of the theory of quantum electrodynamics, developed by Richard Feynman of the California Institute of Technology and Julian Schwinger of Harvard. This theory is puzzling, however, and the complete answer might never be known.

Another unsolved problem involves the electrical theory of matter. The electron is considered a small body packed with negative electrical charge. According to some scientists, it is a ball of charge having a radius of approximately 1/10,000,000,000,000,000 meters. What holds it together? Unless some other force, an attractive one, is involved, the negative charge on one side repelling the negative charge on the other side would tear the particle apart. Another force may exist, although no such force has been found.

Speed of Electricity

As electrons bounce along through the wire, the general charge drift constitutes the current. The average, or drift, speed is defined as the speed the electrons would have if all were moving with constant velocity parallel to the field. The drift speed is actually small even in good conductors. In a 1.0-mm-diameter copper wire carrying a current of 10 amps at room temperature, the drift speed of the electrons is 0.2 mm per second. In copper, the electrons rarely drift faster than one hundred-billionth the speed of light.
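
A Python sketch of the drift-speed calculation. The free-electron density of copper is an assumed handbook value, so the result is rough; it comes out at a fraction of a millimeter to a millimeter per second, the same scale as the figures quoted above:

    # Drift speed v = I / (n * e * A) in a copper wire.
    import math
    n = 8.5e28                   # free electrons per m^3 in copper -- assumed
    e = 1.602e-19                # electron charge, coulombs
    I = 10.0                     # current, amperes
    A = math.pi * (0.5e-3)**2    # cross-section of a 1.0-mm-diameter wire, m^2
    v = I / (n * e * A)
    print(v * 1000)              # ~0.9 mm/s -- far below the speed of light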

On the other hand, the speed of the electric signal is the speed of light. This means that, at the speed of light, the removal of one electron from one end of a long wire would affect electrons elsewhere. For example, consider a long, motionless freight train, with the cars representing electrons in a wire. Because the couplings between cars have play in them, the caboose is affected a short while after the engine begins moving.

During this time the engine moves forward a short distance. The signal telling the caboose to start moves backward quickly, traveling the length of the train in the same time it takes the engine to go forward a meter or so. Similarly, the electron drift speed in a conductor is low, but the signal moves at the speed of light in the opposite direction.

Evaporation

Evaporation is the conversion of a liquid substance into the gaseous state. If the liquid is in an open container, eventually it will evaporate completely. If a liquid is placed in a closed container of larger volume, some molecules leave the liquid and go into the excess space. This process continues until an equilibrium is reached, in which the molecules of vapor return to the liquid at the same rate as they evaporate. The pressure exerted by the vapor in equilibrium with its liquid is called the vapor pressure; it is a characteristic property of each substance at a given temperature, and it increases as temperature increases.

Evaporation causes a decrease in the temperature of the liquid; to maintain a constant temperature, heat must be supplied. The secretion and evaporation of sweat is the principal mechanism by which the human body gets rid of excess heat. High humidity hinders evaporation; in conjunction with high temperature, it causes a person to feel uncomfortable.

The amount of water evaporated from the Earth's surface each year is, on the average, equivalent to a layer 100 cm (39 in) thick over the entire surface of the planet; the process absorbs about one-fourth of the solar radiation that reaches the surface. The water vapor remains in the atmosphere for about 10 days before being returned to the surface as rain or snow. This hydrologic cycle of evaporation and condensation is essential to life on land and is largely responsible for weather and climate.

The Future of Power

Researchers in both government and industry are seeking new technology, methods, and equipment for the years ahead. Among the issues under investigation are the more efficient use of power during peak periods, and the development and greater utilization of power-saving devices, such as high-efficiency light bulbs. The search for economic methods of synthetic fuel creation continues, as well as experimentation in such promising areas as the use of hydrogen for fuel. In addition, recent discoveries of higher-temperature superconducting materials that present lower resistance to current flow may open up important new approaches.

As awareness grows of the need to conserve energy resources, increasing interest also is being shown in the development of small-scale hydroelectric power plants. In the United States, legislation now favors the development of such plants. The Public Utilities Regulatory Policies Act (1978) states that utilities must buy electric power fed into their lines from small, privately owned generators. Such small-scale facilities can make more efficient use of power resources.

In the 1990s, faced with the possibility of government deregulation and increasing energy costs, the electric power industry in the United States was forced to seek more economical ways of generating electricity.

Underground Transmission Cables

Many transmission circuits utilize underground cables, although these installations have been limited largely to locations where rights-of-way for overhead lines could not be obtained or where overhead lines were not feasible because they would have interfered with other activities. In general the costs of underground circuits are several times those of comparable overhead circuits.

Insulation problems with underground cables are very different from those with overhead lines, in which air serves as a major insulating medium. A number of different types of cable designs and insulation have been used in the United States. Solid synthetic insulating materials have given satisfactory results in the lower voltage ranges, but for high-voltage applications the principal insulation is gas or an oil-paper combination. Some extruded synthetic insulations have recently been developed that use materials such as polyethylene.

One common kind of gas- and oil-insulated cable, known as self-contained cable, uses a conductor formed around a hollow core that is later filled with oil under low pressure. The conductor is insulated with an oil-impregnated paper, and the entire assembly is covered with a metal sheath. Three such cables are required, one for each phase of the three-phase power circuits normally used for alternating-current transmission throughout the world. Another cable system, known as pipe-type, utilizes conductors insulated with oil-impregnated paper and covered with metallic and synthetic sheathing tapes. Three of these cables are pulled into a single pipe that is then filled with either gas or oil under high pressure. In the United States, the pipe-type system has been used most.

One significant problem with underground AC circuits is the continuous flow of charging current between the energized conductors and the metallic cable sheaths. Unless expensive compensation devices are used, this charging current can utilize the entire current-carrying capacity of the cable within a few miles of circuit and introduce other operating problems as well. Although these problems do not occur with DC cable systems, DC transmission involves the additional cost of converters.
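
An order-of-magnitude Python sketch of cable charging current. All of the numbers here--voltage class, capacitance per mile, and circuit length--are assumed typical values, not figures from the entry above:

    # Charging current of an AC cable: I = V * (2*pi*f) * C, per phase.
    import math
    V = 138e3 / math.sqrt(3)   # line-to-neutral volts for a 138-kV circuit
    f = 60.0                   # system frequency, Hz
    C_per_mile = 0.3e-6        # cable capacitance, farads per mile -- assumed
    miles = 20.0               # circuit length -- assumed
    I_charging = V * 2 * math.pi * f * C_per_mile * miles
    print(I_charging)          # ~180 A flowing before any load is served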

Aesthetic concerns and difficulties in obtaining rights-of-way have increased the pressures to place power circuits underground, and the future will probably see a significant expansion in the use of underground systems. The most extensive extra-high-voltage (EHV) underground cable system at present is the 345 kV network that supplies the New York City area.

Growth of The Electric Power Industry

The first commercial electric-power installations in the United States were constructed in the latter part of the 19th century. The Rochester, N.Y., Electric Light Co. was established in 1880. In 1882, Thomas A. Edison's Pearl Street steam-electric station began operation in New York City and within a year was reported to have had 500 customers for the lighting services it supplied. A short time later a central station powered by a small waterwheel began operation in Appleton, Wis.

In 1886 the feasibility of sending electric power greater distances from the point of generation by using alternating current (AC) was demonstrated at Great Barrington, Mass. The plant there utilized transformers to raise the voltage from the generators for a high-voltage transmission line.

The electric power industry of the United States grew from small beginnings such as these to become, in less than 100 years, the most heavily capitalized industry in the country. It now comprises about 3,100 different corporate entities, including systems of private investors, federal and other government bodies, and cooperative-user groups. Less than one-third of the corporate groups have their own generating facilities; the others are directly involved only in the transmission and distribution of electric power.

For several decades electric power use in the United States grew at an average annual rate of about 7%, a rate that results in a doubling every 10 years. The rate of growth remained constant, with only minor year-to-year variations, until the early 1970s, when fuel shortages and rising concern over possible environmental damage, together with reduced expansion of the U.S. economy, slowed the growth rate. In the period from 1974 to 1985 the annual increase in electricity use varied between 1.7% and 6.2%. Although total energy use in the United States has either declined or remained unchanged since 1973, electricity use has continued to grow.

Heat Engines

The steam engine developed by James Watt in 1769 was a type of heat engine. A heat engine is any device that withdraws heat from a heat source, converts some of this heat into useful work, and transfers the remainder of the heat to a cooler reservoir. A major advance in the understanding of the heat engine was provided in 1824 by N. L. Sadi Carnot, a French engineer, in his discussion of the cyclic nature of the heat engine. This theoretical approach is known as the Carnot Cycle.
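
The Carnot cycle sets the upper limit on the fraction of heat any engine can convert to work; a Python sketch with assumed reservoir temperatures:

    # Carnot efficiency: the best any heat engine can do between
    # a hot source and a cold reservoir.
    T_hot = 800.0    # kelvin, e.g., boiler steam -- assumed
    T_cold = 300.0   # kelvin, e.g., the cooling reservoir -- assumed
    efficiency = 1 - T_cold / T_hot
    print(efficiency)   # 0.625: at most 62.5% of the heat becomes work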

Heat Exchanger

A heat exchanger is a device in which heat is transferred from one fluid, across a tube or other solid surface, to another fluid. When two fluids at different temperatures enter the heat exchanger, the temperature of the cold fluid increases and that of the hot fluid decreases; ideally, none of the transferred heat is lost to the surroundings. Any device in which a temperature difference exists may be classified as a heat exchanger. However, a heat exchanger generally is considered to be a device for the transfer, elimination, or recovery of heat without a change of the fluids' state. If a fluid condenses, the heat exchanger is a condenser. If a fluid evaporates, the heat exchanger is an evaporator.

The simplest heat exchanger is a tube through which a hot liquid flows. Cool air flows around the outside of the tube to carry away the heat, thereby heating the air and cooling the liquid inside the tube. The automobile radiator is an example. Usually a liquid-cooled heat exchanger is of the shell-and-tube type. In it a smaller tube runs inside a larger tube, tank, or shell. Cold liquid flows through one tube to cool the hot liquid flowing in the other tube. This type of heat exchanger is used in automobiles with automatic transmissions to cool the hot automatic-transmission fluid. The hot fluid is circulated through a tube located in the lower tank of the radiator. There the cooler liquid in the engine cooling system surrounds the tube to pick up and carry away the excess heat.

Heat exchangers may be fired or unfired. Examples of fired heat exchangers are boilers, furnaces, and engines. The typical forced-air furnace widely used to heat homes has a large heating chamber, or heat exchanger. Cool air is circulated in close contact with the hot iron firebox. This heats the air for distribution through ducts. Unfired heat exchangers are condensers, coolers, and evaporators. These are used in heating and refrigeration systems, power-plant cooling, and chemical and food-processing plants.

History of Plastics

The first synthetic plastic was celluloid, a mixture of cellulose nitrate and camphor. Invented in 1856 by Alexander Parkes, it was used initially as a substitute for ivory in billiard balls, combs, and piano keys. The high flammability of celluloid has restricted its use to products that are small in size. For years celluloid was widely used in photographic and motion picture film stock, until it was superseded by the less dangerous polymer cellulose acetate.

In 1909 the second synthetic plastic, phenol-formaldehyde (also called Bakelite), was invented by Leo Baekeland when he simply heated a mixture of phenol and formaldehyde. Shortly before World War II a number of synthetic polymers were developed, including casein, nylon, polyesters, polyvinyl chloride, polystyrene, and polyethylene. Since then the number as well as the types and qualities of plastics have greatly increased, producing superior materials such as epoxies, polycarbonate, Teflon, silicones, and polysulfones.

Two modern trends found in the development of plastic materials are of interest. One is the increased number of foamed plastics--plastics that are embedded with gas--and the other is the specific designing of plastics to satisfy particular service requirements. The ability of chemists to tailor the properties of plastics has become powerful and dramatic. This may be illustrated by polyethylene, which is soft and waxy when used as a film, but hard and abrasion-resistant when used as a socket for an artificial hip joint.

Heat Pump

A heat pump efficiently heats and cools air. It works on a direct expansion-refrigeration cycle for cooling and a reverse-refrigeration cycle for heating. During cooling, the refrigerant is compressed and discharged through a four-way reversing valve that sends the hot gas to a condenser, where it is liquefied. The high-pressure liquid flows through the expansion valve, where it is expanded to a low-pressure gas in the evaporator, a heat exchanger that transfers heat from the air being cooled to the refrigerant, vaporizing it. The gas is returned to the compressor to repeat the cycle.

In the heating mode, the four-way reversing valve sends the hot gas from the compressor into the indoor coil (the evaporator of the cooling cycle), where it heats air passing over the coils. The high-pressure, high-temperature gas becomes a liquid that is forced through the expansion valve into the outdoor coil (the condenser of the cooling cycle), which now functions as an evaporator. Heat from outside air vaporizes the liquid refrigerant, which becomes a low-pressure, low-temperature gas and returns to the compressor.

Heat pumps that use air as the cooling and heating medium work best in relatively mild climates. At very high or very low temperatures the heat pump's efficiency decreases rapidly. This is especially true in the heating cycle. Most heat pumps are small units with heating and cooling capacities between 6,000 and 300,000 British thermal units per hour. Larger systems using well water instead of air deliver unmatched efficiency in the heating cycle. A ground system heat pump extracts the heat in the ground below the frost line through pipes buried in the soil. Such a system can also extract heat from groundwater.
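
The rapid loss of heating efficiency at low outdoor temperatures follows from the ideal (Carnot) coefficient of performance, Th / (Th - Tc); a Python sketch with assumed indoor and outdoor temperatures:

    # Ideal heating COP of a heat pump at several outdoor temperatures.
    T_indoor = 294.0                          # kelvin, about 21 deg C
    for T_outdoor in (283.0, 273.0, 253.0):   # 10, 0, and -20 deg C
        cop = T_indoor / (T_indoor - T_outdoor)
        print(T_outdoor, round(cop, 1))       # 26.7, 14.0, 7.2
    # Real machines achieve far lower figures, but they show the same
    # steep decline as the outdoor temperature drops.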

It is possible to miniaturize all the components in a heat pump except for the compressor. Scientists theorize that eventually miniature heat pumps, positioned on room walls, could heat and cool houses far more efficiently and inexpensively than present heating and cooling systems.

Heat Transfer

Heat transfer concerns the flow of heat energy in matter as a result of differences in temperature. The energy, whether in the form of molecular motion or electromagnetic radiation, obeys certain natural laws of heat transfer in flowing from one body to another. Heat energy flows naturally in only one direction, that is, from hotter objects to cooler ones, and specialized devices are needed to reverse this natural direction of transfer. Heat transfer takes place through conduction, convection, or radiation. The science of heat transfer relates the rates of heat flow to temperature differences and material properties. The efficient operation of any device that uses energy is likely to depend on reducing certain rates of heat transfer and increasing others. For example, a home heating system operates most efficiently when the heat loss through the building walls is minimized and the heat-transfer rate from the burning fuel to the room air is maximized.

Infrared radiation

Infrared radiation is the region of the electromagnetic spectrum between visible light and microwaves, containing radiation with wavelengths ranging from about 0.75 microns (1 micron equals 1 one-millionth of a meter) to about 1,000 microns (1 mm). These limits are arbitrary, because the characteristics of the radiation are unchanged on either side of the limits. The discovery of infrared radiation is attributed to Sir William Herschel who, in 1800, dispersed sunlight into its component colors with a prism and showed that most of the heat in the beam fell in the spectral region beyond the red, where no visible light existed. In 1847, Armand Fizeau and Jean Foucault of France showed that infrared radiation, although invisible, behaved similarly to light in its ability to produce interference effects.

Infrared radiation is generally associated with heat because heat is its most easily detected effect. Most materials, in fact, readily absorb infrared radiation in a wide range of wavelengths, which causes an increase in the temperatures of the materials. All objects with a temperature greater than absolute zero emit infrared energy, and even incandescent objects usually emit far more infrared energy than visible radiation; about 60% of the Sun's rays are infrared. Sources of infrared radiation other than hot, solid bodies include the emissions of electrical discharges in gases and the laser, which can emit highly monochromatic (single-wavelength) infrared radiation.

Infrared radiation can be used to detect the temperature of a distant object and therefore has many temperature-sensing applications, such as in astronomy or in heat-seeking military missiles. Photographs taken by infrared radiation reveal information not detectable by visible light. In the laboratory, infrared spectroscopy is an important method for identifying unknown chemicals.

Joule

The joule is the unit of energy or work in the mks (meter-kilogram-second) system of units. It is the work done when a force of 1 newton acts through a distance of 1 meter, and is thus synonymous with a newton-meter of work. One joule is equivalent to 1 watt-second, 10 million ergs, 0.7376 foot-pounds, or 9.48 x 10^-4 Btu.

Natural Gas

Natural gas, a flammable gas within the Earth's crust, is a form of petroleum and is second only to crude oil in importance as a fuel. Natural gas consists mostly (88 to 95 percent) of the hydrocarbon methane (CH4), but proportions of hydrocarbons higher in the methane series are usually present, including ethane (C2H6), 3 to 8 percent; propane (C3H8), 0.7 to 2 percent; butane (C4H10), 0.2 to 0.7 percent; and pentane (C5H12), 0.03 to 0.5 percent. Other gases present include carbon dioxide (CO2), 0.6 to 2.0 percent; nitrogen (N2), 0.3 to 3.0 percent; and helium (He), 0.01 to 0.5 percent. Carbon dioxide, nitrogen, and helium detract slightly from the heating value of natural gas. Helium and carbon dioxide, however, are valuable in their own right; in certain natural gases where their concentrations are relatively high, they may be extracted commercially.

The hydrocarbons that make up natural gas are a component of in-ground petroleum. In the past the gas was considered a useless by-product of oil production and was burned off in the oil fields as waste. Coal beds also contain appreciable quantities of methane, the principal component of natural gas.

Natural gas is produced on all continents except Antarctica. The world's largest producer is Russia. The United States, Canada, and the Netherlands are also important producers.

The most efficient, least costly means of transporting natural gas is via pipeline. The United States has nearly 3.2 million km (2 million mi) of natural-gas pipeline, much of it built during World War II. The Siberian-Western Europe gas pipeline, completed in 1983, was built to exploit the huge natural gas reserves of the former USSR, primarily in present-day Russia.

The gas may also be transported in pressurized tanks. Liquefied natural gas (LNG) must be kept at very low temperatures during transport, but it requires far less space than the substance in its gaseous state.

Natural gas is used primarily as a fuel and as a raw material in manufacturing. It fuels home furnaces and water heaters, clothes dryers, and cooking stoves. It is used in brick, cement, and ceramic-tile kilns; in glass making; for generating steam in water boilers; and as a clean heat source for sterilizing instruments and processing foods.

As a raw material in petrochemical manufacturing, its uses are widespread. They include the production of sulfur, carbon black, and ammonia. Ammonia is used as a source of nitrogen in a range of fertilizers and as a secondary feedstock for manufacturing other chemicals, including nitric acid and urea. Ethylene, perhaps the most important basic petrochemical produced from natural gas, is used in manufacturing plastics and many other products.

Widespread concern about the environmental damage caused by the burning of coal and petroleum, and the realization that reserves of natural gas may be much greater than was once estimated, have spurred new technologies that have already increased its use significantly. Small gas-turbine generators add capacity to power-generation plants, and utilities anticipate lowered pollution as more natural gas replaces coal and oil. Because it is a clean-burning fuel that emits less carbon monoxide and carbon dioxide than gasoline, natural gas is already being used instead of gasoline in some U.S. truck, bus, and auto fleets.

By the end of the 20th century, the greatest growth in the use of natural gas will occur in the Pacific region. Projections for a 70 percent increase in energy use in that region by the year 2000 have spurred plans for the construction of international pipelines running east and south from Russia and Kazakhstan, north from Australia and Indonesia, and west from Alaska.

Overhead Transmission Lines

Many of the first high-voltage transmission lines in the United States were built principally to transmit electrical energy from hydroelectric plants to distant industrial locations and population centers. High-voltage transmission lines were originally designed to permit the construction of large generating units and central stations on attractive, remote sites close to fuel sources and supplies of cooling water. Today, however, they connect different power networks in order to achieve greater economy by exchanges of low-cost power, to achieve savings in reserve generating capacity, to improve the reliability of the system, and to take advantage of diversity in the peak loads of different systems and thereby reduce operating costs.

At one time power lines in the 33-kV or 44-kV class were classified as high-voltage lines. As loads increased and transmission distances became greater, transmission voltages were increased. Electrical losses increase in proportion to the square of the current; the higher the voltage of the line, the lower the current needed to carry an equivalent amount of power. Moreover, one high-voltage line can usually carry as much power as several lower-voltage ones, so the use of higher voltages reduces the number of lines required and conserves the space required for rights-of-way. Voltage levels increased to 69, 115, 138, and 161 kV in various sections of the United States. Before World War II the highest-voltage lines in the United States were 230 kV, with the exception of one 287-kV line from Boulder Dam to Los Angeles. In the early 1950s several 345-kV lines were constructed. By 1964 the first 500-kV lines in the United States were being completed, and in 1969 the first 765-kV line was put into service. All of these involved AC systems.
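
The square-law relationship can be seen in a simplified single-conductor Python sketch; the power level and line resistance are assumed values:

    # Doubling the voltage halves the current and quarters the I^2*R loss.
    P = 100e6              # power carried, watts -- assumed
    R = 10.0               # line resistance, ohms -- assumed
    for V in (115e3, 230e3):
        I = P / V                    # current needed at this voltage
        print(V, I**2 * R / 1e6)     # line losses in megawatts: ~7.6 vs ~1.9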

In 1970 a 1,380-km (856-mi), 800-kV direct-current (DC) line was placed in commercial service to connect northwestern U.S. hydroelectric sources with the Los Angeles area. Such systems offer an economical means of transferring large quantities of power over long distances. They also avoid stability problems sometimes encountered by AC systems; DC systems are sometimes used to connect AC systems even over short transmission distances.

Peak Load

All electric-utility systems experience cyclic load patterns involving higher demands for electric power at some hours of the day and some seasons of the year than at others. Such considerations affect the design of a utility's generating plant because some types of generating equipment are better suited to supplying base, or continuous, loads and may not operate satisfactorily or economically over a varying load cycle; others are better designed for the variable loading, intermittent use, and frequent start-up and shutdown required by such patterns of operation. Hydroelectric plants are often well adapted to intermittent operation and may be useful for supplying peaking power. They can be constructed only in special locations, however, and utilities must often rely on fuel-burning plants to supply peaking needs. Steam plants especially designed for peaking service have been installed in a few systems, and internal combustion units have sometimes been used for such service.

Properties of Plastics

The bonding properties and chemical versatility of carbon account for the great number of plastics. Although carbon is the backbone of polymer chains, other elements are included, to varying degrees, in the chemical structures of plastics. These include hydrogen, oxygen, nitrogen, chlorine, fluorine, and occasionally other elements, such as sulfur and silicon.

While progress in polymer technology makes it increasingly difficult to make general statements about these materials, the following properties are characteristic of most plastics:

low strength--for the familiar plastics, about one-sixth the strength of structural steel

low stiffness (technically, modulus of elasticity)--less than one-tenth that of metals, except for reinforced plastics

a tendency to creep, that is, to increase in length under a tensile stress

low hardness (except formaldehyde plastics)

low density, usually an advantage, the density of most plastics being close to that of water

brittleness at low temperatures and loss of strength and hardness at moderately elevated temperatures (thermal expansion of plastics is about ten times that of metals)

flammability, although many plastics do not burn

outstanding electrical characteristics, such as high electrical resistance

degradation of some plastics by environmental agents such as ultraviolet radiation, although most plastics are highly resistant to chemical attack

Almost all of the characteristics mentioned above can be modified to some degree by adding suitable fillers or reinforcing fibers to a given plastic. For example, a number of plastics have been developed that can sustain elevated temperatures, including Teflon and the silicones. Adding other materials to a plastic generally reduces its electrical resistance. On the other hand, a number of plastics have more recently been developed for the specific purpose of making them electrically conductive. The aim of such research is to produce cheap and lightweight components for use in the electronics industry.

Ethylene-Based Plastics

The simplest structure among the many thermoplastics is that of polyethylene. Addition polymerization is the name given to the process in which each ethylene monomer opens up at a double bond and joins to the end of the lengthening chain. The earliest thermoplastics to be developed had the basic structure of polyethylene and were made by addition polymerization. These polymers could be created simply by substituting other atoms or groups of atoms for one or more of the four hydrogen atoms in the ethylene monomer. Polyvinyl chloride is made from an ethylene monomer in which one chlorine atom has replaced one hydrogen atom. The result is a polymer that is nonflammable. Polyvinyl fluoride is made from an ethylene monomer in which a fluorine atom has replaced a hydrogen atom. The result is another polymer with improved heat resistance. Polyvinyl alcohol involves the substitution of an OH group, which causes the polymer to be water soluble. Polytetrafluoroethylene (Teflon) contains fluorine atoms in place of all hydrogen atoms. The well-known properties of this plastic include remarkable heat resistance as well as the inability to be softened by heat. In polypropylene a methyl group (CH3) replaces one hydrogen atom. In the monomer of polystyrene a phenyl ring of six carbon atoms is attached to the ethylene unit in place of one hydrogen atom. This bulky side group results in a brittle plastic.

Except for the fluorinated polymers and the acrylic polymers, thermoplastics must be protected from destruction caused by ultraviolet radiation. Carbon black provides such protection in polyethylene pipe, but other additives must be used if the product must be white or pigmented.

The consumption of polyethylene exceeds that of any other plastic. This soft, flexible, waxy material is produced in five grades: low density, medium density, high density, ultrahigh molecular weight (UHMW), and irradiated (cross-linked by radiation). It is also made into a flexible foam. The differences in density result from differences in the degree of crystallinity. When the long polymer chains are ordered in a parallel arrangement like the atoms in a metal crystal, the result is a higher density than would be possible in a random or disordered distribution. The branching of polymer chains also leads to lower densities. Although low-density polyethylene has the highest vapor transmission rate, it is the least expensive of the five grades and is used as a vapor barrier in buildings. High-density polyethylene is used in blown bottles and pipes. The UHMW grade is a harder, stronger material.

Polypropylene is hard and strong, and has a higher useful temperature range than polyethylene, polyvinyl chloride, and polystyrene. It is highly crystalline. At low temperatures it becomes brittle, but this is overcome by copolymerization with ethylene or other monomers.

Polymethyl methacrylate (PMMA), also called acrylic, is known by its trade names Lucite and Plexiglas. Its monomer contains a complex side group, which prevents crystallization. PMMA has outstanding resistance to outdoor environments, including ultraviolet radiation. It has excellent optical properties and unlimited coloring possibilities. It is also harder and stronger than the plastics previously mentioned, although it is brittle. PMMA is familiar in lighting fixtures, outdoor signs, aircraft windows, and automobile taillights.

The fluorocarbon group consists of several polymers, all containing fluorine. The presence of fluorine makes these polymers nonflammable. The carbon-fluorine bond is extremely stable and provides chemical and heat stability and low surface tension, thus leading to low friction and nonwetting, nonstaining, nonsticking properties. Newer resins called Teflon AF are also amorphous, which enhances their physical properties and makes them potentially of great use in optical and electronic circuits for computers and instruments. Polyvinylidene fluoride (PVDF) is a tough, protective plastic that can be processed to exhibit piezoelectricity, making it valuable for many applications in electronics.

Polyvinyl chloride (PVC) is a stiff plastic made soft and flexible by adding plasticizers. It is used in shower curtains, hoses, and electrical insulation. Polystyrene is a clear, hard, brittle plastic that is attacked by many solvents.

Power

Power, in physics, is the rate at which work is done. A given amount of work done over a long period of time represents less power than that work done over a short period of time. The average power required to accomplish a certain amount of work is found by dividing the work by the time period during which it is done. The instantaneous power requirement at any moment during the job may be found from the time derivative of the work function, a concept of calculus. For example, the familiar 1/4-horsepower electric motor in many household appliances may deliver several horsepower for a short period just after it is turned on; its average power output over a long running period is likely to be somewhat less than 1/4 horsepower. Units of power are properly expressed in terms of work per unit time. In the international metric (SI) system, power is expressed in joules per second or watts (W). A machine capable of delivering 746 W of continuous power is rated at one Horsepower in the English system of physical units. Other units of power include the Btu/h and the foot-pound/sec.
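
Both ideas can be put in a few lines of Python. This is a minimal sketch; the work figure and the work-versus-time function are illustrative assumptions, not taken from the text:

    # Average power: total work divided by the elapsed time (J/s = W).
    work_j = 33550.0                 # assumed job: joules of work done
    time_s = 180.0                   # completed over three minutes
    avg_w = work_j / time_s
    print(f"{avg_w:.1f} W = {avg_w / 746:.3f} hp")

    # Instantaneous power: the time derivative of the work function,
    # approximated here by a central difference.
    def work(t):                     # assumed work function, joules at t seconds
        return 50.0 * t ** 2 / (t + 10.0)

    def inst_power(t, dt=1e-6):
        return (work(t + dt) - work(t - dt)) / (2.0 * dt)

    print(f"{inst_power(60.0):.1f} W at t = 60 s")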

Propane

Propane is a Hydrocarbon of molecular weight 44.09 and boiling point -42.1 deg C (-43.8 deg F) at atmospheric pressure. Like ethane and methane, propane is a member of the Alkane series of hydrocarbons. It is an important fuel gas and chemical feedstock and is a major constituent of Liquefied Petroleum Gas. The net calorific value of propane is about 12,000 cal/g (21,600 Btu/lb). When used as a fuel, 23.8 cu m of air are required for the combustion of 1 cu m of propane gas, with the products of combustion being carbon dioxide, water, and nitrogen. The ignition temperature is 466 deg C, and the flame temperature is 1,970 deg C.
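
The 23.8-cu m figure can be recovered from the balanced combustion equation C3H8 + 5 O2 -> 3 CO2 + 4 H2O, since equal volumes of gas contain equal numbers of molecules. A quick Python check, assuming air is about 20.9 percent oxygen by volume:

    # Volumes of gas are proportional to moles (Avogadro), so the ratio of
    # air to propane follows from the 5:1 oxygen requirement.
    o2_per_propane = 5.0            # from C3H8 + 5 O2 -> 3 CO2 + 4 H2O
    o2_fraction_air = 0.209         # assumed volume fraction of O2 in air
    air_per_propane = o2_per_propane / o2_fraction_air
    print(f"{air_per_propane:.1f} cu m air per cu m propane")   # ~23.9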

Electric Generating Plants

Virtually all commercial electric energy is now produced by generators driven by steam from the burning of fossil fuels or from nuclear sources or by hydropower. Developed nations depend mainly on fossil fuels, but some countries now depend more heavily on nuclear energy produced by materials such as uranium. France, for example, generates about 70% of its electricity from nuclear power plants; power costs in that nation are the lowest in Europe.

A basic steam-power plant includes a furnace or reactor for raising the temperature of the water in a boiler, or steam generator, until it changes into steam, and a turbine, which drives the generator to produce electric power. Throughout the history of the electric power industry, improvements in design, metallurgy, fabrication techniques, and control systems have permitted continual increases in the size, operating temperatures, pressures, and efficiencies of electric generating units. These improvements and increasing demands for electric power have led generating facilities to develop from the early steam-engine-driven generator, which could produce a few kilowatts (kW), to today's giants, with outputs as high as 1,300,000 kW. Hydroelectric, or waterpower, generators have grown from the 12-kW machines of 1882 to the 600,000-kW units at the Grand Coulee station in Washington state.

Electrical Power Transmission

Electric power transmission systems consist of step-up transformer stations to connect the lower-voltage power-generating equipment to the higher-voltage transmission facilities; high-voltage transmission lines and cables for transferring power from one point to another and pooling generation resources; switching stations, which serve as junction points for different transmission circuits; and step-down transformer stations that connect the transmission circuits to lower-voltage distribution systems or other user facilities. In addition to the transformers, these transmission substations contain circuit breakers and associated connection devices to switch equipment into and out of service, lightning arresters to protect the equipment, and other appurtenances for particular applications of electricity. Highly developed control systems, including sensitive devices for rapid detection of abnormalities and quick disconnection of faulty equipment, are an essential part of every installation in order to provide protection and safety for both the electrical equipment and the public.

Radiation

Radiative heat transfer involves the flow of energy in the form of electromagnetic waves. Radiation thus differs fundamentally from conduction and convection, in that it does not depend on the presence of matter. The energy of electromagnetic radiation is not the same thing as heat, but when the radiation strikes an absorbing material it is converted into heat. Heat may even be transmitted across a vacuum in this way, through conversion processes. To be transmitted, however, the energy must originate in matter at a higher temperature than the matter receiving the energy.
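
The temperature dependence of this net flow is commonly described by the Stefan-Boltzmann law, which is standard physics rather than something stated above. A minimal Python sketch with illustrative surface temperatures:

    # Net radiative exchange between two surfaces, using the
    # Stefan-Boltzmann law P = e * sigma * A * (Th^4 - Tc^4).
    SIGMA = 5.67e-8                      # W / (m^2 K^4)

    def net_radiation_w(t_hot_k, t_cold_k, area_m2=1.0, emissivity=0.9):
        # Positive when the first surface is hotter: energy flows from
        # the warmer body to the cooler one.
        return emissivity * SIGMA * area_m2 * (t_hot_k ** 4 - t_cold_k ** 4)

    print(net_radiation_w(320.0, 293.0))   # assumed panel and wall temperatures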

The term radiation refers both to the transmission of energy in the form of waves, and to the transmission of streams of atomic particles through space. Any energy that is transmitted in the form of waves is some kind of electromagnetic radiation. Each kind is distinguished by its wavelength, or frequency. All kinds of electromagnetic radiation obey the same physical laws: they all travel at the speed of light, and when they fall on a surface they exert a pressure proportional to the net flux of energy divided by the speed of light. Roughly in the order of decreasing wavelength, the kinds of electromagnetic radiation are radio waves, radiant heat energy and microwaves, infrared radiation, light, ultraviolet radiation, X rays, and gamma rays. Many forms of particulate radiation are possible. In the phenomenon of radioactivity, alpha radiation and beta radiation are observed, along with gamma rays. Very energetic particles from outer space are called cosmic rays. Any particulate or electromagnetic radiation that can dissociate atoms into ions is called ionizing radiation. Such radiation can produce harmful effects in organisms, and it is of concern in matters dealing with nuclear energy. It is also widely used in medicine, however, for both diagnosis and therapy, as well as in scientific research.
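
Because every kind travels at the same speed, wavelength and frequency are two views of one quantity, related by c = wavelength x frequency. A short Python illustration using rough, order-of-magnitude wavelengths (the sample values are assumptions):

    # c = wavelength * frequency; the sample wavelengths are rough,
    # order-of-magnitude values for each kind of radiation.
    C = 2.998e8                          # speed of light, m/s
    for name, wavelength_m in [("radio", 1.0), ("infrared", 1e-5),
                               ("visible light", 5e-7), ("X ray", 1e-10)]:
        print(f"{name}: {C / wavelength_m:.2e} Hz")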

Resistance

Although a conductor permits the flow of charge, it is not without a cost in energy. The electrons are accelerated by the electric field. Before they move far, however, they collide with one of the atoms of the conductor, slowing them down or even reversing their direction. As a result, they lose energy to the atoms. This energy appears as heat, and the scattering is a resistance to the current.

In 1827 the German teacher Georg Ohm demonstrated that the current in a wire increases in direct proportion to the voltage V and the cross-sectional area of the wire A, and in inverse proportion to the length L. Because the current also depends on the particular material, Ohm's law is written in two steps: I = V/R, and R = rho L/A, where rho is the resistivity. The quantity R is called the Resistance. The Resistivity depends only on the type of material. The unit of resistance is the Ohm, where 1 ohm is equal to 1 volt/ampere.

In lead, a fair conductor, the resistivity is 22/100,000,000 ohm-meters; in copper, an excellent conductor, it is only 1.7/100,000,000 ohm-meters. Where high resistances between 1 and 1 million ohms are needed, Resistors are made of materials such as carbon, which has a resistivity of 1,400/100,000,000 ohm-meters.

Certain materials, such as lead, lose their resistance almost entirely when cooled to within a few degrees of absolute zero. Such materials are called superconductors. Substances have recently been found that become superconductive at much higher temperatures.

The resistive heating caused by electron scattering is a significant effect and is used in electric stoves and heaters as well as in incandescent light bulbs. In a resistor the power P, or energy per second, is given by P = (I squared) R.
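
Combining the entry's three relations (R = rho L/A, I = V/R, and P = I squared R) in a short Python sketch, with the copper resistivity quoted above but assumed wire dimensions and voltage:

    import math

    # R = rho * L / A with the copper resistivity quoted above; wire
    # dimensions and applied voltage are assumptions for illustration.
    rho = 1.7e-8                         # ohm-meters, copper (from the text)
    length_m = 100.0                     # assumed wire length
    area_m2 = math.pi * (0.5e-3) ** 2    # assumed 1-mm-diameter wire

    r_ohms = rho * length_m / area_m2    # about 2.2 ohms
    volts = 1.5                          # assumed applied potential
    amps = volts / r_ohms                # Ohm's law: I = V / R
    watts = amps ** 2 * r_ohms           # resistive heating: P = I^2 * R
    print(f"R = {r_ohms:.2f} ohm, I = {amps:.2f} A, P = {watts:.2f} W")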

Solar Radiation

Solar radiation is the radiation given off by the Sun, consisting mainly of visible light, ultraviolet radiation, and infrared radiation, although the whole spectrum of electromagnetic waves is present, from radio waves to X rays. High-energy charged particles such as electrons are also emitted, especially from solar flares. When these reach the Earth, they cause magnetic storms (disruptions of the Earth's magnetic field), which interfere with radio communications.

Thermostat

A thermostat is an electromechanical on/off switch that is activated by temperature changes. It is typically used to control a heating or cooling system. The sensing element is usually a spiral bimetallic strip that coils and uncoils in response to temperature changes because of differential expansion of the two bonded metals. The switch element is either a set of electrical contacts or a glass-encapsulated mercury switch that controls a low-voltage relay. The relay can actuate a motor starter and igniter for an oil burner, a heavy-duty switch for electrical units, or a solenoid-operated valve on a gas furnace. The thermostat may also control a house-type air conditioner or heat pump. To reduce temperature swings, a small electrical heater unit is energized during the warming period, causing the switch to open prematurely in anticipation of the heat still to be delivered by the room heater.
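
The switching behavior just described can be summarized in a few lines of Python. This is a minimal sketch; the function name, setpoint, switching differential, and anticipator offset are all illustrative assumptions, not values given in the text:

    # On/off switching with a differential (hysteresis) and an anticipator;
    # setpoint, differential, and anticipator offset are assumed values.
    def thermostat_step(room_temp_c, heater_on, setpoint_c=20.0,
                        differential_c=0.5, anticipator_c=0.3):
        # While the heater runs, the anticipator warms the sensing element,
        # so the switch sees a slightly high reading and opens early.
        sensed_c = room_temp_c + (anticipator_c if heater_on else 0.0)
        if heater_on and sensed_c >= setpoint_c + differential_c:
            return False                 # open: stop heating before overshoot
        if not heater_on and sensed_c <= setpoint_c - differential_c:
            return True                  # close: call for heat
        return heater_on                 # otherwise hold the present state

    print(thermostat_step(19.0, heater_on=False))   # True: too cold, heat on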

Thermometer

The thermometer, a device for measuring temperature, is used in many forms, basically divided into mechanical and electrical types. The best-known mechanical type is the liquid-in-glass thermometer, and an important electrical type is the resistance thermometer. To cover the full range of temperature measurement, from near Absolute Zero to thousands of degrees, other instruments are also used, such as the Bolometer, Pyrometer, Thermocouple, and thermopile. The temperature scales most commonly used on thermometers are the Celsius scale, the Fahrenheit scale, and the Kelvin scale. The first two are based on the freezing and boiling points of water, although the Celsius (C) scale is numerically more convenient than the Fahrenheit (F) scale because those two points are assigned the values 0 and 100, respectively. The Kelvin (K) scale has the widest scientific applications because 0 degrees on the scale corresponds to absolute zero.
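
The three scales are related by fixed formulas, exact by definition, as the short Python sketch below shows:

    # Celsius, Fahrenheit, and Kelvin conversions (exact by definition).
    def c_to_f(c): return c * 9.0 / 5.0 + 32.0
    def f_to_c(f): return (f - 32.0) * 5.0 / 9.0
    def c_to_k(c): return c + 273.15

    print(c_to_f(100.0))    # 212.0 F, the boiling point of water
    print(c_to_k(-273.15))  # 0.0 K, absolute zero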

The liquid-in-glass thermometer consists of a small bulb reservoir and a calibrated fine-bore capillary tube. The liquid in the bulb rises or falls in the tube as it expands or contracts in response to temperature changes. The height of the column is measured against the markings on the tube. Mercury is the preferred liquid in quality thermometers. It freezes at -38.9 degrees C (-38 degrees F) and boils at 357 degrees C (675 degrees F). The accuracy of industrial mercury thermometers is 1 percent of the column. Other liquids used are dye-colored alcohol, toluene, and pentane, the last with a freezing point of -200 degrees C (-328 degrees F).

A second type of liquid-expansion thermometer consists of a liquid-filled metal bulb and capillary tube attached to either a spiral tube or a bellows. As the temperature of the bulb changes, the pressure or the volume of the liquid changes, moving an indicator across a scale.

A typical gas or vapor thermometer similarly consists of a bulb and a capillary tube connected to a pressure-measuring device. The gas thermometer is simple, rugged, and accurate and has a wide response range. Vapor-pressure thermometers respond to the pressure exerted by saturated vapor in equilibrium with a volatile liquid and are similar to the gas thermometer in construction. The principal advantage of the vapor-pressure type is the large change in pressure obtained for small temperature changes, resulting in high sensitivity.

Electrical resistance thermometers operate on the principle that the resistivity of most metals increases with increased temperature. This principle was discovered in 1821 by Sir Humphry Davy, but this phenomenon was not used until the construction of a platinum resistance thermometer in 1861 by the German engineer Ernst W. von Siemens. In 1886 the British physicist Hugh L. Callendar proposed this thermometer as a new standard of accuracy in temperature measurement. Today the U.S. National Institute of Standards and Technology uses high-precision platinum resistance thermometers, accurate to 0.001 degrees C, to define the key points on the International Practical Temperature Scale, established in 1968. Both copper-wire and nickel-wire resistance thermometers are much lower in cost than platinum thermometers and have a precision of 0.05 degrees C. In the range of 10 degrees to 2 degrees above absolute zero, impurity-doped germanium resistance thermometers are used, calibrated against the temperature of liquid helium.
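
Over moderate ranges the resistance of a metal element rises nearly linearly with temperature, which is the basis of these instruments. A minimal Python sketch using the common linear approximation R(T) = R0(1 + alpha(T - T0)); the platinum coefficient of about 0.00385 per degree C is a standard figure assumed here, not one given in the text:

    # Linear approximation R(T) = R0 * (1 + alpha * (T - T0)) for a metal
    # sensing element; alpha for platinum is a standard figure assumed here.
    def rtd_resistance(temp_c, r0_ohms=100.0, alpha=0.00385, t0_c=0.0):
        return r0_ohms * (1.0 + alpha * (temp_c - t0_c))

    print(rtd_resistance(0.0))      # 100.0 ohms at 0 degrees C
    print(rtd_resistance(100.0))    # about 138.5 ohms at 100 degrees C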

Ultraviolet

Electromagnetic radiation having wavelengths shorter than visible light but longer than X rays is called ultraviolet light, or ultraviolet radiation. This light is invisible to human eyes and is also known as black light. The ultraviolet region of the spectrum was discovered in 1801 by German physicist Johann Ritter in the course of photochemical experiments.

Ultraviolet light is generally divided into the near, far, and extreme ultraviolet regions. The extreme wavelengths, which are especially harmful to life, are strongly absorbed by the Earth's atmosphere, particularly by the Ozone Layer.

Ultraviolet light is created by the same processes that generate visible light--transitions in atoms in which an electron in a high-energy state returns to a less energetic state. Fluorescent and mercury-vapor lamps produce large amounts of ultraviolet light, which is filtered out when the lamps are intended for optical use. Visible light may instead be filtered out to achieve black-light effects through the induced Luminescence of objects by ultraviolet light.

Biological effects of ultraviolet light include sunburn and tanning. Excessive exposure has been linked to the development of skin cancers and of cataracts in the eye. Far ultraviolet light, which has the ability to destroy certain kinds of bacteria, is used for sterilizing foodstuffs and medical equipment.

Voltage

Whether as an emf or an electric potential, voltage is a measure of the ability of a system to do work on a unit amount of charge by electrical means. Voltage is a better-known quantity than electric field. For instance, voltages measured in an electrocardiogram peak at 5 millivolts; many are familiar with the 115-volt potential of a house. The potential between a cloud and the ground just before a typical lightning bolt is on the order of tens of millions of volts.

Devices for developing or changing potential or emf include batteries, generators, transformers, and Van De Graaff Generators. Sometimes high voltages are needed. For instance, the electron beams in television tubes require more than 30,000 volts. Electrons "falling" through such a potential reach velocities as high as one-third the speed of light and have sufficient energy to cause a spot of light on the screen. Such high potentials may be developed from low alternating potentials by using a transformer.
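
The one-third-of-light-speed figure can be checked from the energy balance eV = mv^2/2; relativistic corrections at 30,000 volts are still modest, so the classical Python estimate below, using standard physical constants, lands close:

    import math

    # Classical estimate of electron speed after "falling" through
    # 30,000 volts: e * V = m * v^2 / 2, with standard values for e and m.
    e_coulombs = 1.602e-19
    m_kg = 9.109e-31
    c_ms = 2.998e8
    volts = 30000.0

    v_ms = math.sqrt(2.0 * e_coulombs * volts / m_kg)
    print(f"v = {v_ms:.2e} m/s, about {v_ms / c_ms:.2f} c")   # roughly c/3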

By scuffing shoes on a carpet on a dry day, an electric potential of more than 20,000 volts can be developed, resulting in a spark.

Watt

The watt is the unit of power ordinarily employed in mechanics and electricity. One watt equals 1 joule per second, or 10 million ergs per second, and 746 watts equal 1 horsepower (h.p.). The power in watts developed in an electrical circuit is equal to the potential (volts) times the current (amperes). In heat measurement, which customarily uses calories and Btu's as energy units, 1 watt equals 0.239 calories per second or 3.412 Btu/h. Multiplying a unit of power by a time unit, such as is done to obtain kilowatt hours (kW h), gives units of energy.
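
Applying these conversion factors in Python to a sample figure (the 1,500-watt appliance is an assumption, not from the text):

    # The conversion factors stated above, applied to an assumed appliance.
    WATTS_PER_HP = 746.0
    CAL_PER_S_PER_WATT = 0.239
    BTU_PER_H_PER_WATT = 3.412

    power_w = 1500.0                          # assumed 1,500-watt heater
    print(power_w / WATTS_PER_HP)             # horsepower
    print(power_w * CAL_PER_S_PER_WATT)       # calories per second
    print(power_w * BTU_PER_H_PER_WATT)       # Btu per hour
    print(power_w * 2.0 / 1000.0)             # kilowatt hours after two hours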
