In electrical engineering, the power factor of an AC electrical power system is defined as the ratio of the real power absorbed by the load to the apparent power flowing in the circuit, and is a dimensionless number in the closed interval of −1 to 1. A power factor of magnitude less than one indicates that the voltage and current are not in phase, reducing the average product of the two. Real power is the average of the instantaneous product of voltage and current and represents the capacity of the electricity for performing work. Apparent power is the product of the RMS values of current and voltage. Due to energy stored in the load and returned to the source, or due to a non-linear load that distorts the wave shape of the current drawn from the source, the apparent power may be greater than the real power. A negative power factor occurs when the device (which is normally the load) generates power, which then flows back towards the source.
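These definitions can be illustrated with a short numerical sketch: given synchronously sampled voltage and current over a whole number of cycles, real power is the average of the instantaneous product and apparent power is the product of the two RMS values. The function and sample waveforms below are illustrative, not from any particular standard.

```python
import math

def power_factor(v_samples, i_samples):
    """Power factor from synchronously sampled voltage and current.

    Real power is the average of the instantaneous product v*i;
    apparent power is the product of the RMS voltage and RMS current.
    """
    n = len(v_samples)
    real = sum(v * i for v, i in zip(v_samples, i_samples)) / n
    v_rms = math.sqrt(sum(v * v for v in v_samples) / n)
    i_rms = math.sqrt(sum(i * i for i in i_samples) / n)
    return real / (v_rms * i_rms)

# One full cycle, with the current lagging the voltage by 45 degrees.
# For pure sinusoids the power factor equals cos(45°) ≈ 0.707.
N = 1000
v = [math.sin(2 * math.pi * k / N) for k in range(N)]
i = [math.sin(2 * math.pi * k / N - math.pi / 4) for k in range(N)]
print(round(power_factor(v, i), 3))  # ≈ 0.707
```

For purely sinusoidal waveforms this reduces to the cosine of the phase angle between voltage and current; with distorted (non-sinusoidal) currents the same ratio still applies but is no longer a simple phase cosine.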
In an electric power system, a load with a low power factor draws more current than a load with a high power factor for the same amount of useful power transferred. The higher currents increase the energy lost in the distribution system, and require larger wires and other equipment. Because of the costs of larger equipment and wasted energy, electrical utilities will usually charge a higher cost to industrial or commercial customers where there is a low power factor.
Power-factor correction increases the power factor of a load, improving efficiency for the distribution system to which it is attached. Linear loads with low power factor (such as induction motors) can be corrected with a passive network of capacitors or inductors. Non-linear loads, such as rectifiers, distort the current drawn from the system. In such cases, active or passive power factor correction may be used to counteract the distortion and raise the power factor. The devices for correction of the power factor may be at a central substation, spread out over a distribution system, or built into power-consuming equipment.
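For linear inductive loads, passive correction amounts to choosing a capacitor that supplies the reactive power the load would otherwise draw from the system. A common sizing rule, sketched below with hypothetical example values (a 10 kW load on a 400 V, 50 Hz supply), uses Q = P·(tan(arccos(pf_old)) − tan(arccos(pf_new))):

```python
import math

def correction_capacitor(p_watts, pf_old, pf_new, v_rms, freq_hz):
    """Capacitance (farads) needed to raise a lagging power factor.

    The capacitor supplies the reactive power difference
    Q = P * (tan(acos(pf_old)) - tan(acos(pf_new))),
    and a capacitor across a supply of RMS voltage V provides
    Q = 2 * pi * f * C * V**2 vars.
    """
    q_var = p_watts * (math.tan(math.acos(pf_old)) - math.tan(math.acos(pf_new)))
    return q_var / (2 * math.pi * freq_hz * v_rms ** 2)

# Hypothetical example: 10 kW inductive load at PF 0.70,
# corrected to 0.95 on a 400 V, 50 Hz supply.
c = correction_capacitor(10_000, 0.70, 0.95, 400, 50)
print(f"{c * 1e6:.0f} uF")
```

Targets slightly below unity (such as 0.95) are typical in practice, since overcorrection makes the load capacitive and can raise the local voltage.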
Importance of power factor in distribution systems
75 Mvar capacitor bank in a 150 kV substation
Power factors below 1.0 require a utility to generate more than the minimum volt-amperes necessary to supply the real power (watts). This increases generation and transmission costs. For example, if the load power factor were as low as 0.7, the apparent power would be 1.4 times the real power used by the load. Line current in the circuit would also be 1.4 times the current required at 1.0 power factor, so the losses in the circuit would be doubled (since they are proportional to the square of the current). Alternatively, all components of the system such as generators, conductors, transformers, and switchgear would have to be increased in size (and cost) to carry the extra current. Conversely, when the power factor is close to unity, more load can be connected to a transformer of the same kVA rating.
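The arithmetic behind the 0.7 example can be checked directly: at a fixed real power and voltage, line current scales as 1/pf, and resistive (I²R) line loss scales with the square of the current.

```python
# At fixed real power P and voltage V, line current is I = P / (V * pf),
# so relative to unity power factor the current grows by 1/pf and the
# resistive line loss by (1/pf)**2.
pf_low, pf_unity = 0.7, 1.0

current_ratio = pf_unity / pf_low   # ≈ 1.43x the current
loss_ratio = current_ratio ** 2     # ≈ 2.04x the I²R loss

print(round(current_ratio, 2), round(loss_ratio, 2))  # 1.43 2.04
```

This is why the text's "1.4 times the current" translates into roughly doubled losses rather than a 40% increase.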
Utilities typically apply additional charges to commercial customers whose power factor falls below some limit, typically 0.9 to 0.95. Engineers are often interested in the power factor of a load as one of the factors that affect the efficiency of power transmission.
With the rising cost of energy and concerns over the efficient delivery of power, active PFC has become more common in consumer electronics. Current Energy Star guidelines for computers call for a power factor of ≥ 0.9 at 100% of rated output in the PC's power supply. According to a white paper authored by Intel and the U.S. Environmental Protection Agency, PCs with internal power supplies will require the use of active power factor correction to meet the ENERGY STAR 5.0 Program Requirements for Computers.
In Europe, EN 61000-3-2 requires that power factor correction be incorporated into consumer products.
Small customers, such as households, are not usually charged for reactive power, so power-factor metering equipment is not installed for such customers.