In music theory, an interval is a difference in pitch between two sounds.[1] An interval may be described as horizontal, linear, or melodic if it refers to successively sounding tones, such as two adjacent pitches in a melody, and vertical or harmonic if it pertains to simultaneously sounding tones, such as in a chord.[2][3]
In Western music, intervals are most commonly differences between notes of a diatonic scale. Intervals between successive notes of a scale are also known as scale steps. The smallest of these intervals is a semitone. Intervals smaller than a semitone are called microtones; they can be formed using the notes of various kinds of non-diatonic scales. Some of the very smallest ones are called commas, which describe small discrepancies, observed in some tuning systems, between enharmonically equivalent notes such as C♯ and D♭. Intervals can be arbitrarily small, and even imperceptible to the human ear.
In physical terms, an interval is the ratio between the frequencies of two sounds. For example, any two notes an octave apart have a frequency ratio of 2:1. This means that successive increments of pitch by the same interval result in an exponential increase of frequency, even though the human ear perceives this as a linear increase in pitch. For this reason, intervals are often measured in cents, a unit derived from the logarithm of the frequency ratio.
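As an illustration of this logarithmic measure, the size in cents of the interval between a lower frequency $f_1$ and a higher frequency $f_2$ can be written as

$$ n = 1200 \cdot \log_2\!\left(\frac{f_2}{f_1}\right), $$

so the 2:1 octave measures exactly 1200 cents, and each semitone of twelve-tone equal temperament measures exactly 100 cents.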
In Western music theory, the most common naming scheme for intervals describes two properties of the interval: the quality (perfect, major, minor, augmented, diminished) and number (unison, second, third, etc.). Examples include the minor third and the perfect fifth. These names identify not only the difference in semitones between the upper and lower notes but also how the interval is spelled. The importance of spelling stems from the historical practice of differentiating the frequency ratios of enharmonic intervals such as G–G♯ and G–A♭.[4]
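The sketch below (in Python, with the table and the helper `semitones` invented here purely for illustration, and with sizes assuming twelve-tone equal temperament rather than the cited sources) shows how each quality–number pair corresponds to a fixed number of semitones, while enharmonic spellings such as the augmented unison G–G♯ and the minor second G–A♭ keep distinct names even though they span the same number of semitones:

```python
# Illustrative mapping from (quality, number) interval names to their size in
# semitones under twelve-tone equal temperament. Names and structure are a
# demonstration sketch, not a standard library or notation API.
SEMITONES = {
    ("perfect", "unison"): 0,
    ("augmented", "unison"): 1,   # e.g. G–G♯
    ("minor", "second"): 1,       # e.g. G–A♭
    ("major", "second"): 2,
    ("minor", "third"): 3,
    ("major", "third"): 4,
    ("perfect", "fourth"): 5,
    ("perfect", "fifth"): 7,
    ("minor", "sixth"): 8,
    ("major", "sixth"): 9,
    ("minor", "seventh"): 10,
    ("major", "seventh"): 11,
    ("perfect", "octave"): 12,
}

def semitones(quality: str, number: str) -> int:
    """Return the equal-tempered size in semitones of a named interval."""
    return SEMITONES[(quality, number)]

# The augmented unison and the minor second are enharmonically equivalent in
# equal temperament, yet their names record different spellings.
assert semitones("augmented", "unison") == semitones("minor", "second") == 1
assert semitones("minor", "third") == 3
assert semitones("perfect", "fifth") == 7
```

The point of the two entries of size 1 is that the name records the spelling, not just the distance: in tuning systems that distinguish enharmonic notes, G–G♯ and G–A♭ would correspond to slightly different frequency ratios.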