Can a sound wave kill? Forget about discrete audio cards. The integrated one is enough for everyone. Let's move on to checking the sound card

February 18, 2016

The world of home entertainment is quite varied and can include watching movies on a good home theater system, fascinating and exciting gameplay, or listening to music. As a rule, everyone finds something of their own in this area or combines everything at once. But whatever a person's goals in organizing their leisure time, and whatever extremes they go to, all these pursuits are firmly connected by one simple and understandable word: "sound". Indeed, in all of the above cases we are led by the hand by sound. But the question is not so simple and trivial, especially when the goal is to achieve high-quality sound in a room or any other conditions. To do this, it is not always necessary to buy expensive hi-fi or hi-end components (although they will be very useful); a good knowledge of physical theory is often sufficient, and it can eliminate most of the problems that arise for anyone who sets out to obtain high-quality sound reproduction.

Next, the theory of sound and acoustics will be considered from the point of view of physics. I will try to make this as accessible as possible to anyone who is perhaps far from physical laws and formulas but nevertheless passionately dreams of building a perfect acoustic system. I do not presume that you must know these theories thoroughly to achieve good results in this area at home (or in a car, for example), but understanding the basics will help you avoid many stupid and absurd mistakes and will let you get the maximum sound quality from a system of any level.

General theory of sound and musical terminology

What is sound? Sound is the sensation perceived by the organ of hearing, the ear (the phenomenon itself exists without the ear's participation in the process, but this way it is easier to understand), which occurs when the eardrum is excited by a sound wave. The ear in this case acts as a "receiver" of sound waves of various frequencies.
A sound wave is essentially a sequential series of compressions and rarefactions of the medium (most often air, under normal conditions) at various frequencies. Sound waves are oscillatory in nature, caused and produced by the vibration of some body. A classical sound wave can arise and propagate in three kinds of elastic media: gaseous, liquid and solid. When a sound wave arises in one of these media, certain changes inevitably occur in the medium itself, for example a change in air density or pressure, movement of air particles, and so on.

Since a sound wave is oscillatory in nature, it has such a characteristic as frequency. Frequency is measured in hertz (named after the German physicist Heinrich Rudolf Hertz) and denotes the number of oscillations in a period of time equal to one second. For example, a frequency of 20 Hz means a cycle of 20 oscillations per second. The subjective pitch of a sound also depends on its frequency: the more sound vibrations occur per second, the "higher" the sound appears. The sound wave also has one more most important characteristic, the wavelength. The wavelength is the distance a wave of a given frequency travels during one period of its oscillation. For example, the wavelength of the lowest sound in the human audible range, at 20 Hz, is 16.5 meters, while the wavelength of the highest sound, at 20,000 Hz, is 1.7 centimeters.
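
To make the relationship concrete, here is a minimal Python sketch of the formula wavelength = speed of sound / frequency (the 16.5 m figure above corresponds to a slightly lower speed of sound of about 330 m/s; the sketch uses the 343 m/s value given later in this article):

```python
# Wavelength = speed of sound / frequency: a quick check of the figures above.
SPEED_OF_SOUND = 343.0  # m/s, air at 20 °C (see the speed-of-sound table below)

for frequency_hz in (20, 1000, 20000):
    wavelength_m = SPEED_OF_SOUND / frequency_hz
    print(f"{frequency_hz:>6} Hz -> wavelength {wavelength_m:.4f} m")
# 20 Hz -> ~17 m, 20 000 Hz -> ~1.7 cm
```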

The human ear is designed to perceive waves only in a limited range of approximately 20 Hz to 20,000 Hz (depending on the individual, some can hear a little more, some a little less). This does not mean that sounds below or above these frequencies do not exist; they are simply not perceived by the human ear, as they fall outside the audible range. Sound above the audible range is called ultrasound; sound below it is called infrasound. Some animals can perceive ultrasound and infrasound, and some even use these ranges for orientation in space (bats, dolphins). If sound passes through a medium that is not in direct contact with the human hearing organ, it may not be heard at all or may be greatly weakened.

In musical terminology there are such important designations as the octave, tone and overtone. An octave is an interval in which the frequency ratio between sounds is 1 to 2. An octave is usually very easy to distinguish by ear, and sounds within this interval can be very similar to each other. An octave can also be described as a sound that vibrates twice as fast as another sound in the same period of time. For example, 800 Hz is nothing other than the higher octave of 400 Hz, and 400 Hz in turn is the next octave up from 200 Hz. The octave, in turn, consists of tones and overtones. Oscillations of a single frequency in a harmonic sound wave are perceived by the human ear as a musical tone. High-frequency vibrations are heard as high-pitched sounds, low-frequency vibrations as low-pitched ones. The human ear can clearly distinguish sounds that differ by one tone (in the range up to 4000 Hz). Despite this, music uses an extremely small number of tones; this is explained by the principle of harmonic consonance, where everything is based on the principle of octaves.

Let's consider the theory of musical tones using the example of a string stretched in a certain way. Depending on the tension force, such a string will be "tuned" to one specific frequency. When this string is acted on with a certain force, setting it vibrating, one specific tone of sound will be consistently heard, and we will hear the desired tuning frequency. This sound is called the fundamental tone. In the musical field, the frequency of the note "A" of the first octave, 440 Hz, is officially accepted as the reference tone. However, most musical instruments never reproduce a pure fundamental tone alone; it is inevitably accompanied by overtones. Here it is appropriate to recall an important concept of musical acoustics, the timbre of a sound. Timbre is the feature of musical sounds that gives musical instruments and voices their unique, recognizable character, even when comparing sounds of the same pitch and volume. The timbre of each musical instrument depends on how the sound energy is distributed among the overtones at the moment the sound appears.

Overtones give the fundamental tone a specific coloring by which we can easily identify and recognize a specific instrument, as well as clearly distinguish its sound from another instrument. Overtones come in two types: harmonic and inharmonic. Harmonic overtones are, by definition, integer multiples of the fundamental frequency. If, on the contrary, the overtones are not multiples of it and noticeably deviate from such values, they are called inharmonic. In music, inharmonic overtones play practically no role, so the term is reduced to the concept of "overtone", meaning the harmonic kind. In some instruments, such as the piano, the fundamental tone does not even have time to form: in a short period of time, the sound energy of the overtones rises and then just as rapidly decays. Many instruments create what is called a "transition tone" effect, where the energy of certain overtones is highest at a certain moment, usually at the very beginning, and then changes abruptly and passes to other overtones. The frequency range of each instrument can be considered separately and is usually limited to the fundamental frequencies that that particular instrument is capable of producing.
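
As an illustration, here is a minimal Python sketch (using numpy) of how harmonic overtones shape timbre: the same 440 Hz fundamental mentioned above with two different, purely illustrative overtone amplitude sets gives two differently "colored" tones, as if from two different instruments:

```python
import numpy as np

sample_rate = 44100
t = np.linspace(0, 1.0, sample_rate, endpoint=False)
fundamental_hz = 440.0  # the note "A" of the first octave

def tone(overtone_amplitudes):
    # Harmonic overtones are integer multiples of the fundamental frequency.
    return sum(amp * np.sin(2 * np.pi * fundamental_hz * n * t)
               for n, amp in enumerate(overtone_amplitudes, start=1))

bright = tone([1.0, 0.7, 0.5, 0.3])   # strong upper harmonics
mellow = tone([1.0, 0.2, 0.05, 0.0])  # energy concentrated in the fundamental
```

Both signals have the same pitch and roughly the same loudness, yet they would sound distinctly different: that difference is the timbre.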

In sound theory there is also the concept of NOISE. Noise is any sound created by a combination of sources that are not consistent with each other. Everyone is familiar with the noise of tree leaves swayed by the wind, and so on.

What determines the loudness of a sound? Obviously, it depends directly on the amount of energy carried by the sound wave. To quantify loudness there is the concept of sound intensity. Sound intensity is defined as the flow of energy passing through some area of space (for example, a square centimeter) per unit of time (for example, per second). During normal conversation, the intensity is approximately 10⁻⁹ to 10⁻¹⁰ W/cm². The human ear can perceive sounds over a fairly wide sensitivity range, while its sensitivity is not uniform across the sound spectrum. The frequency range of 1000 Hz to 4000 Hz, which covers most of human speech, is perceived best.

Because sounds vary so greatly in intensity, it is more convenient to treat intensity as a logarithmic quantity and measure it in decibels (after the Scottish-born scientist Alexander Graham Bell). The lower threshold of hearing for the human ear is 0 dB; the upper, 120 dB, is also called the "pain threshold". The upper limit of sensitivity is likewise not the same for all frequencies: low-frequency sounds must have a much greater intensity than high ones to reach the pain threshold. For example, at a low frequency of 31.5 Hz the pain threshold occurs at a sound intensity level of 135 dB, while at 2000 Hz pain appears already at 112 dB. There is also the concept of sound pressure, which expands the usual explanation of the propagation of a sound wave in air. Sound pressure is the variable excess pressure that arises in an elastic medium as a sound wave passes through it.
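
The decibel scale described here is a simple logarithm of the intensity ratio; a minimal Python sketch, using the standard reference intensity I0 = 10⁻¹² W/m² (which corresponds to 0 dB):

```python
import math

I0 = 1e-12  # W/m^2, reference intensity at the threshold of hearing

def intensity_level_db(intensity_w_m2):
    # Sound intensity level in decibels relative to the hearing threshold.
    return 10 * math.log10(intensity_w_m2 / I0)

print(intensity_level_db(1e-12))  # 0 dB, threshold of hearing
print(intensity_level_db(1e-6))   # 60 dB, roughly normal conversation
print(intensity_level_db(1.0))    # 120 dB, the pain threshold
```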

Wave nature of sound

To better understand how a sound wave is generated, imagine a classic speaker inside a pipe filled with air. If the speaker makes a sharp movement forward, the air in the immediate vicinity of the diffuser is momentarily compressed. The air then expands, pushing the compressed air region along the pipe.
This wave movement becomes sound when it reaches the auditory organ and "excites" the eardrum. When a sound wave arises in a gas, excess pressure and excess density are created; the particles oscillate about their positions, while the disturbance itself propagates at a constant speed. About sound waves it is important to remember that the substance does not move along with the sound wave; only a temporary disturbance of the air masses occurs.

If you imagine a piston suspended in free space on a spring and making repeated "back and forth" movements, such oscillations are called harmonic or sinusoidal (if you plot the wave as a graph, you get a pure sinusoid with repeated falls and rises). If we imagine a speaker in a pipe (as in the example described above) performing harmonic oscillations, then when the speaker moves "forward" the familiar compression of air occurs, and when it moves "backward" the opposite effect of rarefaction occurs. A wave of alternating compression and rarefaction will propagate through the pipe. The distance along the pipe between adjacent maxima or minima (points of identical phase) is called the wavelength. If the particles oscillate parallel to the direction of wave propagation, the wave is called longitudinal; if they oscillate perpendicular to it, the wave is called transverse. Typically, sound waves in gases and liquids are longitudinal, while in solids both types of waves can occur. Transverse waves in solids arise due to resistance to change of shape. The main difference between these two types of waves is that a transverse wave has the property of polarization (the oscillations occur in a certain plane), while a longitudinal wave does not.

Speed of sound

The speed of sound directly depends on the characteristics of the medium in which it propagates. It is determined by two properties of the medium: the elasticity and the density of the material. The speed of sound in solids depends directly on the type of material and its properties. In gaseous media the speed depends on only one type of deformation of the medium: compression-rarefaction. The pressure change in a sound wave occurs without heat exchange with the surrounding particles and is called adiabatic.
The speed of sound in a gas depends mainly on temperature: it increases with rising temperature and decreases with falling temperature. The speed of sound in a gaseous medium also depends on the size and mass of the gas molecules themselves: the smaller the mass and size of the particles, the greater the "conductivity" of the wave and, accordingly, the greater the speed.

In liquid and solid media, the principle of propagation is similar to that in air: compression-rarefaction. But in these media, besides the same dependence on temperature, the density of the medium and its composition/structure are quite important. The lower the density of the substance, the higher the speed of sound, and vice versa. The dependence on the composition of the medium is more complex and is determined in each specific case, taking into account the arrangement and interaction of the molecules and atoms.

Speed of sound in air at 20 °C: 343 m/s
Speed of sound in distilled water at 20 °C: 1481 m/s
Speed of sound in steel at 20 °C: 5000 m/s
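
The temperature dependence for air can be sketched in Python with the common ideal-gas approximation c = 331.3 · sqrt(1 + T / 273.15) (an approximation for dry air, not an exact tabulated value):

```python
import math

def speed_of_sound_air(temp_celsius):
    # Approximate speed of sound in dry air, m/s.
    return 331.3 * math.sqrt(1 + temp_celsius / 273.15)

for temp in (-20, 0, 20, 40):
    print(f"{temp:>4} °C: {speed_of_sound_air(temp):.1f} m/s")
# At 20 °C this gives ~343 m/s, matching the table above.
```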

Standing waves and interference

When a speaker creates sound waves in a confined space, the waves inevitably reflect from the boundaries. As a result, an interference effect most often arises, when two or more sound waves are superimposed on each other. Special cases of interference are the formation of (1) beats and (2) standing waves. Beats arise when waves with similar frequencies and amplitudes are added together. When two waves of similar frequency are superimposed, at some moments their amplitude peaks coincide "in phase", and at other moments the peaks of one wave coincide with the troughs of the other, "in antiphase". This is exactly what characterizes sound beats. It is important to remember that, unlike standing waves, these phase coincidences do not occur constantly but at certain time intervals. To the ear, this pattern of beats is quite distinct and is heard as a periodic rise and fall in volume: when the peaks coincide, the volume increases, and when peaks meet troughs, the volume decreases.
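
A minimal Python sketch of beats (numpy assumed): summing two equal-amplitude sine waves at 440 Hz and 444 Hz produces a signal whose volume swells and fades at the difference frequency, four times per second:

```python
import numpy as np

sample_rate = 44100
t = np.linspace(0, 1.0, sample_rate, endpoint=False)

# Two close frequencies: their sum rises and falls at |f1 - f2| = 4 Hz.
mix = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 444 * t)
print(abs(440 - 444))  # 4 beats per second
```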

Standing waves arise when two waves of the same amplitude, phase and frequency are superimposed and, as they "meet", one moves in the forward direction and the other in the opposite direction. In the region of space where a standing wave has formed, a pattern of superimposed amplitudes appears, with alternating maxima (the so-called antinodes) and minima (the so-called nodes). When this phenomenon occurs, the frequency, phase and attenuation coefficient of the wave at the point of reflection are extremely important. Unlike traveling waves, a standing wave transfers no energy, because the forward and backward waves that form it carry energy in equal amounts in both directions. To clearly understand the occurrence of a standing wave, let's imagine an example from home acoustics. Say we have floor-standing speakers in some limited space (a room). Having them play a song with a lot of bass, let's try changing the listener's position in the room. A listener who ends up in a zone of minimum (cancellation) of the standing wave will feel as if there is very little bass, while a listener in a zone of maximum (reinforcement) gets the opposite effect of a significant boost in the bass region. The effect is also observed in all octaves of the base frequency: for example, if the base frequency is 440 Hz, the "reinforcement" or "cancellation" will also be observed at 880 Hz, 1760 Hz, 3520 Hz, and so on.
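
For the room example, the frequencies at which standing waves form between two parallel walls can be estimated with the textbook axial-mode formula f_n = n·c/(2L); a rough Python sketch (real rooms behave far less cleanly):

```python
SPEED_OF_SOUND = 343.0  # m/s

def axial_mode_frequencies(wall_distance_m, count=5):
    # A standing wave fits between parallel walls when an integer number of
    # half-wavelengths spans the distance L: f_n = n * c / (2 * L).
    return [n * SPEED_OF_SOUND / (2 * wall_distance_m)
            for n in range(1, count + 1)]

print(axial_mode_frequencies(5.0))  # walls 5 m apart: ~34, 69, 103, 137, 172 Hz
```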

Resonance phenomenon

Most solid bodies have a natural resonance frequency. It is quite easy to understand this effect using the example of an ordinary pipe open at only one end. Imagine that a speaker is attached to the other end of the pipe and can play one constant frequency, which can also be changed later. The pipe has its own resonance frequency; in simple terms, this is the frequency at which the pipe "resonates" or makes its own sound. If the frequency of the speaker (after adjustment) coincides with the resonance frequency of the pipe, the volume will increase several times over. This happens because the loudspeaker excites vibrations of the air column in the pipe with significant amplitude once this "resonant frequency" is found and the addition effect occurs. The resulting phenomenon can be described as follows: in this example the pipe "helps" the speaker by resonating at a specific frequency, their efforts add up and "result" in an audibly loud effect. This phenomenon is easy to see in musical instruments, since the design of most of them contains elements called resonators. It is not hard to guess what serves to amplify a certain frequency or musical tone: for example, a guitar body with the sound hole coupled to the body's volume; the design of the flute tube (and all pipes in general); or the cylindrical shape of a drum body, which is itself a resonator of a certain frequency.
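
For a pipe closed at one end, as in the example above, the resonance frequencies follow the standard quarter-wavelength formula; a Python sketch that ignores the end correction a real pipe would need:

```python
SPEED_OF_SOUND = 343.0  # m/s

def closed_pipe_resonances(pipe_length_m, count=4):
    # A pipe closed at one end resonates only at odd harmonics:
    # f_k = (2k - 1) * c / (4 * L).
    return [(2 * k - 1) * SPEED_OF_SOUND / (4 * pipe_length_m)
            for k in range(1, count + 1)]

print(closed_pipe_resonances(0.5))  # 0.5 m pipe: ~172, 515, 858, 1201 Hz
```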

Frequency spectrum of sound and frequency response

Since in practice waves of a single frequency practically never occur, it becomes necessary to decompose the entire audible sound spectrum into overtones or harmonics. For this purpose there are graphs that display the relative energy of sound vibrations as a function of frequency. Such a graph is called a sound frequency spectrum. Frequency spectra come in two types: discrete and continuous. A discrete spectrum displays individual frequencies separated by blank spaces; a continuous spectrum contains all sound frequencies at once.
In music and acoustics, the most commonly used graph is the amplitude-frequency response (abbreviated "frequency response"). This graph shows the amplitude of sound vibrations as a function of frequency across the entire audible spectrum (20 Hz to 20 kHz). Looking at such a graph, it is easy to see, for example, the strengths or weaknesses of a particular speaker or of an acoustic system as a whole: the strongest areas of energy output, frequency dips and peaks, attenuation, and the steepness of roll-off.
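
In practice such a spectrum is obtained with the Fourier transform; a minimal Python sketch (numpy assumed) that decomposes a one-second test signal of two tones into a discrete spectrum:

```python
import numpy as np

sample_rate = 8000
t = np.arange(sample_rate) / sample_rate
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

# Amplitude spectrum; with a 1-second signal, bin k corresponds to k Hz.
amplitudes = np.abs(np.fft.rfft(signal)) / len(signal) * 2
print(amplitudes[440])   # ~1.0, the 440 Hz component
print(amplitudes[1000])  # ~0.5, the 1000 Hz component
```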

Propagation of sound waves, phase and antiphase

Sound waves propagate in all directions from the source. The simplest example for understanding this phenomenon is a pebble thrown into water.
From the place where the stone fell, waves spread across the surface of the water in all directions. Now imagine a speaker in a certain volume, say a closed box, connected to an amplifier and playing some musical signal. It is easy to notice (especially with a powerful low-frequency signal such as a bass drum) that the speaker makes a rapid movement "forward" and then the same rapid movement "backward". It remains to understand that when the speaker moves forward, it emits a sound wave that we hear afterwards. But what happens when the speaker moves backward? Paradoxically, the same thing: the speaker makes the same sound, only it propagates entirely within the volume of the box, without going beyond its limits (the box is closed). In general, the above example lets one observe quite a few interesting physical phenomena, the most significant of which is the concept of phase.

The sound wave that the speaker emits toward the listener is "in phase". The reverse wave, which goes into the volume of the box, is correspondingly in antiphase. It remains to understand what these concepts mean. The phase of a signal describes the state of the sound pressure at the current moment in time at some point in space. The easiest way to understand phase is with the example of music played by an ordinary floor-standing stereo pair of home speakers. Imagine that two such floor-standing speakers are installed in a room and playing. Both speakers then reproduce a synchronous signal of variable sound pressure, and the sound pressure of one speaker adds to the sound pressure of the other. This effect occurs because the signals from the left and right speakers are reproduced synchronously; in other words, the peaks and troughs of the waves emitted by the left and right speakers coincide.

Now imagine that the sound pressures still vary in the same way (have not undergone changes), but are now opposite to each other. This can happen if you connect one of the two speaker systems in reverse polarity (the "+" cable from the amplifier to the "-" terminal of the speaker, and the "-" cable from the amplifier to the "+" terminal). In this case, the opposite signal will produce a pressure difference, which can be represented in numbers as follows: the left speaker creates a pressure of "1 Pa" and the right speaker creates a pressure of "minus 1 Pa". As a result, the total sound volume at the listener's position will be zero. This phenomenon is called antiphase. Looking at the example in more detail, it turns out that two speakers playing "in phase" create identical regions of air compression and rarefaction, effectively helping each other. In an idealized antiphase, the region of compressed air created by one speaker coincides with a region of rarefied air created by the second speaker. This looks approximately like mutual, synchronous cancellation of the waves. True, in practice the volume does not drop to zero, and we will hear a highly distorted and weakened sound.

The most accessible way to describe this phenomenon is as two signals with the same oscillation frequency but shifted in time. It is convenient to picture this shift using ordinary round analog clocks. Imagine several identical round clocks hanging on a wall. When their second hands run synchronously, showing 30 seconds on one clock and 30 on another, this is an example of signals in phase. If the second hands move at the same speed but with an offset, for example 30 seconds on one clock and 24 seconds on another, this is a classic example of a phase shift. Phase is measured in degrees, within a virtual circle, in the same way. When the signals are shifted relative to each other by 180 degrees (half a period), classic antiphase is obtained. In practice, minor phase shifts often occur, which can also be measured in degrees and successfully eliminated.
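
The clock analogy translates directly into code: a minimal Python sketch (numpy assumed) summing two identical 100 Hz signals at different phase shifts, showing the doubling at 0 degrees and the cancellation at 180 degrees described above:

```python
import numpy as np

t = np.linspace(0, 0.1, 4410, endpoint=False)

for shift_deg in (0, 90, 180):
    shift = np.radians(shift_deg)
    total = (np.sin(2 * np.pi * 100 * t)
             + np.sin(2 * np.pi * 100 * t + shift))
    print(f"{shift_deg:>3} deg: peak amplitude {np.max(np.abs(total)):.2f}")
# 0 deg -> 2.00 (in phase), 90 deg -> ~1.41, 180 deg -> ~0.00 (antiphase)
```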

Waves can be plane or spherical. A plane wavefront propagates in only one direction and is rarely encountered in practice. A spherical wavefront is a simple type of wave that originates from a single point and travels in all directions. Sound waves have the property of diffraction, i.e. the ability to bend around obstacles and objects. The degree of bending depends on the ratio of the sound wavelength to the size of the obstacle or opening. Diffraction also occurs when there is an obstacle in the path of the sound. In this case, two scenarios are possible: (1) if the obstacle is much larger than the wavelength, the sound is reflected or absorbed (depending on the absorption of the material, the thickness of the obstacle, etc.), and an "acoustic shadow" zone forms behind the obstacle; (2) if the obstacle is comparable to the wavelength or even smaller, the sound diffracts to some extent in all directions. If a sound wave traveling in one medium hits the interface with another medium (for example, air and a solid), three scenarios are possible: (1) the wave is reflected from the interface; (2) the wave passes into the other medium without changing direction; (3) the wave passes into the other medium with a change of direction at the boundary, which is called "wave refraction".

The ratio of the excess pressure of a sound wave to the oscillatory volumetric velocity is called wave (acoustic) impedance. In simple words, the wave impedance of a medium can be described as its ability to absorb sound waves or to "resist" them. The reflection and transmission coefficients directly depend on the ratio of the wave impedances of the two media. Wave impedance in a gas is much lower than in water or in solids. Therefore, if a sound wave in air strikes a solid object or the surface of deep water, the sound is either reflected from the surface or absorbed to a large extent. This also depends on the thickness of the layer (of water or solid) on which the sound wave falls: when the layer of solid or liquid medium is thin, sound waves almost completely "pass through" it, and vice versa, when the layer is thick, the waves are more often reflected. Reflection of sound waves follows the well-known physical law: the angle of incidence equals the angle of reflection. When a wave from a medium of lower density hits the boundary with a medium of higher density, the phenomenon of refraction occurs: the sound wave bends after "meeting" the obstacle, necessarily accompanied by a change in speed. Refraction also depends on the temperature of the medium in which it occurs.
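
A minimal Python sketch of this dependence for a plane wave at normal incidence, using the characteristic impedance Z = density · speed of sound and the standard reflection formula (the densities and speeds are illustrative round numbers):

```python
def impedance(density_kg_m3, speed_m_s):
    # Characteristic acoustic impedance Z = density * speed of sound.
    return density_kg_m3 * speed_m_s

def reflected_energy_fraction(z1, z2):
    # Fraction of sound energy reflected at the boundary between two media.
    return ((z2 - z1) / (z2 + z1)) ** 2

z_air = impedance(1.2, 343)      # ~412 Pa*s/m
z_water = impedance(1000, 1481)  # ~1.5e6 Pa*s/m
print(reflected_energy_fraction(z_air, z_water))  # ~0.999: almost total reflection
```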

As sound waves propagate through space, their intensity inevitably decreases; one can say the waves attenuate and the sound weakens. Encountering this effect in practice is quite simple: for example, two people stand in a field at some close distance (a meter or closer) and say something to each other. If the distance between them is then increased (if they begin to move apart), the same conversational volume becomes less and less audible. This example clearly demonstrates the decrease in the intensity of sound waves. Why does this happen? The reasons are various processes of heat exchange, molecular interaction and internal friction. Most often, sound energy is converted into heat. Such processes inevitably arise in any of the three sound propagation media and can be characterized as absorption of sound waves.

The intensity and degree of absorption of sound waves depend on many factors, such as the pressure and temperature of the medium. Absorption also depends on the specific frequency of the sound. When a sound wave propagates through a liquid or gas, friction arises between particles, which is called viscosity. As a result of this friction at the molecular level, the wave's energy is converted from sound into heat; heat conduction within the medium contributes to the same conversion. Sound absorption in gases also depends on pressure (atmospheric pressure changes with altitude above sea level). As for the dependence of absorption on frequency: given the above dependences on viscosity and thermal conductivity, the higher the frequency of the sound, the higher the absorption. For example, at normal temperature and pressure in air, the absorption of a 5000 Hz wave is 3 dB/km, while the absorption of a 50,000 Hz wave is 300 dB/km.
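
The two figures above are consistent with the classical result that viscous and thermal absorption in air grows roughly as the square of frequency; a one-line check in Python:

```python
# A tenfold rise in frequency -> roughly a hundredfold rise in absorption.
absorption_5khz_db_km = 3.0
print(absorption_5khz_db_km * (50000 / 5000) ** 2)  # 300 dB/km at 50 kHz
```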

In solids, all the above dependences (thermal conductivity and viscosity) remain, but several more conditions are added. They are associated with the molecular structure of solid materials, which can vary and have its own inhomogeneities. Depending on this internal molecular structure, the absorption of sound waves can differ and depends on the specific material. When sound passes through a solid body, the wave undergoes a number of transformations and distortions, which most often leads to scattering and absorption of sound energy. At the molecular level, a dislocation effect can occur, when the sound wave displaces atomic planes, which then return to their original position. Alternatively, moving dislocations collide with dislocations perpendicular to them or with defects in the crystal structure, which slows them down and, as a consequence, absorbs some of the sound wave. However, the sound wave can also resonate with these defects, which distorts the original wave. The energy of the sound wave, at the moment of interaction with the elements of the material's molecular structure, is dissipated through internal friction processes.

In this article I will also try to analyze the features of human auditory perception and some subtleties and peculiarities of sound propagation.

Space is not a homogeneous nothingness. There are clouds of gas and dust between various objects. They are the remnants of supernova explosions and the site of star formation. In some areas, this interstellar gas is dense enough to propagate sound waves, but they are imperceptible to human hearing.

Is there sound in space?

When an object moves, be it a vibrating guitar string or an exploding firework, it affects nearby air molecules, as if pushing them. These molecules crash into their neighbors, and those in turn into the next ones. The movement travels through the air as a wave. When it reaches the ear, a person perceives it as sound.

When a sound wave passes through air, the pressure fluctuates up and down, like seawater in a storm. The number of these oscillations per second is called the frequency of the sound and is measured in hertz (1 Hz is one oscillation per second). The distance between adjacent pressure peaks is called the wavelength.

Sound can travel only in a medium where the wavelength is no smaller than the mean free path of the particles. Physicists call this the "mean free path": the average distance a molecule travels after colliding with one molecule and before interacting with the next. A dense medium can thus transmit sounds with short wavelengths, and vice versa.

Sounds with long wavelengths have frequencies the ear perceives as low tones. In a gas with a mean free path greater than 17 m (the wavelength of a 20 Hz sound), the only sound waves that can propagate are too low in frequency for humans to perceive. They are called infrasounds. If there were aliens with ears that could hear very low notes, they would know for certain whether sounds are audible in outer space.

Song of the Black Hole

Some 220 million light years away, at the center of a cluster of thousands of galaxies, hums the lowest note the universe has ever heard: 57 octaves below middle C, which is about a million billion times deeper than the lowest frequency a person can hear.

The deepest sound humans can detect has a cycle of about one vibration every 1/20 of a second. The black hole in the constellation Perseus has a cycle of about one oscillation every 10 million years.
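
A back-of-the-envelope check of these figures in Python (assuming middle C at about 262 Hz; popular accounts round the octave count, so the period comes out at the same order of magnitude as the 10 million years quoted above rather than exactly):

```python
# Each octave halves the frequency, so 57 octaves below middle C:
frequency_hz = 262 / 2 ** 57
print(frequency_hz)  # ~1.8e-15 Hz

period_years = 1 / frequency_hz / (3600 * 24 * 365.25)
print(period_years / 1e6)  # ~17 million years, the order quoted above
```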

This became known in 2003, when NASA's Chandra X-ray Observatory discovered something in the gas filling the Perseus cluster: concentric rings of light and darkness, like ripples in a pond. Astrophysicists say these are traces of incredibly low-frequency sound waves. The brighter rings are the crests of the waves, where the pressure on the gas is greatest; the darker rings are the troughs, where the pressure is lower.

Sound you can see

Hot, magnetized gas swirls around the black hole, like water circling a drain. As it moves, it creates a powerful electromagnetic field, strong enough to accelerate the gas near the edge of the black hole to almost the speed of light, turning it into huge outbursts called relativistic jets. These jets shove the gas in their path aside, and that effect gives rise to the eerie sounds from space.

The waves are carried through the Perseus cluster for hundreds of thousands of light years from their source, but sound can only travel as far as there is enough gas to carry it. So it stops at the edge of the gas cloud filling Perseus, which means its sound cannot be heard on Earth; only its effect on the gas cloud can be seen. It is like looking through space into a soundproof chamber.

Strange planet

Our planet emits a deep groan every time its crust shifts, and then there is no doubt that sounds travel toward space. An earthquake can create vibrations in the atmosphere with a frequency of one to five hertz. If it is strong enough, it can send infrasonic waves up through the atmosphere to the edge of space.

Of course, there is no clear boundary where the Earth's atmosphere ends and space begins: the air simply becomes gradually thinner until it eventually disappears altogether. At altitudes from 80 to 550 kilometers above the Earth's surface, the mean free path of a molecule is about a kilometer. That is roughly 59 times the 17-meter wavelength limit of human hearing, so the air at this altitude is about 59 times too thin to carry audible sound; it can transmit only long infrasonic waves.

When a magnitude 9.0 earthquake struck Japan's northeast coast in March 2011, seismographs around the world recorded its waves traveling through the Earth, and its vibrations set off low-frequency oscillations in the atmosphere. These oscillations traveled all the way up to where the Gravity Field and Steady-State Ocean Circulation Explorer (GOCE) satellite maps the Earth's gravity from a low orbit about 270 kilometers above the surface. And the satellite managed to record these sound waves.

GOCE carries very sensitive accelerometers on board that control an ion thruster, which helps keep the satellite in a stable orbit. In 2011, those accelerometers detected vertical shifts in the very thin atmosphere around the satellite, as well as wave-like shifts in air pressure, as the sound waves from the earthquake propagated past. The satellite's thruster corrected for the displacement, and the stored data became a kind of recording of the earthquake's infrasound.

This recording sat unnoticed in the satellite data until a group of scientists led by Rafael F. Garcia published a paper about it.

The first sound in the universe

If it were possible to go back in time, to roughly the first 760,000 years after the Big Bang, it would be possible to find out whether there was sound in space. At that time the Universe was so dense that sound waves could travel freely.

Around the same time, the first photons began to travel through space as light. Before that cooling occurred, the Universe was filled with charged particles, protons and electrons, which absorbed or scattered photons, the particles that make up light. Once everything finally cooled enough to condense into atoms, the light was released.

Today that first light reaches Earth as a faint glow in the microwave range, visible only to very sensitive radio telescopes. Physicists call it the cosmic microwave background radiation: the oldest light in the universe. And it answers the question of whether there is sound in space, because the cosmic microwave background contains a recording of the oldest music in the universe.

Light to the rescue

How does light help us learn whether there is sound in space? Sound waves travel through air (or interstellar gas) as pressure fluctuations. When a gas is compressed, it heats up; on a cosmic scale this phenomenon can be so intense that stars form. When the gas expands, it cools. Sound waves traveling through the early universe caused subtle pressure fluctuations in the gaseous medium, which in turn left subtle temperature fluctuations imprinted on the cosmic microwave background.

Using these temperature variations, University of Washington physicist John Cramer was able to reconstruct those eerie sounds from space: the music of the expanding universe. He scaled the frequency up by a factor of 10²⁶ so that human ears could hear it.

So no one will actually hear a scream in space, but sound waves do move through clouds of interstellar gas and through the rarefied outer layers of the Earth's atmosphere.

Sounds are studied by the branch of linguistics called phonetics. The study of sounds is included in every school curriculum in the Russian language. Familiarity with sounds and their basic characteristics occurs in the lower grades; a more detailed study, with more complex examples and nuances, takes place in middle and high school. This page gives only the basic knowledge of the sounds of the Russian language in compressed form. If you need to study the structure of the speech apparatus, the tonality of sounds, articulation, acoustic components and other aspects beyond the scope of the modern school curriculum, refer to specialized manuals and textbooks on phonetics.

What is sound?

Sound, like the word and the sentence, is a basic unit of language. However, a sound does not express any meaning; it only reflects how a word sounds. Thanks to this, we distinguish words from each other. Words can differ in the number of sounds (port - sport, crow - funnel), the set of sounds (lemon - estuary, cat - mouse), the sequence of sounds (nose - sleep, bush - knock), and even differ completely (boat - speedboat, forest - park).

What sounds are there?

In Russian, sounds are divided into vowels and consonants. The Russian language has 33 letters and 42 sounds: 6 vowels, 36 consonants, and 2 letters (ь, ъ) that do not denote a sound. The discrepancy between the number of letters and sounds (not counting ь and ъ) arises because the 10 vowel letters correspond to 6 sounds, and the 21 consonant letters correspond to 36 sounds (counting all the variants of consonant sounds: voiceless/voiced, soft/hard). In writing, a sound is indicated in square brackets.
There are no sounds [e], [yo], [yu], [ya], [ь], [ъ], [zh'], [sh'], [ts'], or hard [y], [ch], [shch].

Scheme 1. Letters and sounds of the Russian language.

How are sounds pronounced?

We pronounce sounds while exhaling (only the interjection "a-a-a", expressing fear, is pronounced while inhaling). The division of sounds into vowels and consonants is related to how a person pronounces them. Vowel sounds are pronounced with the voice: exhaled air passes through the tense vocal cords and exits freely through the mouth. Consonant sounds consist of noise, or of voice and noise combined, because the exhaled air meets an obstruction in its path in the form of a closure or the teeth. Vowel sounds are pronounced loudly; consonant sounds are muffled. A person can sing vowel sounds with the voice (with exhaled air), raising or lowering the pitch. Consonant sounds cannot be sung; they are pronounced equally muffled. The hard and soft signs do not denote sounds and cannot be pronounced as independent sounds. When a word is pronounced, they affect the consonant before them, making it soft or hard.

Transcription of the word

The transcription of a word is a record of the sounds in the word, that is, a record of how the word is actually pronounced. Sounds are enclosed in square brackets. Compare: a is a letter, [a] is a sound. The softness of a consonant is indicated by an apostrophe: p is a letter, [p] is a hard sound, [p'] is a soft sound. Voiced and voiceless consonants are not specially marked in writing. The transcription of the whole word is written in square brackets. Examples: door → [dv'er'], thorn → [kal'uch'ka]. Sometimes stress is indicated in the transcription: an apostrophe before the stressed vowel.

There is no one-to-one correspondence between letters and sounds. In Russian there are many cases where vowel sounds change depending on the position of stress in the word, and where consonants are replaced or dropped in certain combinations. When compiling the transcription of a word, the rules of phonetics are taken into account.

Color scheme

In phonetic analysis, color schemes are sometimes drawn for words: the letters are painted different colors depending on the sound they denote. The colors reflect the phonetic characteristics of the sounds and help visualize how a word is pronounced and what sounds it consists of.

All vowels (stressed and unstressed) are marked with a red background. Iotated vowels are marked green-red: green denotes the soft consonant sound [y'], red denotes the vowel that follows it. Consonants denoting hard sounds are colored blue; consonants denoting soft sounds are colored green. The soft and hard signs are painted gray or not painted at all.

Designations:
red - vowel; green-red - iotated vowel; blue - hard consonant; green - soft consonant; blue-green - a consonant that can be either soft or hard.

Note. Blue-green is not used in phonetic analysis diagrams, since a consonant sound cannot be both soft and hard at the same time. Blue-green in the table above is used only to show that a sound can be either soft or hard.

A sound wave is a series of regions of high and low pressure that are perceived by our hearing organs. These waves can travel through solid, liquid and gaseous media, which means they pass easily through the human body. In theory, if the pressure of a sound wave is high enough, it could kill a person.

Every sound wave has its own specific frequency. The human ear can hear sound waves with frequencies from 20 to 20,000 Hz. The sound intensity level is expressed in dB (decibels). For example, the sound of a jackhammer has an intensity level of 120 dB: a person standing next to it will get a far from pleasant sensation from the terrible roar in the ears. But if we sit in front of a speaker playing at a frequency of 19 Hz with the sound intensity set to 120 dB, we will hear nothing, yet the sound waves and vibrations will still affect us. After a while, you would begin to experience various visions and see phantoms. The thing is, 19 Hz is the resonant frequency of the human eyeball.

This is interesting: scientists learned that 19 Hz is the resonant frequency of the eyeball under rather curious circumstances. American astronauts, when ascending into orbit, complained of periodic visions. Detailed studies of the phenomenon showed that the operating frequency of the first-stage rocket engines coincides with the resonant frequency of the human eyeball. At sufficient sound intensity, strange visions arise.

Sound with a frequency below 20 Hz is called infrasound. Infrasound can be extremely dangerous for living beings, since organs in human and animal bodies have resonant frequencies in the infrasound range. The superposition of certain infrasound frequencies at sufficient intensity can disrupt the functioning of the heart, vision, nervous system or brain. For example, exposing rats to 8 Hz infrasound at 120 dB causes brain damage [wiki]. If the intensity rises to 180 dB while the frequency stays at 8 Hz, a person will fare badly: breathing slows and becomes intermittent. Prolonged exposure to such sound waves would cause death.

This is interesting: the record for the loudest car sound system belongs to two engineers from Brazil, Richard Clarke and David Navone, who managed to install a subwoofer with a theoretical sound level of 180 dB in a car. Needless to say, such a system should not be used at full power.

During testing, the subwoofer, driven by electric motors and a crankshaft, reached a sound intensity of 168 dB and broke down. After this incident, it was decided not to repair the system.

There was a time when the question of whether you needed a sound card did not arise at all. If you wanted sound from your computer that was a little better than the grunting of the case speaker, you bought a sound card. If you didn't need it, you didn't buy one. However, the cards were quite expensive, especially while they were made for the prehistoric ISA port.

With the transition to PCI, it became possible to shift part of the computation to the central processor and to use RAM for storing music samples (in ancient times this was needed not only by professional musicians but also by ordinary people, because the most popular music format on computers 20 years ago was MIDI). So entry-level sound cards soon became much cheaper, and then built-in sound appeared on top-end motherboards. It was bad, of course, but it was free. And this dealt a severe blow to sound card manufacturers.

Today, absolutely all motherboards have built-in sound, and on expensive ones it is even positioned as high quality. Straight-up hi-fi. In reality, unfortunately, this is far from the case. Last year I built a new computer with one of the most expensive and objectively best motherboards. It, of course, promised high-quality sound on discrete chips, and even with gold-plated connectors. It was written up so well that I decided not to install a sound card and make do with the built-in sound. And I did make do. For about a week. Then I took the case apart, installed the card, and stopped bothering with that nonsense.

Why is the built-in sound not very good?

Firstly, there is the issue of price. A decent sound card costs 5-6 thousand rubles. It's not a matter of manufacturers' greed; the components simply aren't cheap, and the requirements for build quality are high. A serious motherboard costs 15-20 thousand rubles. Is the manufacturer ready to add at least three thousand more on top? Will the user get scared off before having a chance to evaluate the sound quality? Better not to take risks. And they don't.

Secondly, for really high-quality sound, without extraneous noise, interference and distortion, the components must be located at a certain distance from each other. If you look at a sound card, you will notice how unusually much free space there is on it. On a motherboard, though, space is at a premium and everything has to be packed very tightly. There is simply nowhere to do it really well.

Twenty years ago, consumer sound cards cost more than a computer and had memory slots (!) for storing music samples. In the photo is the dream of all computer geeks in the mid-nineties, the Sound Blaster AWE 32. The 32 is not the bit depth but the maximum number of simultaneously playing MIDI voices.

That is why integrated sound is always a compromise. I have seen boards with supposedly built-in sound that in fact hovered above the board as a separate daughterboard connected to the "mother" only by a connector. And yes, it sounded good. But can such sound be called integrated? I'm not sure.

A reader who has not tried a discrete sound solution may ask: what exactly does "good sound in a computer" mean?

1) It's simply louder. Even a budget-level sound card has a built-in amplifier that can "pump" even large speakers or high-impedance headphones. Many people are surprised that their speakers stop wheezing and choking at maximum volume. This, too, is a side effect of a proper amplifier.

2) The frequencies complement each other instead of blending into mush. A proper digital-to-analog converter (DAC) "draws" the bass, mids and highs well, letting you tune them very precisely in software to your own taste. When listening to music, you will suddenly hear each instrument separately. Films will please you with a sense of presence. Overall, the impression is as if the speakers had been covered with a thick blanket and then it was removed.

3) The difference is especially noticeable in games. You'll be surprised that the sound of wind and dripping water doesn't drown out the quiet footsteps of your rivals around the corner. That in headphones, not necessarily expensive ones, you can tell who is moving, from where, and at what distance. This directly affects performance: no one will be able to sneak up or drive up to you unnoticed.

What kind of sound cards are there?

Once this type of component became of interest only to connoisseurs of good sound, of whom, unfortunately, there are very few, very few manufacturers remained. There are only two: Asus and Creative. The latter is the mastodon of the market, having created it and set all its standards. Asus entered relatively late, but it still hasn't left.

New models are released extremely rarely, and old ones stay on sale for a long time, 5-6 years. The fact is that, in terms of sound, there is nothing left to improve without a radical increase in price, and few people are willing to pay for audiophile perversions in a computer. I would say nobody is. The quality bar is already set too high.

The first difference is the interface. Some cards are only for desktop computers and are installed in the motherboard via the PCI-Express interface. Others connect via USB and can be used with both desktops and laptops. Laptops, by the way, have disgusting sound in 90% of cases, and an upgrade certainly wouldn't hurt them.

The second difference is the price. If we're talking about internal cards, then for 2-2.5 thousand rubles you get models that sound almost the same as built-in audio. They are usually bought when the connector on the motherboard has died (alas, a common occurrence). An unpleasant feature of cheap cards is their low resistance to interference: if you place one close to the video card, background noise will be very annoying.

The golden mean for internal cards is 5-6 thousand rubles. A card in this range already has everything to please a normal person: interference protection, high-quality components and flexible software.

For 8-10 thousand you get the latest models, capable of reproducing 32-bit sound at sampling rates up to 384 kHz. This is the top of the top. If you know where to get files and games in this quality, be sure to buy one :)

Even more expensive sound cards differ little in hardware from the options already mentioned, but they gain additional equipment: external modules for connecting devices, companion boards with outputs for professional sound recording, and so on. It all depends on the user's actual needs. Personally, I have never needed the extras, although in the store they seemed necessary.

For USB cards the price spread is about the same: around 2 thousand for an alternative to built-in sound, 5-7 thousand for solid mid-range models, 8-10 for the high end, and beyond that everything is the same but with a richer set of accessories.

Personally, I stop hearing the difference at the golden mean. Simply because cooler solutions also require high-end speakers and headphones, and, to be honest, I don't see much point in playing World of Tanks with thousand-dollar headphones. But probably every problem has its own solutions.

Several good options

Here are several sound cards and adapters that I have tried and liked.

PCI-Express interface

Creative Sound Blaster Z. It has been on sale for 6 years now, its price has stayed about the same everywhere, and I am still very happy with it. The CS4398 DAC used in this product is old, but audiophiles compare its sound to CD players in the $500 range. Average price: 5500 rubles.

Asus Strix Soar. If everything in the Creative product is shamelessly geared toward games, Asus has also taken care of music lovers. The ESS SABRE9006A DAC is comparable in sound to the CS4398, but Asus offers more fine-tuning parameters for those who like to listen to Pink Floyd on their computer in HD quality. The price is comparable, about 5500 rubles.

USB interface

Asus Xonar U3: a small box that, when plugged into a laptop port, takes its sound quality to a new level. Despite the compact dimensions, there was even room for a digital output. And the software is surprisingly flexible. An inexpensive way to find out for yourself why you need a sound card at all. Price: 2000 rubles.

Creative Sound BlasterX G5. A device the size of a pack of cigarettes (smoking is evil) with characteristics almost indistinguishable from the internal Sound Blaster Z, but there is no need to open the case: just plug it into a USB port. You immediately get seven-channel sound of impeccable quality, all sorts of goodies for music and games, plus a built-in USB port in case you run short of them. The extra space made it possible to add a dedicated headphone amplifier, and once you hear it in action, it is hard to give up. The main functions of the software are duplicated by hardware buttons. The price: 10 thousand rubles.

Play and listen to music with pleasure! There aren't that many of these pleasures, after all.