Audio amplifiers
An audio amplifier is a device used to amplify audio signals in the frequency range from 16 Hz to 20 kHz. There are two types:
1. Voltage amplifier
2. Power amplifier
Voltage amplifiers are used as pre-amplifiers, buffer (or intermediate) amplifiers and driver amplifiers. Their main function is to amplify the audio signal voltage in stages, so that the driver finally delivers an output voltage sufficient to drive the low input resistance of the power amplifier and so produce power amplification.
The final amplification stage always involves a power amplifier, which feeds audio power to loudspeakers for conversion of the electrical signal into sound waves.
Characteristics of audio amplifiers:
1. Gain: The ratio of output signal to input signal is called the gain of an amplifier. It is expressed in decibels (dB):
Voltage gain: Av = 20 log10(V2/V1) dB
Power gain: Ap = 10 log10(P2/P1) dB
where V2 and V1 are the output and input voltages respectively, and P2 and P1 are the output and input powers respectively.
The typical gain of a voltage amplifier is about 60 dB; the typical gain of a power amplifier is about 20 dB. The higher the level of the input signal, the lower the gain.
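These decibel definitions are easy to check numerically. A minimal sketch in plain Python, using illustrative values only:

```python
import math

def voltage_gain_db(v_out, v_in):
    """Voltage gain Av = 20 log10(V2/V1), in decibels."""
    return 20 * math.log10(v_out / v_in)

def power_gain_db(p_out, p_in):
    """Power gain Ap = 10 log10(P2/P1), in decibels."""
    return 10 * math.log10(p_out / p_in)

# A 1000x voltage amplification gives the "typical" 60 dB figure:
print(voltage_gain_db(1000.0, 1.0))   # 60.0
# A 100x power amplification gives the "typical" 20 dB figure:
print(power_gain_db(100.0, 1.0))      # 20.0
```

Note the factor of 20 for voltage versus 10 for power: power is proportional to the square of voltage, and squaring doubles the logarithm.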
2. Bandwidth: An audio amplifier should pass the whole audible frequency range, from 16 Hz to 20 kHz; this range is known as the bandwidth of the audio amplifier.
3. Distortion: An amplifier can suffer from the following types of distortion:
i) Frequency distortion: when all the audio frequencies are not amplified equally well, the result is frequency distortion.
ii) Phase distortion: when the relative phase relationships of the input signal are not maintained in the output signal, the result is phase distortion.
iii) Amplitude distortion: caused by passage of the signal through the non-linear portion of the transistor's characteristic curve.
Amplifier quality is characterized by a list of specifications that include:
· Gain, the ratio between the magnitude of output and input signals
· Bandwidth, the width of the useful frequency range
· Efficiency, the ratio between the power of the output and total power consumption
· Linearity, the degree of proportionality between input and output
· Noise, a measure of undesired noise mixed into the output
· Output dynamic range, the ratio of the largest and the smallest useful output levels
· Slew rate, the maximum rate of change of the output
· Rise time, settling time, ringing and overshoot that characterize the step response
· Stability, the ability to avoid self-oscillation
Video amplifiers
These deal with video signals and have varying bandwidths
depending on whether the video signal is for SDTV, EDTV, HDTV 720p or 1080i/p
etc... The specification of the bandwidth itself depends on what kind of filter
is used—and at which point (-1 dB or -3 dB for example) the
bandwidth is measured.
Not all amplifiers are the same; they are classified according to their circuit configurations and methods of operation. In electronics, small-signal amplifiers are commonly used devices, as they can amplify a relatively small input signal, for example from a sensor such as a photodevice, into a much larger output signal to drive a relay, lamp or loudspeaker.
The type or classification of an amplifier is given in the
following table.
Classification of Amplifiers
Type of Signal | Type of Configuration | Classification | Frequency of Operation
Small Signal | Common Emitter | Class A Amplifier | Direct Current (DC)
Large Signal | Common Base | Class B Amplifier | Audio Frequencies (AF)
 | Common Collector | Class AB Amplifier | Radio Frequencies (RF)
 | | Class C Amplifier | VHF, UHF and SHF Frequencies
The small-signal model accounts for behavior that is linear around an operating point. When the signal is large in amplitude (say, more than 1/5 of VCC, as a rule of thumb), the behavior becomes nonlinear and we have to use the model that accounts for the non-linearity, called the large-signal model.
In small-signal work noise is important and efficiency is not; in large-signal work it is the opposite. Small-signal operation is linear, so the parameters can be calculated from two-port parameters or an equivalent RLC-generator circuit.
In the large-signal case, time-domain simulation is needed to find the effects of the nonlinearities, and a Fourier analysis of the results gives the amplitude and phase of the fundamental and harmonics, along with the input impedance.
Very costly simulators have very complex transistor models and can do frequency-domain calculations, obtaining the harmonic effects at the same time.
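As a small illustration of that Fourier step, the sketch below uses an assumed cubic soft nonlinearity y = x - 0.3x³ (a stand-in for a real transistor curve, not taken from the text) and extracts the fundamental and harmonic amplitudes from one period of the time-domain waveform:

```python
import math

def harmonic_amplitudes(signal, n_harmonics):
    """Amplitudes of the fundamental and harmonics of one period of a
    signal, found by correlating it with sin/cos at each harmonic."""
    N = len(signal)
    amps = []
    for k in range(1, n_harmonics + 1):
        re = sum(signal[n] * math.cos(2 * math.pi * k * n / N) for n in range(N))
        im = sum(signal[n] * math.sin(2 * math.pi * k * n / N) for n in range(N))
        amps.append(2 * math.sqrt(re * re + im * im) / N)
    return amps

# One period of a unit sine wave driven through the nonlinearity:
N = 1024
x = [math.sin(2 * math.pi * n / N) for n in range(N)]
y = [v - 0.3 * v ** 3 for v in x]

fund, second, third = harmonic_amplitudes(y, 3)
# The identity sin^3 t = (3 sin t - sin 3t)/4 predicts amplitudes of
# 0.775 at the fundamental, 0 at the 2nd harmonic, 0.075 at the 3rd.
```

The odd-order distortion products appear exactly as the trigonometric identity predicts, which is the kind of cross-check a time-domain simulation followed by Fourier analysis provides.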
Types of microphones and speakers
A microphone, colloquially mic or mike (/ˈmaɪk/), is an acoustic-to-electric transducer or sensor that converts sound into an electrical signal.
Most microphones today use electromagnetic induction (dynamic microphones), capacitance change (condenser microphones) or piezoelectricity (piezoelectric microphones) to produce an electrical signal from air pressure variations. Microphones typically need to be connected to a preamplifier before the signal can be amplified with an audio power amplifier and a speaker or recorded.
It is often taught that "sound is vibrations in the air." We are able to enjoy music because we sense these vibrations in the air as sound.
Microphones convert these vibrations into electrical signals. Here are the two main types of microphone:
(1) Dynamic microphones
(2) Condenser microphones
Dynamic Microphones
Dynamic microphones are versatile and
ideal for general-purpose use. They use a simple design with few moving parts.
They are relatively sturdy and resilient to rough handling. They are also
better suited to handling high volume levels, such as from certain musical
instruments or amplifiers. They have no internal amplifier and do not require
batteries or external power.
How Dynamic Microphones Work
As you may recall from your school science, when a magnet is moved near a coil of wire, an electrical current is generated in the wire. Using this electromagnetic principle, the dynamic microphone uses a wire coil and magnet to create the audio signal.
The diaphragm is attached to the coil.
When the diaphragm vibrates in response to incoming sound waves, the coil moves
backwards and forwards past the magnet. This creates a current in the coil
which is channeled from the microphone along wires. A common configuration is
shown below.
Cross-Section of a Typical Dynamic Microphone
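The induced voltage follows Faraday's law of induction. In the sketch below, the turn count, flux change and timing are made-up illustrative numbers, not data for any real microphone:

```python
def induced_emf(turns, d_flux_wb, dt_s):
    """Faraday's law: emf = -N * (dPhi/dt). The moving coil of a
    dynamic microphone generates its output voltage this way."""
    return -turns * d_flux_wb / dt_s

# Hypothetical: a 50-turn coil whose linked flux changes by 2 uWb
# over 1 ms as the diaphragm moves the coil past the magnet.
print(induced_emf(50, 2e-6, 1e-3))   # about -0.1 V
```

Faster diaphragm motion (louder or higher-frequency sound) means a faster flux change and hence a larger induced voltage.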
Characteristics
-Construction is
simple and comparatively sturdy.
-No power supply is required.
-Relatively inexpensive.
Condenser microphone
The condenser microphone is also called a capacitor microphone or electrostatic microphone; capacitors were historically called condensers.
Condenser Microphones
Condenser means capacitor,
an electronic component which stores energy in the form of an electrostatic
field. The term condenser is actually obsolete but has stuck as
the name for this type of microphone, which uses a capacitor to convert
acoustical energy into electrical energy.
Condenser microphones require power from a battery or an external source. The resulting audio signal is stronger than that from a dynamic. Condensers also tend to be more sensitive and responsive than dynamics, making them well suited to capturing subtle nuances in a sound. They are not ideal for high-volume work, as their sensitivity makes them prone to distortion.
How Condenser Microphones Work
A capacitor has two plates with a
voltage between them. In the condenser mic, one of these plates is made of very
light material and acts as the diaphragm. The diaphragm vibrates when struck by
sound waves, changing the distance between the two plates and therefore
changing the capacitance. Specifically, when the plates are closer together,
capacitance increases and a charge current occurs. When the plates are further
apart, capacitance decreases and a discharge current occurs.
A voltage is required across the
capacitor for this to work. This voltage is supplied either by a battery in the
mic or by external phantom
power.
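That capacitance change can be put in numbers with the parallel-plate formula C = ε0·A/d. The diaphragm area and gap below are made-up illustrative values, not specifications of any real microphone:

```python
EPS0 = 8.854e-12  # permittivity of free space, in farads per metre

def plate_capacitance(area_m2, gap_m):
    """Parallel-plate capacitance C = eps0 * A / d (air dielectric)."""
    return EPS0 * area_m2 / gap_m

area = 1e-4                              # assumed 1 cm^2 diaphragm
c_rest = plate_capacitance(area, 20e-6)  # diaphragm at rest, 20 um gap
c_near = plate_capacitance(area, 18e-6)  # pushed 2 um closer by sound
print(c_near > c_rest)  # True: closer plates -> higher capacitance
```

With a fixed charge or polarizing voltage across the plates, this capacitance swing becomes the charge/discharge current described above.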
Characteristics
-Good sensitivity at all frequencies.
-Power supply is required.
-Vulnerable to structural vibration and humidity
Quality of AM and FM reception
AM (or Amplitude Modulation) and FM (or Frequency Modulation) are ways of broadcasting radio signals. Both
transmit the information in the form of electromagnetic waves. AM works by
modulating (varying) the amplitude of
the signal or carrier transmitted according to the information being sent,
while the frequency remains constant. This differs from FM technology in which
information (sound) is encoded by varying the frequency of the wave and the
amplitude is kept constant.
Comparison chart
 | AM | FM
Stands for | Amplitude Modulation | Frequency Modulation
Origin | The AM method of audio transmission was first successfully carried out in the mid-1870s. | FM radio was developed in the United States in the 1930s, mainly by Edwin Armstrong.
Modulating differences | In AM, a radio wave known as the "carrier" or "carrier wave" is modulated in amplitude by the signal that is to be transmitted. The frequency and phase remain the same. | In FM, the carrier wave is modulated in frequency by the signal that is to be transmitted. The amplitude and phase remain the same.
Pros and cons | AM has poorer sound quality than FM, but is cheaper and can be transmitted over long distances. It has a lower bandwidth, so more stations can fit in any frequency range. | FM is less prone to interference than AM, but FM signals are affected by physical barriers. FM has better sound quality due to its higher bandwidth.
Frequency range | AM radio ranges from 535 to 1705 kHz (or, for data, up to 1200 bits per second). | FM radio ranges over a higher spectrum, from 88 to 108 MHz (or 1200 to 2400 bits per second).
Bandwidth requirements | Twice the highest modulating frequency. In AM radio broadcasting, the modulating signal has a bandwidth of 15 kHz, so the bandwidth of an amplitude-modulated signal is 30 kHz. | Twice the sum of the modulating signal frequency and the frequency deviation. If the frequency deviation is 75 kHz and the modulating signal frequency is 15 kHz, the bandwidth required is 180 kHz.
Zero crossings in modulated signal | Equidistant | Not equidistant
Complexity | Transmitter and receiver are simple, but synchronization is needed in the case of an SSBSC AM carrier. | Transmitter and receiver are more complex, as the variation of the modulating signal has to be converted to, and detected from, a corresponding variation in frequency (i.e. voltage-to-frequency and frequency-to-voltage conversion).
Noise | AM is more susceptible to noise, because noise affects amplitude, which is where information is "stored" in an AM signal. | FM is less susceptible to noise, because information in an FM signal is transmitted by varying the frequency, not the amplitude.
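The two bandwidth rules in the chart can be checked directly. A minimal sketch using the figures quoted above (15 kHz audio, 75 kHz FM deviation):

```python
def am_bandwidth(f_mod_max):
    """AM: twice the highest modulating frequency."""
    return 2 * f_mod_max

def fm_bandwidth(deviation, f_mod_max):
    """FM (Carson's rule): twice the sum of the peak frequency
    deviation and the highest modulating frequency."""
    return 2 * (deviation + f_mod_max)

print(am_bandwidth(15e3))        # 30000.0 -> the 30 kHz AM channel
print(fm_bandwidth(75e3, 15e3))  # 180000.0 -> the 180 kHz FM channel
```

The deviation term is why an FM broadcast channel needs six times the spectrum of an AM one for the same 15 kHz of audio.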
1. Evolution:
Formulated in the 1870s, AM is a relatively older modulation process compared with FM, which was developed in the 1930s by Edwin Armstrong.
2. Technology:
AM stands for amplitude modulation: the amplitude of the carrier is modulated according to the message signal, while the other aspects of the carrier wave, such as frequency and phase, remain constant. FM means frequency modulation: only the frequency of the carrier wave changes, while amplitude, phase, etc. remain constant.
3. Frequency range of working:
Amplitude modulation works between 540-1650 kHz, while FM works at 88-108 MHz.
4. Power Consumption:
FM-based signal transmission consumes more power than an equivalent AM-based signal transmission system.
5. AM vs. FM: Signal Quality:
Signal quality is far superior in FM than in AM, as amplitude-based signals are more susceptible to noise than those that use frequency. Moreover, noise signals are difficult to filter out in AM reception, whereas FM receivers easily filter out noise using the capture effect and the pre-emphasis/de-emphasis process. In the capture effect, the receiver locks onto the stronger of two signals, so that the signals received are more closely synced with those at the transmitting end. In the pre-emphasis/de-emphasis process, the higher-frequency components of the signal are boosted at the sending end (pre-emphasis) and correspondingly attenuated at the receiving end (de-emphasis). These two processes reduce the chances of the signal getting mixed with other signals and make FM more immune to noise than AM.
6. Fading:
Fading refers to power variation during signal transmission. Due to fading, the power of the received signal can vary significantly and reception may not be of good quality. Fading is more prominent in amplitude modulation than in frequency modulation. That is why AM radio channels often suffer from varying sound intensity, while FM radio channels have constant good reception.
7. Wavelength Difference between AM and FM:
AM waves work in the kHz range, while FM waves work in the MHz range. As a result, AM waves have a longer wavelength than FM waves. A longer wavelength increases the range of AM signals compared with FM, which has a limited coverage area.
8. Bandwidth consumption:
AM signals consume 30 kHz of bandwidth each, while in FM 80 kHz is the bandwidth consumed by each signal. Hence, over a limited range of bandwidth, more signals can be sent in AM than in FM.
9. Circuit Complexity:
As noted above, amplitude modulation is an older process and has very simple circuitry. Frequency modulation, on the other hand, requires complicated circuitry for transmission and reception of the signal. The signals sent in FM are more modulated and emphasized at the transmitter, and they are thoroughly checked and corrected at the receiving end. This is why the circuitry for FM signals is very complicated.
10. Commercial Aspects:
Setting up an AM-based radio communication system is very economical, as there is no complicated circuitry and the processes are easy to understand. FM, on the other hand, is a fairly complicated communication system and requires high capital investment and expertise at work. Commercially, FM-based radio systems are more popular due to their high signal quality (especially for audio) and greater immunity to noise.
Fig: A few radios come with both AM and FM functionality
Stereo and mono sound reproduction systems
Stereo (or stereophonic sound) is the reproduction of sound using two or more independent audio channels in a way that creates the impression of sound heard from various directions, as in natural hearing. Mono (monaural or monophonic sound reproduction) has audio in a single channel, often centered in the “sound field”.
Stereo sound has almost completely replaced mono because of the improved audio quality that stereo provides.
Comparison chart
 | Mono | Stereo
Cost | Less expensive for recording and reproduction | More expensive for recording and reproduction
Recording | Easy to record; requires only basic equipment | Requires technical knowledge and skill to record, apart from equipment; the relative positions of objects and events must be known
Key feature | Audio signals are routed through a single channel | Audio signals are routed through 2 or more channels to simulate depth/direction perception, as in the real world
Stands for | Monaural or monophonic sound | Stereophonic sound
Usage | Public address systems, radio talk shows, hearing aids, telephone and mobile communication, some AM radio stations | Movies, television, music players, FM radio stations
Sound recording and reproduction is an electrical or mechanical inscription and re-creation of sound waves, such as spoken voice, singing, instrumental music, or sound effects. The two main classes of sound recording technology are analog recording and digital recording.
Acoustic analog recording is achieved by a small microphone diaphragm that can detect changes in atmospheric pressure (acoustic sound waves) and record them as a graphic representation of the sound waves on a medium such as a phonograph record.
Digital recording and reproduction converts the analog sound signal picked up by the microphone to a digital form by the process of digitization.
Digital audio and compression techniques
Digital audio is technology that can be used to record, store, generate, manipulate, and reproduce sound using audio signals encoded in digital form.
A microphone converts sound to an analog electrical signal, then an analog-to-digital converter (ADC)—typically using pulse-code modulation—converts the analog signal into a digital signal. A digital-to-analog converter performs the reverse process, converting a digital signal back into an analog signal, which analog circuits amplify and send to a loudspeaker.
Digital audio systems may include compression, storage, processing and transmission components. Conversion to a digital format allows convenient manipulation, storage, transmission and retrieval of an audio signal.
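The ADC step described above can be sketched as uniform pulse-code quantization. A minimal illustration (16-bit, one short test tone; real converters add anti-alias filtering and dithering that are omitted here):

```python
import math

def pcm_encode(samples, bits):
    """Quantize samples in the range [-1.0, 1.0] to signed integers --
    the core quantization step of pulse-code modulation."""
    levels = 2 ** (bits - 1)
    return [max(-levels, min(levels - 1, int(round(s * levels)))) for s in samples]

def pcm_decode(codes, bits):
    """Map integer codes back toward the [-1.0, 1.0] range (the DAC side)."""
    levels = 2 ** (bits - 1)
    return [c / levels for c in codes]

# Eight samples of one sine cycle, digitized at 16 bits and restored:
tone = [math.sin(2 * math.pi * n / 8) for n in range(8)]
restored = pcm_decode(pcm_encode(tone, 16), 16)
# Round-trip error never exceeds one quantization step (1/32768):
print(max(abs(a - b) for a, b in zip(tone, restored)) <= 1 / 32768)  # True
```

More bits means smaller steps and lower quantization error, which is why 16-bit audio (as on CD and DAT) sounds clean.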
Dynamic range compression (DRC) or simply compression is an electronic effect unit that reduces the volume of loud sounds or amplifies quiet sounds by narrowing or "compressing" an audio signal's dynamic range. Compression is commonly used in sound recording and reproduction, broadcasting, live sound at music concerts and in some instrument amplifiers (usually bass amps).
Audio compression reduces loud sounds which are above a certain threshold while quiet sounds remain unaffected. In the 2000s, compressors are also available in audio software for recording. The dedicated electronic hardware unit or audio software used to apply compression is called a compressor.
Types
Two main methods of dynamic range reduction:
Fig: Downward compression
Fig: Upward compression
Downward compression reduces loud sounds over a certain
threshold while quiet sounds remain unaffected.
Upward compression increases the loudness of sounds below
a threshold while leaving louder passages unchanged. Both downward and upward
compression reduce the dynamic range of
an audio signal.
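Downward compression can be sketched as a static gain curve. The sketch below is a simplification: a real compressor measures a smoothed signal envelope and works in decibels, with attack and release times, whereas here an assumed threshold and ratio are applied directly to linear sample magnitudes:

```python
def compress_downward(samples, threshold, ratio):
    """Static downward compressor: the part of each sample's magnitude
    above `threshold` is divided by `ratio`; quieter samples pass
    through unchanged."""
    out = []
    for s in samples:
        mag = abs(s)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if s >= 0 else -mag)
    return out

# Threshold 0.5, ratio 4:1 -- the quiet samples 0.1 and -0.2 pass
# unchanged, while 0.8 and -1.0 are pulled back toward the threshold.
print(compress_downward([0.1, -0.2, 0.8, -1.0], 0.5, 4.0))
```

The loud peaks are reduced while the quiet material is untouched, which narrows the dynamic range exactly as the downward-compression description says.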
An audio tape recorder, tape deck or tape machine is an audio storage device that records and plays back sounds, including articulated voices, usually using magnetic tape, either wound on a reel or in a cassette, for storage. In its present-day form, it records a fluctuating signal by moving the tape across a tape head that polarizes the magnetic domains in the tape in proportion to the audio signal. Tape-recording devices include the reel-to-reel tape deck and the cassette deck.
DAT (Digital Audio Tape)
Digital Audio Tape (DAT or R-DAT) is a signal recording and playback medium developed by Sony and introduced in 1987. In appearance it is similar to a Compact Cassette, using 3.81 mm / 0.15" (commonly referred to as 4 mm) magnetic tape enclosed in a protective shell, but is roughly half the size at 73 mm × 54 mm × 10.5 mm. As the name suggests, the recording is digital rather than analog. DAT can record at higher, equal or lower sampling rates than a CD (48, 44.1 or 32 kHz respectively) with 16-bit quantization. If a digital source is copied, the DAT will produce an exact clone, unlike other digital media such as Digital Compact Cassette or non-Hi-MD MiniDisc, both of which use a lossy data-reduction system.
Like most formats of videocassette, a DAT cassette may only be recorded and played in one direction, unlike an analog compact audio cassette.
Uses of DAT
· Professional recording industry
· Amateur and home use
· Computer data storage medium
DAT (Digital Audio Tape) is a standard medium and technology
for the digital recording of audio on tape at a professional
level of quality. A DAT drive is a digital tape recorder with rotating heads
similar to those found in a video deck. Most DAT drives can record at sample rates of 44.1 kHz, the CD audio standard, and 48 kHz. DAT has become the
standard archiving technology in professional and semi-professional recording
environments for master recordings. Digital inputs and outputs on professional
DAT decks allow the user to transfer recordings from the DAT tape to an audio
workstation for precise editing. The compact size and low cost of the DAT
medium makes it an excellent way to compile the recordings that are going to be
used to create a CD master.
Optical disc player
An optical disc (OD) is a flat, usually circular disc which encodes binary data (bits) in the form of pits (binary value 0, due to lack of reflection when read) and lands (binary value 1, due to reflection when read) on a special material (often aluminium) on one of its flat surfaces.
Optical discs are usually between 7.6 and 30 cm (3 to 12 in) in diameter, with 12 cm (4.75 in) being the most common size. A typical disc is about 1.2 mm (0.05 in) thick, while the track pitch (distance from the center of one track to the center of the next) ranges from 1.6 µm (for CDs) to 320 nm (for Blu-ray discs).
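Those track-pitch figures explain much of the capacity difference between formats. A rough back-of-envelope sketch (the 25-58 mm program-area radii are approximate assumptions, and real capacity also depends on pit length and channel coding):

```python
def spiral_turns(inner_radius_m, outer_radius_m, track_pitch_m):
    """Approximate number of turns of the data spiral that fit in the
    recordable band of an optical disc."""
    return round((outer_radius_m - inner_radius_m) / track_pitch_m)

# Same assumed 25-58 mm band, with the two pitches from the text:
print(spiral_turns(25e-3, 58e-3, 1.6e-6))   # ~20625 turns (CD pitch)
print(spiral_turns(25e-3, 58e-3, 320e-9))   # ~103125 turns (Blu-ray pitch)
```

A five-times-finer pitch packs five times as many spiral turns into the same band, one of the reasons a Blu-ray disc holds so much more than a CD of the same diameter.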
Optical discs are most commonly used for storing music (e.g. for use in a CD player), video (e.g. for use in a Blu-ray player), or data and programs for personal computers (PC).
Video cassette tape recorder/ player
The videocassette recorder, VCR, or video recorder is an electromechanical device that records analog audio and analog video from broadcast television or other source on a removable, magnetic tape videocassette, and can play back the recording. Use of a VCR to record a television program to play back at a more convenient time is commonly referred to as time shifting. VCRs can also play back prerecorded tapes. In the 1980s and 1990s, until the VCR was superseded by the DVD player and PVR, prerecorded videotapes were widely available for purchase and rental, and blank tapes were sold to make recordings.
Most domestic VCRs are equipped with a television broadcast receiver (tuner) for TV reception, and a programmable clock (timer) for unattended recording of a television channel from a start time to an end time specified by the user. These features began as simple mechanical counter-based single-event timers, but were later replaced by more flexible multiple-event digital clock timers. In later models the multiple timer events could be programmed through a menu interface displayed on the playback TV screen ("on-screen display" or OSD).
A video tape recorder (VTR) is a tape recorder designed to record video material on magnetic tape. The first practical video tape recorder, using transverse tape head scanning, was developed by Ampex Corporation in 1956. The early VTRs were reel to reel devices which recorded on individual reels of 2 inch (5.08 cm) wide magnetic tape. They were used in television studios, serving as a replacement for motion picture film stock and making recording for television applications cheaper and quicker. Beginning in 1963, videotape machines made instant replay during televised sporting events possible. Improved formats, in which the tape was contained inside a videocassette, were introduced around 1969; the machines which play them are called videocassette recorders. Agreement by Japanese manufacturers on a common standard recording format, so cassettes recorded on one manufacturer's machine would play on another's, made a consumer market possible, and the first consumer videocassette recorder was introduced by Sony in 1971.
Video format
A video file format is a type of file format for storing digital video data on a computer system. Video is almost always stored in compressed form to reduce the file size.
A video file normally consists of a container format (e.g. Matroska) containing video data in a video coding format (e.g. VP9) alongside audio data in an audio coding format (e.g. Opus). The container format can also contain synchronization information, subtitles, and metadata such as title etc... A standardized (or in some cases de facto standard) video file type such as .webm is a profile specified by a restriction on which container format and which video and audio compression formats are allowed.
The coded video and audio inside a video file container (i.e. not headers, footers and metadata) is called the essence. A program (or hardware) which can decode video or audio is called a codec; playing or encoding a video file will sometimes require the user to install a codec library corresponding to the type of video and audio coding used in the file.
Good design normally dictates that a file extension enables the user to derive which program will open the file from the file extension. That is the case with some video file formats, such as WebM (.webm), Windows Media Video (.wmv), and Ogg Video (.ogv), each of which can only contain a few well-defined subtypes of video and audio coding formats, making it relatively easy to know which codec will play the file. In contrast to that, some very general-purpose container types like AVI (.avi) and Quicktime (.mov) can contain video and audio in almost any format, and have file extensions named after the container type, making it very hard for the end user to use the file extension to derive which codec or program to use to play the files.
Name | File extension(s) | Container | Video coding format(s) | Audio coding format(s) | Notes
WebM | .webm | Matroska | VP8, VP9 | Vorbis, Opus | Free and libre format created for HTML5 video
Matroska | .mkv | Matroska | Any | Any |
Flash Video (FLV) | .flv | FLV | VP6, Sorenson Spark, Screen video, Screen video 2, H.264 | |
F4V | .flv | MPEG-4 Part 12 | H.264 | MP3, AAC | Replacement for FLV
Vob | .vob | VOB | H.262/MPEG-2 Part 2 or MPEG-1 Part 2 | PCM, DTS, MPEG-1 Audio Layer II (MP2), or Dolby Digital (AC-3) |
Ogg Video | .ogv, .ogg | Ogg | Theora, Dirac | Vorbis, FLAC | Open source
Dirac | .drc | ? | Dirac | ? | Open source
GIF | .gif | N/A | N/A | none | Simple animation, inefficient compression, no sound, widely supported
Video camcorders
A video camera is a camera used for electronic motion picture acquisition (as opposed to a movie camera, which records images on film), initially developed for the television industry but now common in other applications as well.
Video cameras are used primarily in two modes. The first, characteristic of much early broadcasting, is live television, where the camera feeds real time images directly to a screen for immediate observation. A few cameras still serve live television production, but most live connections are for security, military/tactical, and industrial operations where surreptitious or remote viewing is required. In the second mode the images are recorded to a storage device for archiving or further processing; for many years, videotape was the primary format used for this purpose, but was gradually supplanted by optical disc, hard disk, and then flash memory. Recorded video is used in television production, and more often surveillance and monitoring tasks in which unattended recording of a situation is required for later analysis.
Modern video cameras have numerous designs and uses.
· Professional video cameras, such as those used in television production, may be television studio-based or mobile in the case of an electronic field production (EFP). Such cameras generally offer extremely fine-grained manual control for the camera operator, often to the exclusion of automated operation. They usually use three sensors to separately record red, green and blue.
· Camcorders combine a camera and a VCR or other recording device in one unit; these are mobile, and were widely used for television production, home movies, electronic news gathering (ENG) (including citizen journalism), and similar applications. Since the transition to digital video cameras, most cameras have in-built recording media and as such are also camcorders.
· Closed-circuit television (CCTV) generally uses pan tilt zoom cameras (PTZ), for security, surveillance, and/or monitoring purposes. Such cameras are designed to be small, easily hidden, and able to operate unattended; those used in industrial or scientific settings are often meant for use in environments that are normally inaccessible or uncomfortable for humans, and are therefore hardened for such hostile environments (e.g. radiation, high heat, or toxic chemical exposure).
· Webcams are video cameras which stream a live video feed to a computer.
· Camera phones - nowadays most video cameras are incorporated into mobile phones.
· Special camera systems are used for scientific research, e.g. on board a satellite or a spaceprobe, in artificial intelligence and robotics research, and in medical use. Such cameras are often tuned for non-visible radiation for infrared (for night vision and heat sensing) or X-ray (for medical and video astronomy use).
Video digitization techniques
Digitization is the process of converting information into a digital format. In this format, information is organized into discrete units of data (called bits) that can be separately addressed (usually in multiple-bit groups called bytes). This is the binary data that computers and many devices with computing capacity (such as digital cameras and digital hearing aids) can process.
Digitizing or digitization is the representation of an object, image, sound, document or signal (usually an analog signal) by generating a series of numbers that describe a discrete set of its points or samples. The result is called a digital representation of the object or, for a signal, its digital form. In modern practice, the digitized data is in the form of binary numbers, which facilitates computer processing and other operations, but strictly speaking, digitizing simply means the conversion of analog source material into a numerical format; the decimal or any other number system could be used instead.
Digitization is of crucial importance to data processing, storage and transmission, because it "allows information of all kinds in all formats to be carried with the same efficiency and also intermingled".[2] Unlike analog data, which typically suffers some loss of quality each time it is copied or transmitted, digital data can, in theory, be propagated indefinitely with absolutely no degradation.
Digitization occurs in two parts:
Discretization
Reading an analog signal and, at regular time intervals (the sampling frequency), recording the value of the signal at that instant. Each such reading is called a sample and may be considered to have infinite precision at this stage.
Quantization
Samples are rounded to a fixed set of numbers (such as integers), a process known as quantization.
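The two steps above can be sketched in Python. The 1 kHz tone, 8 kHz sampling rate, sample count, and 8-bit range are arbitrary illustrative choices.

```python
import math

fs = 8000          # sampling frequency in Hz (discretization interval = 1/fs)
f = 1000           # frequency of the "analog" sine being digitized
n_samples = 8

# Discretization: read the signal at regular instants t = n / fs.
samples = [math.sin(2 * math.pi * f * n / fs) for n in range(n_samples)]

# Quantization: round each (conceptually infinite-precision) sample to one of
# a fixed set of integer levels, here the signed 8-bit range -128..127.
quantized = [round(s * 127) for s in samples]

print(quantized)  # [0, 90, 127, 90, 0, -90, -127, -90]
```

Each output value is now a small integer that a computer can store and address, at the cost of the rounding (quantization) error introduced in the second step.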
Video optical discs (DVD):
DVD is an optical disc technology with a 4.7 gigabyte storage capacity on a single-sided, single-layered disc, which is enough for a 133-minute movie. DVDs can be single- or double-sided, and can have two layers on each side; a double-sided, two-layered DVD will hold up to 17 gigabytes of video, audio, or other information. This compares to 650 megabytes (0.65 gigabyte) of storage for a CD-ROM disc.
DVD uses the MPEG-2 file and compression standard. MPEG-2 images have four times the resolution of MPEG-1 images and can be delivered at 60 interlaced fields per second, where two fields constitute one image frame. (MPEG-1 can deliver 30 noninterlaced frames per second.) Audio quality on DVD is comparable to that of current audio compact discs.
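A quick sanity check of the figures above, assuming the nominal 4.7 GB capacity is entirely filled by the 133-minute programme:

```python
# Average combined (video + audio) bit rate implied by the quoted numbers.
capacity_bytes = 4.7e9          # single-sided, single-layer DVD capacity
runtime_s = 133 * 60            # 133 minutes in seconds
avg_bitrate = capacity_bytes * 8 / runtime_s
print(round(avg_bitrate / 1e6, 1))  # 4.7 (Mbit/s)
```

Roughly 4.7 Mbit/s is indeed in the range of typical MPEG-2 encodes used on DVD-Video, so the 133-minute figure is self-consistent.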
Formats:
· DVD-Video is the format designed for full-length movies that work with your television set.
· DVD-ROM is the type of drive and disc for use on computers. The DVD drive will usually also play regular CD-ROM discs and DVD-Video discs.
· DVD-RAM is the writeable version.
· DVD-Audio is a CD-replacement format.
· There are a number of recordable DVD formats, including DVD-R for General, DVD-R for Authoring, DVD-RAM, DVD-RW, DVD+RW, and DVD+R.
DVD was originally said to stand for digital video disc, and later for digital versatile disc. The current official stance of the DVD Forum is that the format should just be referred to as DVD.
Hi-Fi audio amplifiers
??
Audio recording systems
Analog (or analogue) recording (from Greek, ana, "according to", and logos, "relationship") is a technique used for the recording of analog signals which, among many possibilities, allows analog audio and analog video to be stored for later playback.
Analog recording methods store signals as a continuous signal in or on the media. The signal may be stored as a physical texture on a phonograph record, or a fluctuation in the field of a magnetic recording. This is different from digital recording, in which signals are represented as discrete numbers.
A Digital Recording/Processing System
A block diagram of a digital recording/processing system is shown in figure 2. The processes at each of the numbered blocks 1 to 7 are described below:
Figure 2: Block diagram of
digital recording processing system. Both sources of noise [N1 (t), N2
(t)] are needed in order to avoid digital distortions of the signal V (t) in
the form of coherent noise ND (t). Properly chosen N1 (t)
and N2 (t) add only a little noise to the output, but remove
coherence of ND (t) (digital noise) with the signal V (t).
1. Following Nakajima (1983), Mieszkowski (1989) and Wannamaker, Lipshitz and Vanderkooy (1989), analog dither must be added to the input signal in order to:
a) Linearize the A/D converter.
b) Make possible improvement of S/N by an averaging process, according to the formula:
(S/N) after averaging = √n × (S/N) before averaging    (5)
where n = number of averaged signals.
c) Eliminate harmonic distortions (created when digital noise ND(t) is coherent with the signal V(t)).
d) Eliminate intermodulation distortion (created as well when digital noise ND(t) is coherent with the signal V(t)).
e) Eliminate "digital deafness": when the signal V(t) falls below Δ/2, where Δ is the step size in the A/D converter, the signal will not be recorded at all unless there is a noise N1(t) on the input.
f) Eliminate noise modulation by the signal
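The "digital deafness" effect in point (e) can be demonstrated numerically. This sketch uses simple rectangular (uniform) dither rather than the triangular-PDF dither the literature cited here recommends, and the step size and signal level are arbitrary illustrative values.

```python
import random

random.seed(0)
delta = 1.0               # quantizer step size (the Δ mentioned above)
signal = 0.3 * delta      # a constant signal smaller than half a step

def quantize(x, step):
    """Mid-tread uniform quantizer: snap to the nearest multiple of step."""
    return step * round(x / step)

# Without dither the small signal quantizes to zero on every sample:
# it is simply not recorded ("digital deafness").
assert quantize(signal, delta) == 0.0

# With dither, individual samples are 0 or 1 step, but the long-run
# average converges on the true value 0.3.
n = 100_000
dithered = [quantize(signal + random.uniform(-delta / 2, delta / 2), delta)
            for _ in range(n)]
print(round(sum(dithered) / n, 2))  # close to 0.3
```

The dither randomizes the quantization error, so information about sub-step signal levels survives in the statistics of the output rather than being deterministically discarded.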
2. The input low-pass filter (antialiasing filter) should eliminate all frequencies above fs/2, where fs = sampling frequency, in order to avoid aliasing distortion (folding of frequencies into the passband: fnew = fs − foriginal, where foriginal > fs/2).
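The folding formula can be checked numerically; the sampling rate and tone frequency below are arbitrary illustrative values.

```python
import math

fs = 8000                   # sampling frequency in Hz
f_original = 6000           # a tone above fs/2 = 4000 Hz
f_alias = fs - f_original   # predicted alias frequency: 2000 Hz

# A 6 kHz cosine and a 2 kHz cosine produce identical sample values at 8 kHz,
# which is exactly why the antialiasing filter must remove the 6 kHz tone first.
for n in range(16):
    t = n / fs
    assert abs(math.cos(2 * math.pi * f_original * t)
               - math.cos(2 * math.pi * f_alias * t)) < 1e-9

print(f_alias)  # 2000
```

Once sampled, the two tones are indistinguishable, so the corruption cannot be undone afterwards; it must be prevented before the A/D converter.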
3. The A/D converter converts the analog signal into a digital number (for example, 10110110 represents a binary-coded 8-bit amplitude). Sampling speeds range from 2 kHz to 10 GHz and amplitude resolution ranges from 4 bits to 20 bits.
4. If DSP is performed on the signal, one must add digital dither N2(t) (box 5) to avoid digital distortions and coherent noise ND(t) at the output of the D/A converter. Digital processing should also be performed using sufficiently precise real numbers to avoid round-off errors.
Storage of digital data can be performed on magnetic tape, optical disk, magnetic disk, or RAM (Random Access Memory). Prior to storage, extra code is generated to allow for error correction. This error correction code allows detection and correction of errors during playback of the audio signal. Redundant information must be added to the original signal in order to combat noise inherent in any storage/communication system. The particular type of code and error correction system depends on the storage medium, the communication channel used and the required immunity from errors (an arbitrarily small probability of error can be obtained; Nakajima, 1983; Shannon, 1949/1975).
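As a toy illustration of the redundancy principle just described, a 3x repetition code can correct any single bit error per group. Real audio media use far stronger codes (for example Reed-Solomon-based schemes on CD), but the idea is the same: add redundancy on the way in, vote on the way out.

```python
def encode(bits):
    """Add redundancy: repeat every data bit three times."""
    return [b for b in bits for _ in range(3)]

def decode(coded):
    """Majority-vote each group of three, correcting any single flipped bit."""
    return [1 if sum(coded[i:i + 3]) >= 2 else 0
            for i in range(0, len(coded), 3)]

data = [1, 0, 1, 1]
coded = encode(data)
coded[4] ^= 1                  # a noise-induced bit flip during "playback"
print(decode(coded))           # [1, 0, 1, 1] - the error is corrected
```

The cost is a 3x storage overhead for very weak protection; practical codes achieve much better error correction per redundant bit, which is why they are used instead.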
5. Prior to D/A conversion, digital dither must be added to the numbers representing the amplitude of the signal if DSP has been performed. Optimal digital dither has a triangular probability density function (PDF) (Wannamaker, et al. 1989).
6. The D/A converter converts digital numbers into an analog signal. Available conversion speeds are 2 kHz to 200 MHz and available amplitude resolution is 4 bits to 20 bits.
7. The output low-pass filter should eliminate all frequencies above fs/2 which are generated during D/A conversion.
Video recording systems
??
Digital video is a representation of moving
visual images in the form of encoded digital data. This is in contrast
to analog video, which represents
moving visual images with analog signals.
Basic TV broadcasting techniques
The various methods of TV transmission
Programming
broadcast is the transmission of television stations’ programming (sometimes
called channels) that is often directed to a specific audience.
There are several types of TV broadcast systems:
§ Analogue Terrestrial TV
§ Systems for sound transmission
§ Digital Satellite TV
§ Cable TV: analogue and digital systems
§ New technologies:
§ Digital terrestrial TV (DTTV)
§ High Definition Television (HDTV)
§ Pay-per-view
§ Video-on-demand
§ Web TV
§ IPTV
ANALOGUE TERRESTRIAL TV
Terrestrial television is a term which refers to modes of television broadcasting which do not involve satellite transmission or underground cables.
Terrestrial television broadcasting
dates back to the very beginnings of television as a medium itself and there
was virtually no other method of television delivery until the 1950s with the
beginnings of cable television, or community antenna television (CATV).
The first non-terrestrial method of
delivering television signals that in no way depended on a signal originating
from a traditional terrestrial source began with the use of communications
satellites during the 1960s and 1970s of the twentieth century.
Analogue TV encodes the image and
sound information and transmits them as an analogue signal in which the message
transmitted by the broadcasting signal is composed of amplitude and/or
frequency variations and modulated into a VHF or UHF carrier.
The analogue television picture is
"drawn" several times on the screen (25 in PAL system) as a whole
each time, as in a motion picture film, regardless of the content of the image.
DIGITAL SATELLITE TV
Satellite television is television
signals delivered by means of communications satellites and received by
satellite dishes and set-top boxes. In many areas of the world it provides a
wide range of channels and services, often to areas that are not serviced by
terrestrial or cable providers.
Satellite television, like other communications relayed by satellite, starts with a transmitting antenna located at an uplink facility, which has very large uplink satellite dishes, as much as 9 to 12 meters (30 to 40 feet) in diameter, which results in more accurate aiming and increased signal strength at the satellite.
The uplink dish is pointed toward a
specific satellite and the uplinked signals are transmitted within a specific
frequency range, so as to be received by one of the transponders tuned to that
frequency range aboard that satellite, which 'retransmits' the signals back to
Earth but at a different frequency band, a process known as “translation”, used
to avoid interference with the uplink signal, typically in the C-band (4–8 GHz)
or Ku-band (12–18 GHz) or both.
CABLE TV
Cable Television or Community Antenna Television (CATV) is a system for distribution of audiovisual content for television, FM radio and other services to consumers through fixed coaxial cables, avoiding the traditional system of radio broadcasting antennas (broadcast television), and has widespread use, mainly through pay-TV services.
Technically, the
cable TV involves the distribution of a number of television channels received
and processed in a central location (known as head-end) to subscribers within a
community through a network of optical fiber and/or coaxial cables and
broadband amplifiers.
The use of
different frequencies allows many channels to be distributed through the same
cable, without separate wires for each, and the tuner of the TV or Radio
selects the desired channel from among all transmitted.
A cable television
system begins at the head end, where the program is received (and sometimes
originated), amplified, and then transmitted over a coaxial cable network.
The architecture of
the network takes the form of a tree, with the "trunk" that carries
the signals in the streets, the "branches" carrying the signals for
buildings and, finally, the "arms" carrying the signals to individual
homes.
The coaxial cable
has a bandwidth capable of carrying a hundred television channels with six
megahertz of bandwidth each, but the signals decay quickly with distance, hence
the need to use amplifiers to "renew" the signals periodically to
boost them.
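The channel count quoted above follows directly from the per-channel bandwidth; the 600 MHz usable-spectrum figure below is an assumption chosen to match the "hundred channels" claim.

```python
# How many 6 MHz analogue TV channels fit in a given span of coaxial spectrum.
channel_bw = 6e6          # bandwidth per analogue channel, in Hz
usable_spectrum = 600e6   # assumed usable spectrum on the cable plant, in Hz

channels = int(usable_spectrum // channel_bw)
print(channels)  # 100
```

This is why frequency-division multiplexing on one cable can replace a separate wire per channel: the tuner simply selects one 6 MHz slice out of the shared spectrum.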
Backbone trunks in a local cable network frequently use optical fiber to minimize noise and eliminate the need for amplifiers, as optical fiber has considerably more capacity than coaxial cable and allows more programs to be carried without signal loss or added noise.
Most TV tuners are able to directly receive the cable channels, which are usually transmitted in the RF (radio frequency) band; however, many programs are encrypted and subject to a separate tariff, and in such cases a converter must be installed between the cable and the receiver.
DIGITAL TERRESTRIAL TV
Digital Terrestrial Television (DTTV
or DTT) is an implementation of digital television technology to provide a
greater number of channels and/or better quality of picture and sound using
aerial broadcasts to a conventional antenna (or aerial) instead of a satellite
dish or cable connection.
The technology used in Europe is DVB-T, which is highly resistant to multipath distortion.
DTTV is transmitted on radio
frequencies through the airwaves that are similar to standard analogue
television, with the primary difference being the use of multiplex transmitters
to allow reception of multiple channels on a single frequency range (such as a
UHF or VHF channel).
The amount of data that can be
transmitted (and therefore the number of channels) is directly affected by the
modulation method of the channel.
The modulation method in DVB-T is COFDM with either 64- or 16-state Quadrature Amplitude Modulation (QAM). In general a 64QAM channel is capable of transmitting a greater bit rate, but is more susceptible to interference. 16QAM and 64QAM can be combined in a single multiplex, providing a controllable degradation for the more important programme streams. This is called hierarchical modulation.
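The bit-rate difference between the two QAM orders follows from the number of bits carried per transmitted symbol, log2(M) for an M-state constellation; a short check:

```python
import math

def bits_per_symbol(m):
    """Bits carried by one symbol of an M-state QAM constellation."""
    return int(math.log2(m))

print(bits_per_symbol(16))  # 4
print(bits_per_symbol(64))  # 6
```

At the same symbol rate, 64QAM therefore moves 6/4 = 1.5x the raw data of 16QAM (ignoring coding overhead), which is the trade against its greater susceptibility to interference.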
New developments in compression have
resulted in the MPEG-4/AVC
standard which will enable two high definition services to be coded into a 24
Mbit/s European terrestrial transmission channel.
DTTV is received via a digital set-top box, or integrated receiving device, that decodes the signal received via a standard aerial antenna. However, due to frequency planning issues, an aerial of a different group (usually a wideband one) may be required if the DTTV multiplexes lie outside the bandwidth of the originally installed aerial.
In Portugal, as detailed in the information published by ANACOM in February 2008, set-top boxes (STBs) or TV receivers must be capable of decoding MPEG-4 H.264 AVC coded transmissions and also be suitable to display HD signals in at least 720p format, as this is the format to be broadcast in the country.
In the case of STBs, ANACOM advises that an HDMI connection should also be available, that it should be version 1.3, and that the box should of course decode the transmitted HDTV format.
HDTV
High-definition television (HDTV) is a television system with a resolution significantly higher than that of the traditional formats (NTSC, SECAM, PAL).
HDTV is transmitted digitally, and therefore its implementation generally coincides with the introduction of digital television (DTV), a technology launched during the 1990s.
Although several patterns of
high-definition television have been proposed or implemented, the current HDTV
standards are defined by ITU-R BT.709 as 1080i (interlaced), 1080p
(progressive) or 720p using the 16:9 screen format.
The term "high definition"
can refer to the specification of the resolution itself or, more generally, the
media capable of such a definition as the video media support or the television
set.
What will be of interest in the near future is high-definition video, through the successors of the DVD, HD DVD and Blu-ray (it is expected that the latter will be adopted as the standard) and, consequently, through projectors, LCD and plasma television sets, as well as rear projectors and video recorders, with 1080p resolution/definition.
High-definition television (HDTV) yields a better-quality image than standard television does, because it has a greater number of lines of resolution.
The visual information is some 2 to
5 times sharper because the gaps between the scan lines are narrower or
invisible to the naked eye.
The larger the size of the
television the HD picture is viewed on, the greater the improvement in picture
quality. On smaller televisions there may be no noticeable improvement in
picture quality.
The lower-case "i" appended to the numbers denotes interlaced; the lower-case "p" denotes progressive. With the interlaced scanning method, the 1,080 lines of resolution are divided into pairs: the first 540 alternate lines are painted on one field, and then the second 540 lines are painted on a second field. The progressive scanning method displays all 1,080 lines simultaneously on every frame, requiring greater bandwidth.
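The bandwidth claim can be made concrete by comparing raw pixel rates at the same refresh rate; the 1920-pixel line width and 50 Hz rate below are assumed for a PAL-style system.

```python
width, lines = 1920, 1080
refresh = 50  # screen updates per second: fields for 1080i, full frames for 1080p

pixels_1080i = width * (lines // 2) * refresh  # each field carries only 540 lines
pixels_1080p = width * lines * refresh         # every frame carries all 1080 lines

print(pixels_1080p // pixels_1080i)  # 2
```

At the same refresh rate, progressive scan transports exactly twice the raw pixels per second of interlaced scan, which is the bandwidth penalty the paragraph above refers to.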
PAY-PER-VIEW
Pay-per-view (often abbreviated PPV)
offers a system by which a television audience can purchase events to view on
TV-monitors via private telecast of that event to their homes.
The broadcaster shows the event at
the same time to everyone ordering it (as opposed to video-on-demand systems,
which allow viewers to see the event at any time) and can be purchased using an
on-screen guide, an automated telephone system, or through a live customer
service representative.
Events often include feature films,
sporting events, adult content movies and "special" events.
VIDEO-ON-DEMAND
Video-on-Demand (VoD) or
Audio-Video-on-Demand (AVoD) systems allow users to select and watch/listen to
video or audio content on demand.
VoD systems either stream content
through a set-top-box, allowing viewing in real time, or download it to a
device such as a computer, digital video recorder, personal video recorder or
portable media player for viewing at any time.
Download and streaming video-on-demand systems provide the user with a large subset of VCR functionality, including pause, fast forward, fast rewind, slow forward, slow rewind, jump to previous/future frame, etc.; these functions are called trick modes.
For disk-based streaming systems
which store and stream programs from hard disk drive, trick modes require
additional processing and storage on the part of the server, because separate
files for fast forward and rewind must be stored.
Memory-based VoD streaming systems
have the advantage of being able to perform trick modes directly from RAM,
which requires no additional storage or CPU cycles on the part of the
processor.
It is possible to put video servers
on LANs, in which case they can provide very rapid response to users. Streaming
video servers can also serve a wider community via a WAN, in which case the
responsiveness may be reduced. Download VoD services are practical to homes
equipped with cable modems or DSL connections.
WEB TV
Web TV, TVIP, or TV on the Internet is the transmission of a programming grid through the Internet. It can carry known "normal" TV channels or channels specifically designed for the Internet.
Web TV, in a simplified form, is nothing more than the provision of video and audio over the Internet; the means of watching the transmission varies from the monitor of a computer, through an iPod or a mobile phone, to the TV set if one has the decoder.
IPTV (TV over Internet Protocol)
The recent introduction of Television over Internet Protocol technology, commonly known as IPTV, revolutionized the distribution networks for TV signals, eliminating many of the problems associated with a distribution network based on coaxial cables, in particular those related to signal degradation, interference, signal levels, and the transmission capacity of the channel's band.
Moreover, thanks to IP (Internet Protocol), it becomes possible to combine several interfaces in a multi-service unit and to broadcast and distribute diverse and varied services on the same network which previously required differentiated infrastructure, including TV signals, telephone service and broadband Internet access, creating a platform we know today as Triple Play.
In essence, the triple-play concept is not entirely new because, in terms of services, solutions combining a mix of TV services, telephony and Internet access have been available for some years.
Studies show that the churn rate (voluntary abandonment of service) among triple-play subscribers is substantially lower than that observed when voice, data and TV are sold in a non-convergent way.
Another factor is the progress in
access technologies and platforms for packet telephony and video. A variant of
ADSL (asymmetrical digital subscriber line), known as ADSL2+, represents a
change in the effective performance of Internet connection on the original
format, not to mention the more recent developments, such as VDSL
(very-high-bit-rate DSL).
Access over optical fiber in its most popular form, known as PON (passive optical network), represents an even bolder approach, and has resulted in significant investments in that technology, seeking the union of high-speed Internet access, voice and multi-channel high-definition TV.
Progress in video distribution systems is on the way too. In recent years, a number of innovations and developments in the hardware and software industry serving television have enabled TV over IP (also known as IPTV).
The main driver is integrated platforms consisting of set-top boxes, servers and video content protection systems (DRM, digital rights management), which, together with appropriate tools, middleware and billing, allow the provision of a variety of TV services in several formats, such as streaming, video on demand and time-shifted TV, based on a combination of underlying IP networks and DSL or optical access systems.
In this context, the sophistication
of algorithms for compression of video signals has a relevant role. Techniques
such as MPEG-4 AVC (advanced video coding), for
example, enable the transmission of signals in high definition TV over IP
networks.
The search for a strategy to offer multiple-play services (dual, triple, quadruple, etc.) is an irreversible phenomenon in the communications industry; at the same time it imposes enormous challenges, particularly in terms of selection of technology platforms, control and regulation, while opening a huge horizon of possibilities on both the supply and demand sides.
Mono sound reception:
Mono or monophonic describes a system where all the
audio signals are mixed together and routed through a single audio channel.
Mono systems can have multiple loudspeakers, and even multiple widely separated
loudspeakers. The key is that the signal contains no level and arrival
time/phase information that would replicate or simulate directional cues.
Common types of mono systems include single channel center clusters, mono split
cluster systems, and distributed loudspeaker systems with and without
architectural delays. Mono systems can still be full-bandwidth and
full-fidelity and are able to reinforce both voice and music effectively. The
big advantage to mono is that everyone hears the very same signal, and, in
properly designed systems, all listeners would hear the system at essentially
the same sound level. This makes well-designed mono systems very well suited
for speech reinforcement as they can provide excellent speech intelligibility.
Stereo Sound reception
True stereophonic sound systems have two independent
audio signal channels, and the signals that are reproduced have a specific
level and phase relationship to each other so that when played back through a
suitable reproduction system, there will be an apparent image of the original
sound source. Stereo would be a requirement if there is a need to replicate the
aural perspective and localization of instruments on a stage or platform, a
very common requirement in performing arts centers.
This also means that a mono signal that is panned somewhere between the two channels does not have the requisite phase information to be a true stereophonic signal. Although there can be a level difference between the two channels that simulates a position difference, this is a simulation only. That's a discussion that could warrant a couple of web pages all by itself.
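The level-only panning just described can be sketched as a constant-power pan law. The function name and pan positions below are illustrative, not a standard API; the point is that only levels differ between the channels, with no phase or arrival-time information.

```python
import math

def pan(sample, position):
    """Split a mono sample between L and R by level only.
    position: 0.0 = hard left, 0.5 = centre, 1.0 = hard right."""
    angle = position * math.pi / 2
    return sample * math.cos(angle), sample * math.sin(angle)

left, right = pan(1.0, 0.5)                    # a centred mono signal
assert abs(left - right) < 1e-9                # equal level in both channels
assert abs(left**2 + right**2 - 1.0) < 1e-9    # constant total power

print(round(left, 3), round(right, 3))  # 0.707 0.707
```

Because the two channels carry identical (or merely rescaled) copies of the same waveform, this panned signal collapses to mono with no loss of information, unlike a true stereo recording whose inter-channel phase relationships encode the original sound field.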
An additional requirement of the stereo playback system
is that the entire listening area must have equal
coverage of both the left and right channels, at essentially equal levels. This
is why your home stereo system has a "sweet spot" between the two
loudspeakers, where the level differences and arrival time differences are
small enough that the stereo image and localization are both maintained. This
sweet spot is limited to a fairly small area between the two loudspeakers and
when a listener is outside that area, the image collapses and only one or the
other channel is heard. Living with this sweet spot in your living room may be
OK, since you can put your couch there, but in a larger venue, like a church
sanctuary or theatre auditorium, that sweet spot might only include 1/3 the
audience, leaving 2/3 of the audience wondering why they only hear half the
program.
HI-FI Audio:
a. Streaming
b. Headphones
c. Wires & cables
d. Speakers
e. Stereo amplifiers & receivers
f. Turntables & cartridges