Arquitectura
 

Concert hall dimensions

 



bubble_fish
New User

May 2, 2009, 4:46 AM

Message #1 of 2 (12436 views)
Concert hall dimensions

Hi!
I am working on a school project and I need to know which dimensions to take into account when designing a concert hall with seating for about 50 people.

Many thanks! Sly


robertsanchez
Regular User


May 24, 2009, 10:06 PM

Message #2 of 2 (12207 views)
Re: [bubble_fish] Concert hall dimensions

http://www.concerthalls.unomaha.edu/


Progress in concert hall design – developing an awareness of spatial sound and learning how to control it
R. Essert (Arup Acoustics)
EBU Technical Review, Winter 1997
For many decades, the acoustical
design of rooms for music performances
was driven almost exclusively
by considerations of the time history
of sound. However, the propagation
of sound is a function of both time
and space: our hearing and
perception of sound are sensitive to
spatial as well as temporal attributes.
This article traces the development of
spatial acoustics in the design of halls
during the late 20th century, in
relation to the advancement of
acoustical knowledge and related
technologies. An outline is given of
current directions in modelling and
measurement systems that may lead
to a greater understanding of which
spatial sound fields are preferred for
different events, and how the
geometrical form can influence them.
1. Introduction
Over the last quarter of a century, progress has
accelerated in our understanding of the effects that
the spatial distribution of sound has on our perception.
We can consider the sound propagation in a
room as the change in spatial attributes of the
sound field over time, or as the change in time/
frequency response over space for a given input
and output. The room response is a function of
space and time, and the data can be "sliced" in
many different ways for our understanding of the
process. This article focuses mostly on the impulse
response of a room, i.e. the response at the
output due to a given input. This is an essential
part of the analysis in modelling, measurement
and control.
The room response for a single-point sound
source and a single-point receiver (a single ear) is
called the 3D impulse response (3DIR). This
includes the effects of source and receiver directivities.
It can encompass several channels of data
which, together, provide complete information on
the amplitude as a function of time and direction.
The binaural room impulse response (BRIR) is
sufficient to describe the inputs to our two ears,
which is enough to render perceptual models, but
it does not explicitly relate to direction. The 3DIR
includes directional information and, therefore,
relationships with the room geometry (the architecture
of the space).
Two questions need to be explored. What is a
necessary and sufficient number of:
a) degrees of freedom?
b) data channels?
Only upon answering these can we discuss data
compression. If we rely totally on perception-based
models and the measurement of binaural
room impulse responses, we may know how a
room sounds, but may not be able to link the sound
specifically to the architecture. We need to know
both the perceptual and the spatial models in order
to relate the sound field to the architecture and to
perception.
Much of the recent work that has focused on the
spatial aspects of sound fields is relevant to, indeed
driven by, the analysis and design of auditoria. In
this article we will review some aspects of spatial
hearing, spatial measurements, modelling and
auralisation. We will also look at what spatial
sound fields are preferred for different events and
how geometrical form can influence them.
Over the years, auditorium designs have responded
to the growing understanding of spatial sound,
but not enough. By understanding the links between
architecture and acoustics, we are making
greater progress in translating acoustical goals
into room shapes. Through deep involvement
with music and theatre performance, we understand
which goals are appropriate for which
rooms and which uses.
2. Perception
Our understanding of sound perception has come
a long way since W.C. Sabine measured reverberation
times by ear in Sanders Theatre [1]. An
important leap came in the 1970s with the suggestions
by Marshall [2] and Barron [3] and by the
Göttingen [4] and Berlin groups that room width
is critical to our sense of acoustical space. Their
deduction that lateral energy has something to do
with it has been accepted ever since. Just how
much, and through what means, still remain the
subject of debate. But we now do understand that
all aspects of a room’s shape – i.e. the locations,
shapes and angles of its boundary surfaces – are
audible.
Work by Jens Blauert [5] and others on spatial
hearing has illuminated a great deal about the
mechanisms and reasons behind our perception of
space and timbre. Our ears, head and torso filter
the sound before it gets to the auditory nerve,
creating binaural dissimilarity that varies with
frequency. The same mechanism creates a dependence
of the perceived timbre (and loudness) of a
sound on its direction of arrival. An ensemble of
reflected waves arriving from different directions
is processed by the brain as an ensemble according
to a complex set of rules.
Our sense of envelopment is due to amplitude and
phase differences between the sounds reaching
our two ears. Ando [6] and others have focused on
the Interaural Correlation Coefficient (IACC) as
a better indicator of the perception of envelopment.
Griesinger [7] has related Room Impression
to fluctuations in the amplitude and timing differences
between the two ears.
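As a concrete illustration of the IACC mentioned above, the sketch below computes it from a pair of measured binaural room impulse responses. The ±1 ms lag range and the normalisation follow the commonly used definition rather than anything specified in this article, and the names (iacc, fs, max_lag_ms) are illustrative.

```python
import numpy as np

def iacc(p_left, p_right, fs, max_lag_ms=1.0):
    """Interaural cross-correlation coefficient of a binaural impulse
    response pair: the peak of the normalised cross-correlation within
    +/- max_lag_ms (commonly +/- 1 ms)."""
    max_lag = int(fs * max_lag_ms / 1000.0)
    norm = np.sqrt(np.sum(p_left ** 2) * np.sum(p_right ** 2))
    full = np.correlate(p_left, p_right, mode="full")  # lags -(N-1) .. +(N-1)
    centre = len(p_left) - 1                           # index of zero lag
    window = full[centre - max_lag : centre + max_lag + 1]
    return np.max(np.abs(window)) / norm
```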
Our perception of acoustical space seems to be
multidimensional. Among the distinct perceptions
of music in concert halls are what we currently
call (a) source broadening and (b) envelopment.
These have been linked to (a) lower
frequency, earlier sound and (b) higher frequency,
later sound, respectively. Can we control them independently
with architecture? Or with electronics?
Do we want to? It has been clear since the
work of Keet [8] that some spatial effects are dependent
on the overall sound level or, in effect, the
absolute measure of lateral energy. There is little
argument that these aspects of spaciousness (envelopment,
etc.) are important in music acoustics,
but there is little agreement on how much of them
is enough. Is there an optimum? If so, it would
seem to be dependent on the type of performance
or repertoire of music which, in the end, is informed
by the listeners’ expectations and historical
perspective.
In researching how we hear and what we like, we
keep in mind the practical analysis and modelling
applications. What is sufficient accuracy? If we
try to model all the physics and hearing/psychological
processes to the highest possible accuracy,
we may be overdetermining the result if we cannot
hear all the dimensions or all of the accuracy.
Abbreviations: 3DIR – 3D impulse response; BRIR – binaural room impulse response; HRTF – head-related transfer function; IACC – interaural correlation coefficient; RT – reverberation time.
3. Spatial sound
measurement
Sabine used his ears and a stopwatch to measure
sound decays. Since then, the vast majority of
acoustics measurements on auditoria have been,
and still are, carried out with a single omnidirectional
microphone. In the 60s and 70s we began to
record and analyse impulse responses, looking at
various energy ratios. These, for the most part,
still involved single-channel data. Directional information
was sometimes investigated with directional
microphones and parabolic reflectors.
With the recognition that lateral sound and binaural
dissimilarity are important in concert hall
acoustics, Barron and others began to measure the
lateral fraction, and Ando pushed forward with
Interaural Cross Correlation and the binaural
room impulse response. The lateral energy fraction
at a point has been measured in halls for some
years now, although Bradley [9], Beranek [10]
and others have produced evidence that it is not
well correlated with perception.
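For reference, the early lateral energy fraction measured by Barron and others is commonly defined as the figure-of-8 energy arriving 5–80 ms after the direct sound divided by the omnidirectional energy in the first 80 ms. The sketch below assumes that standard definition and that both impulse responses are time-aligned to the direct sound; the names are illustrative.

```python
import numpy as np

def lateral_fraction(p_omni, p_fig8, fs):
    """Early lateral energy fraction: figure-of-8 energy arriving
    5-80 ms after the direct sound, over omni energy in 0-80 ms.
    Both impulse responses are assumed aligned so that index 0 is
    the direct sound arrival."""
    t5, t80 = int(0.005 * fs), int(0.080 * fs)
    lateral = np.sum(p_fig8[t5:t80] ** 2)
    total = np.sum(p_omni[:t80] ** 2)
    return lateral / total
```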
For concert hall and theatre designers, information
on the spatial aspects of the sound field is
helpful in relating the sound to the architecture in
order to help us understand which surfaces ultimately
reflect the sound to the listener. Moreover,
we need to study the directional attributes of
sound fields in halls and correlate them with perceptual
attributes.
Our current goals for 3D measurements include
the following:
– development of diagnostic tools to help understand
the directional behaviour of the sound
field in time;
– ability to assess the full 3D spatial impulse
response, including pressure as a function of
time and direction;
– ability to slice data across time and space;
– development of new approaches to visualisation,
including animation;
– auralisation with measured impulse responses,
independent of the specific ears used in recording;
– development of a library of 3D measurements
made in many different facilities.
Large arrays for high directional resolution have
been developed by several teams, including Elko
(Bell Labs), Broadhurst, and Hanyu & Kimura.
These can achieve high spatial resolution, but they
are large and unwieldy, and depend on the precise
alignment of many elements. However, we can
recognize that four channels of information are
sufficient in principle to describe fully the 3D spatial
sound field, although at the expense of lower
spatial resolution. The four channels are three orthogonal
directional vectors and a total pressure.
Figure 1: Four-channel B-format pressure output from a Soundfield microphone. Measurement of a balloon-burst impulse in Boston Symphony Hall (unoccupied), a tall, narrow, reverberant hall. Digitized to 16-bit resolution at 22050 Hz. The traces shown are the omni (W), X, Y and Z components respectively, with a vertical scale ranging from –1 to +1.

Figure 2: Smoothed directional fractions of the 4-channel response. The top trace is the smoothed omni (W) pressure response and the other traces are ratios of the dipole patterns to the omni channel, i.e. Fx, Fy and Fz respectively. This approach maintains the polarity of the pressure signal, so that Fx is front-back, Fy is left-right and Fz is up-down. The vertical axis for each trace ranges from –1 to +1.
Several groups have developed room acoustics
measurement systems based on four omnidirectional
pressure microphones in a tetrahedral array:
– Yamasaki and Itow
– Sekiguchi, Kimura & Hanyu
– Korenaga
Another group – Abdou and Guy – has developed
a 3D intensity method.
The Author has developed an approach, based on
the Soundfield microphone, which was pioneered
by Michael Gerzon [11] and Duane Cooper in the
70s. This device is a very closely-spaced tetrahedral
array of four cardioid microphones, time
aligned to measure the sound at a point at the
centre of the array. The four signals are combined
to give a pressure gradient (dipole, or "figure-of-8" directivity) response in the X, Y, Z directions
and the omni-directional pressure response,
W. This set of outputs has been called B-format.
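The capsule-to-B-format combination is, in outline, a simple sum-and-difference matrix. The sketch below shows the commonly published A-format to B-format conversion for a tetrahedral capsule layout (front-left-up, front-right-down, back-left-down, back-right-up); the 0.5 scaling and the omission of the capsule-spacing equalisation applied in real Soundfield processing are simplifying assumptions.

```python
import numpy as np

def a_to_b_format(flu, frd, bld, bru):
    """Combine four tetrahedral cardioid capsule signals into B-format:
    omni pressure W and three orthogonal pressure-gradient (figure-of-8)
    components X, Y, Z.  Real Soundfield processing also equalises for
    the finite capsule spacing, which is omitted here."""
    w = 0.5 * (flu + frd + bld + bru)
    x = 0.5 * ((flu + frd) - (bld + bru))   # front-back
    y = 0.5 * ((flu + bld) - (frd + bru))   # left-right
    z = 0.5 * ((flu + bru) - (frd + bld))   # up-down
    return w, x, y, z
```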
The Author had been using an omni/dipole microphone
pair for lateral energy fraction measurements
and, along the way, has developed an approach
to show the instantaneous lateral fraction.
With the dipole microphone directed in the X, Y,
Z directions, one could gather fractional energy in
all six directions (±X, ±Y, ±Z). Since the Soundfield microphone
B-format outputs are equivalent to the
cosine directivity pressure gradient microphone,
we can use the same formula to derive the fractions
for each direction X, Y, Z with the common
W pressure response (Fig. 1).
The process is a windowed product of the pressure
and gradient channels, normalized by a sliding-window
average of the squared pressure channel, over a
short time window τ that, ideally, would be
chosen according to perceptual relevance.
The directional fraction for the X (front-back)
direction is given by:

F_X(t) = \frac{\int_{t-\tau/2}^{t+\tau/2} X(\xi)\, W(\xi)\, d\xi}{\int_{t-\tau/2}^{t+\tau/2} W(\xi)\, W(\xi)\, d\xi}

where W(ξ) is the pressure response and
X(ξ) is the pressure gradient (cosine directivity) response.
Smoothed directional fractions for the same data
are shown in Fig. 2.
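A minimal sketch of that windowed ratio follows, using discrete samples and a rectangular sliding window; the window length and the regularisation floor are free choices, not values given in the article.

```python
import numpy as np

def directional_fraction(gradient, pressure, fs, window_ms=10.0):
    """Sliding-window directional fraction, e.g. Fx(t): the windowed
    sum of a pressure-gradient channel times the omni pressure,
    normalised by the windowed sum of the squared pressure.  A
    rectangular window is used here; its length is a design choice."""
    n = max(1, int(fs * window_ms / 1000.0))
    kernel = np.ones(n)
    num = np.convolve(gradient * pressure, kernel, mode="same")
    den = np.convolve(pressure * pressure, kernel, mode="same")
    floor = 1e-12 * np.max(den) + 1e-30   # avoid division by zero in silent stretches
    return num / np.maximum(den, floor)
```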
Figure 3: 3DIR "cloud" plot derived from the directional fractions. Each of the 512 points in a fraction plot is mapped to a direction in 3D Cartesian coordinates, with the distance from the axis corresponding to time (total = 1300 ms). Interpretation of the plot is enhanced by animation.
The X, Y, Z fractions constitute the amplitude
shading in each direction according to the cosine
weighting of the microphone. We can therefore
consider the three directional fractions to be "directional cosines" in order to establish the general
resultant direction of sound at a particular instant
with respect to the receiver (listener). Results can
be displayed on a 3D axis in a "cloud" of energy
that evolves over time (Fig. 3), or in a "Mercator"
projection (Fig. 4).
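A sketch of the direction-cosine step described above: each time sample of the smoothed (Fx, Fy, Fz) fractions is normalised to a unit vector, from which an azimuth and elevation can be read off for a cloud or Mercator-style plot. The axis convention (X front-back, Y left-right, Z up-down) follows the Figure 2 caption; everything else is an assumption.

```python
import numpy as np

def resultant_directions(fx, fy, fz):
    """Treat smoothed directional fractions as direction cosines and
    return, per time sample, a unit vector plus azimuth/elevation in
    degrees (azimuth in the horizontal plane from the front axis,
    elevation measured from that plane)."""
    v = np.stack([fx, fy, fz], axis=-1)
    norm = np.linalg.norm(v, axis=-1, keepdims=True)
    unit = v / np.where(norm > 0, norm, 1.0)
    azimuth = np.degrees(np.arctan2(unit[..., 1], unit[..., 0]))   # left-right vs front-back
    elevation = np.degrees(np.arcsin(np.clip(unit[..., 2], -1.0, 1.0)))
    return unit, azimuth, elevation
```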
4. Modelling and auralisation
(sound rendering)
Acoustical modelling and auralisation techniques
have helped us to understand spatial aspects of
sound by visualising explicitly the 3D sound paths
in the model and by listening to modelled phenomena.
They have challenged us to think explicitly
about some of the more detailed aspects of the
behaviour of sound in halls, and of sound sources,
as well as of perception.
Mainstream acoustical modelling in architectural
projects is based fundamentally on geometrical
acoustics, with ad hoc extensions for non-trivial
phenomena such as edge diffraction, diffusion,
and oblique angle absorption coefficients (Fig. 5).
Auralisation is the rendering of sound of modelled
phenomena, a tremendously complex undertaking.
Anechoic source sound is filtered through the
synthetic (or measured) impulse response of the
space and the appropriately modelled effects of
the ears, head and shoulders (called the head-related
transfer function, or HRTF). The resulting
sound is played through headphones or a surround
sound playback system such as Ambisonics.
Auralisation allows us to listen to the phenomena
we have heretofore judged on the basis of comparative
numbers or graphics.
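At its core, the filtering step described above is a convolution of anechoic source material with the (binaural) room impulse response. The sketch below assumes the HRTF is already folded into the measured or synthesised BRIR and uses SciPy's FFT-based convolution; it is an illustration, not the rendering chain of any particular auralisation system.

```python
import numpy as np
from scipy.signal import fftconvolve

def auralise(anechoic, brir_left, brir_right):
    """Render a binaural auralisation by convolving an anechoic mono
    recording with equal-length left/right binaural room impulse
    responses (HRTF effects assumed already contained in the BRIR).
    Output is peak-normalised to avoid clipping."""
    left = fftconvolve(anechoic, brir_left)
    right = fftconvolve(anechoic, brir_right)
    out = np.stack([left, right], axis=-1)
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out
```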
The directivity of instruments and voices has an
influence on our perception of the timbre of the
instruments and their loudness (and therefore their
balance with others in the ensemble). Sound radiation
from instruments is complex, the quality of
sound being different in various directions. How
many directions are sufficient for modelling?
Loudspeaker manufacturers are now publishing
the directivities of their horns in 10-degree increments, but
that may be overkill. Auralisation will help us to
find what is the appropriate amount of detail.
5. Design evolution
The design of concert rooms has more-or-less
followed the state of knowledge in concert hall
acoustics. Certain basic shapes evolved for each
performance/event type. This was not driven by a
knowledge of any deterministic connection between
room shape and sound, but rather (i) because
of the way people gather naturally (for proximity
and good sight-lines to the sound source),
(ii) for structural capacity reasons and (iii) for
social reasons.
Figure 4: We can visualise the amplitude distribution with respect to direction and time on the inside of an expanding spherical shell by correlating the directional fractions with the directivity matrix of the Soundfield mic on a frame-by-frame basis. This plot is a Mercator projection of one time frame of such a correlation, using a Matlab routine developed by Pierre-Antoine Grison. The circles show the scatter of different sub-values within the smoothing time window. (The data is from the 3D impulse response of a small theatre.)
At first, distance, clear sight-lines and shielding
from noise were the principal factors considered.
The plan shape and steep rake of open Greek and
Roman amphitheatres brought people as close as
possible to the performers, and the steep rake
allowed the 1st-order floor reflection to benefit the
listeners, and also served as a barrier from the
street activity. Without a roof, this was essentially
a 2-dimensional space with two parameters – distance
and seating slope. Yet the Greeks and
Romans also built roofed theatres that behaved as
contained 3-dimensional spaces. Whether the ancients,
including Vitruvius, knew the reasons for
the acoustical differences between roofed and
open spaces is an open question. Did the higher
level of loudness and reverberance under a roof
influence the composition or performance of the
odes and oratories of the day?
Through much of the Middle Ages and the Renaissance,
churches and cathedrals became more
and more reverberant as buildings were designed
taller. The sound absorption in these buildings is
concentrated at the floor plane: the upper reaches
are mostly vertical, hard, and rectilinear (except
in the case of domes). The upper hard volume
sustains the reverberation more strongly and for longer
than the lower portion near the audience. This
is a so-called loosely-coupled volume system. In
tall churches, one is familiar with the sense that
the reverberance moves upwards with time.
Eighteenth and nineteenth century music rooms
and concert halls were still limited in width by the
clear span of timber trusses. Into the mid 19th
century, the shaping was still mostly empirical,
and the "shoebox" form was popular. Music of
the time was composed with these performance
rooms in mind, and these rooms provided a strong,
laterally-biased reverberation.
Opera grew up in acoustically "drier" spaces, with
the audience stacked along the side walls up to the
ceiling. Still, a complete absence of reflections is
not what was desired or designed. Beauty of tone
and some sense of room sound is important for
both the audience and the performers.
Chinese opera, typical of many Asian performing
arts, evolved outdoors. Here there is no sense of
indoor space, and not much in the way of reflecting
surfaces. The piercing vocal techniques, the
percussive orchestrations and the small audience
sizes have been influenced accordingly.
5.1. The 20th century
At the turn of the century, Sabine found a simple
relation between volume, area and sound decay
time. We know this as the reverberation time (RT
or T60), a one-dimensional parameter depending
on volume and area.
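For reference, Sabine's relation in its usual metric form is RT60 ≈ 0.161·V/A, with V the room volume and A the total absorption (surface areas times their absorption coefficients). The sketch below, including the example 50-seat room in the comment, uses assumed illustrative numbers.

```python
def sabine_rt60(volume_m3, surface_areas_m2, absorption_coeffs):
    """Sabine reverberation time estimate: RT60 ~= 0.161 * V / A,
    where A is the sum of surface area times absorption coefficient
    (metric units, air absorption neglected)."""
    total_absorption = sum(s * a for s, a in zip(surface_areas_m2, absorption_coeffs))
    return 0.161 * volume_m3 / total_absorption

# Example (hypothetical 50-seat recital room, roughly 10 m x 8 m x 6 m):
# floor 80 m2 (alpha 0.6 with audience), ceiling 80 m2 (0.1), walls 216 m2 (0.05)
# sabine_rt60(480.0, [80.0, 80.0, 216.0], [0.6, 0.1, 0.05])  -> about 1.2 s
```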
After considering the volume and area, the next
level of detail includes specific reflections.
Figure 5: Computer model of a concert hall (near wall cut away), showing primary sound reflection paths between the source on stage and a listener in the front seating area. The ray colours correspond to reflections up to 3rd order which arrive between 0 and 80 ms (cyan), 80 and 120 ms (yellow) and 120 and 240 ms (red) after the direct sound arrival.
Certain 1st-order reflectors arrive at the listener from
overhead. A low ceiling tends to promote low
reverberance, lack of envelopment and a generally
inadequate increase in loudness (as it directs
sound into the absorbing audience).
In the early 60s, Leo Beranek postulated the importance
of Initial Time Delay Gap, and this often
led to arrays of small reflectors suspended below
the ceiling. These allowed the ceiling to be higher,
in order to sustain reverberance, but the reverberation
was still addressed on the basis of:
– volume and area alone, or the number of seats;
– the twin assumptions that (a) a diffuse field is
perceptually desirable and (b) the late sound
field, in large perceptually-reverberant halls, is
diffuse.
Designs based on this approach resulted in several
wide fan-shaped and oval halls with overhead reflectors.
The halls of the early 1980s in Toronto
(Fig. 6) and San Francisco were not received well.
The analysis of what these halls are missing has
led us to look at the importance of running liveness
and the loudness of reverberation, and to take
more seriously the notion that envelopment is related
to the loudness of the lateral sound, both
early and late lateral sound.
The importance of lateral reflections was advanced
by Barron and Marshall. At first this
spawned "first-order designs" where wall elements
or applied wall panels were tilted downwards
and inwards in order to direct strong 1st-order
reflections to the centre of the audience.
Examples include halls in Christchurch (Fig. 7)
and Wellington (New Zealand), Nottingham
(England), Colorado Springs (USA) and Glasgow
(Scotland). One attribute of this sort of hall is a
faster decay and greater clarity, because the tilted
reflectors send the sound back into the audience.
This fact has been used to advantage in multipurpose
halls such as Colorado Springs and Basingstoke
(England) among others. The development
of reverberance in the auditorium is strong and lateral,
but dies away fairly quickly, which is good
for opera and musical theatre, where intelligibility
is important.
In realizing that the reverberant level was important,
we looked for ways to achieve strong lateralisation
of the sound, and a strong reverberant
level, or reverberation efficiency. The next step
was to design for 1st- & 2nd-order lateral reflections.
We can learn from the old rectangular halls
that narrow, tall "shoebox" spaces provide
1st-order side-wall and ceiling reflections, and
sustain the reverberance horizontally above the
audience plane. This can result in a muddy sound
if there is not enough early energy.
Adding a second and perhaps a third side-tier
soffit returns more energy immediately to the
lower levels. With appropriate dimensions, this
geometry adds strong 2nd-order lateral reflections
that promote clarity, envelopment and strength,
and it also retains the vertically-opposed surfaces
that sustain reverberance. This is reverberation
efficiency.
Figure 6: Roy Thomson Hall, Toronto (opened 1982, 2812 seats). Plastic reflectors above the performance platform were incorporated to provide early reflections in order to make up for the great distance between most of the audience and the side walls. The sound has great (some say, too much) clarity but lacks envelopment, strength and bloom.

Figure 7: Christchurch Town Hall, New Zealand (opened 1972, 2662 seats). Suspended reflecting surfaces at the sides are angled to provide lateral reflections to much of the audience. With so much sound directed initially into the audience, this hall does not sustain running liveness so well as one with vertical parallel walls.
A few older halls, such as Carnegie
Hall (New York), have the audience densely
stacked at the rear, and sparsely arranged on the
side tiers. This supports lateral energy and not
much extended front-back energy flow. In halls
where there are few people on either the side or
rear walls at high level, reverberance is developed
between the side walls and between the front and
rear walls, but there is a different time constant, or
group delay, between the two. This has been applied
to excellent effect in the design of contemporary
“rectangular hybrid†halls in Birmingham
and Manchester (Fig. 8).
5.2. Variable absorption
Listeners want to feel surrounded by reverberance
in the case of symphony, organ, and choral concerts,
in balance with an appropriate measure of
directional fidelity. In amplified events, the clarity,
intelligibility and directional fidelity are considered
more important.
Variable sound absorption systems affect the spatial
qualities as well as the time response. Spatial
definition can be controlled by varying the lateral
energy. When the absorbing system covers the
lateral reflection surfaces, the apparent source
width and envelopment are reduced, and the loudness
and clarity are reduced more than if the ceiling
were covered.
Likewise, reverberance can be controlled most
efficiently by covering the surfaces that are most
responsible for sustaining the reverberance: in the
case of a shoebox hall, the upper side walls.
5.3. Variable volume coupling
Coupled volumes have been used to provide extended
reverberance. In multipurpose halls,
coupled spaces have been developed with variable
success from "found space" such as stage fly
space. In concert halls, coupled volumes have
surrounded the top of the room (Fig. 9). New
designs will bring the chamber down lower
around the performers and audience.
This leads to a consideration of variable dimensions.
Movable ceilings have been incorporated
in quite a few facilities in order to provide variable
height. Often the resulting variation in volume
drove the design criteria. Variable width is also
being considered.
5.4. Electronic spatial control
The developments outlined in Section 5 are leading
towards an ability to tailor the acoustical spaciousness
of a room, much as we have been tailoring
the decay rate. Just as our control of time
response has moved from a period of architectural
development into electronic solutions, so our
control of spatial aspects is moving through a
stage of mechanical/architectural control systems
into electronic mimicry of the architectural solutions.
Mr Robert Essert is an acoustician specializing in the design of concert halls and theatres, and in research
and development of modelling and measurement technology. He holds a BS in Engineering and Music
from Yale University, USA, and an MSc in Mechanical Engineering from the University of Texas, Austin.
From 1980 to 1996 he worked at Artec Consultants, concentrating on design and research in concert hall
and theatre acoustics. His projects there included the acoustical design of the Ford Performing Arts
Centre in North York, Ontario; the Chan Shun Concert Hall at UBC in Vancouver; the R.F. Kravis Center
in West Palm Beach, and the development of 3D measurement room modelling and measurement software
and systems.
From 1997 to the present time, Bob Essert has been with Arup Acoustics in London where his current projects
include a new concert hall in Gateshead, UK, a new lyric theatre in Cardiff, Wales, and renovations
to the Hackney Empire Theatre, London. Under his guidance, Arup Acoustics is developing a 3D auralisation
studio to complement the group’s acoustics consulting work.
Mr Essert is a member of the Institute of Acoustics, the Acoustical Society of America, the Audio Engineering
Society and the International Society for the Performing Arts. He is a founding member of the Concert
Hall Research Group.
Figure 8: Bridgewater Hall, Manchester (opened 1996, 2400 seats). A hybrid design with sparsely-populated side tiers whose soffits work with the side walls to serve as "2nd-order" lateral reflectors.
Electronic control is beginning to address areas
that are not, or cannot be, dealt with architecturally,
such as:
– surround sound effects;
– surround sound cinema;
– home theatre;
– virtual environments;
– variable spaciousness;
– real-time direct performer control (e.g. MIT
Medialab's "Hyperinstruments").
6. Conclusions
In this article we have reviewed some aspects of
spatial hearing, spatial measurements, modelling
and auralisation. We have also looked at how
auditorium designs have responded to the growing
understanding of spatial sound. Increased understanding
of the links between architecture and
acoustics is allowing greater progress in translating
acoustical goals into room shapes. As hall designers
we have become more proactive, with the
acoustical characteristics of room-shaping playing
a more important role in the overall design.
Bibliography
[1] Sabine, W.C.: Reverberation. The American Architect, 1900. (Republished in Collected Papers on Acoustics, Dover Publications, 1964.)
[2] Marshall, A.H.: A note on the importance of room cross section in concert halls. J. Sound Vib., 5, pp. 100–112, 1967.
[3] Barron, M.: The subjective effects of first reflections in concert halls – the need for lateral reflections. J. Sound Vib., 15, pp. 475–494, 1971.
[4] Schroeder, M., Gottlob, D. and Siebrasse, K.: Comparative study of European concert halls: correlation of subjective preference with geometric and acoustic parameters. J. Acoust. Soc. Am., Vol. 56, p. 1195, 1974.
[5] Blauert, J.: Spatial Hearing. MIT Press, Cambridge, MA, 1983.
[6] Ando, Y.: Concert Hall Acoustics. Springer Verlag, Berlin, 1985.
[7] Griesinger, D.: Quantifying musical acoustics through audibility. J. Acoust. Soc. Am., Vol. 94, p. 1891, 1993.
[8] de V. Keet, W.: The influence of early lateral reflections on spatial impression. Proc. 6th International Congress on Acoustics, Tokyo, 1968.
[9] Bradley, J.S.: Contemporary approaches to evaluating auditorium acoustics. Proc. AES International Conference, 3–6 May 1990.
[10] Beranek, L.: How They Sound – Concert and Opera Halls. Acoustical Society of America, Woodbury, New York, 1996.
[11] Gerzon, M.: General Metatheory of Auditory Localisation. Preprint 3306 of the 92nd AES Convention, Vienna, March 1992.
Figure 9: Top-view diagram of the Meyerson Symphony Center in Dallas (opened 1989, 2065 seats). A partially-covered reverberation chamber (shown in green) wraps around the upper part of the hall. The flow of sound energy between the audience chamber and the outer chamber is controlled with a set of large concrete doors. This approach has provided variability of, and independence between, the clarity and reverberance. A similar approach was used in the Symphony Hall, Birmingham.
roberto sanchez, RCDD

Facilius per partes in cognitionem totius adducimur. (Seneca) – It is easier to understand by parts than to understand the whole.


 
 

