Saturday, 20 November 2010
Adding Sound Effects and Audio controls to Sony Movie Studio
http://www.youtube.com/watch?v=BRKh9VJowLY&feature=related
Friday, 19 November 2010
How to sync audio and video in Sony Vegas
http://www.youtube.com/watch?v=y3Hk2-zYoE8&feature=related
Thursday, 18 November 2010
Monday, 15 November 2010
Friday, 12 November 2010
Thursday, 11 November 2010
Overdrive pedal
http://www.youtube.com/watch?v=Gpqc984Hg7Q
Labels:
bass guitar,
guitar effects,
Overdrive Pedal,
playing a guitar
Tuesday, 9 November 2010
Friday, 5 November 2010
What is a DI box, and when is it used?
DI Box - What is It?
This content is brought to you by Audiocourses dot com
Recording direct is also known as Direct Injection or DI. The electric guitar produces an electrical signal, so it can be plugged right into the mixing console - no microphone is needed. Because the mic and guitar amp are bypassed, the sound is clean and clear; it lacks the distortion and colouration of the amp.
You need a DI box because there is a frequent requirement to interface equipment that has non-standard, unbalanced outputs with the standard balanced inputs of mixers, either at line level or microphone level. An electric guitar, for example, has an unbalanced output of fairly high impedance - around 10 kilo ohms or so. The standard output socket is the 'mono' quarter-inch jack, and output voltage levels of around a volt (with the guitar's volume controls set to maximum) can be expected.
Plugging the guitar directly into the mic or line level input of a mixer is unsatisfactory for several reasons:
* the input impedance of the mixer will be too low for the guitar, which likes to drive impedances of 500 kilo ohms or more;
* the guitar output is unbalanced so the interference-rejecting properties of the mixer's balanced input will be lost;
* the high output impedance of the guitar renders it incapable of driving long studio tie-lines;
* and the guitarist will frequently wish to plug the instrument into an amplifier as well as the mixer, and simply using the same guitar output to feed both via a splitter lead electrically connects the amplifier to the studio equipment which causes severe interference and low-frequency hum problems.
Similar problems are encountered with other instruments such as synthesisers, electric pianos, and pickup systems for acoustic instruments.
To connect such an instrument with the mixer, a special interfacing unit known as a DI box (DI = direct injection) is therefore employed. This unit will convert the instrument's output to a low-impedance balanced signal, and also reduce its output level to the millivolt range suitable for feeding a microphone input. In addition to the input jack socket, it will also have an output jack socket so that the instrument's unprocessed signal can be passed to an amplifier as well. The low-impedance balanced output appears on a standard three-pin XLR panel-mounted plug which can now be looked upon as the output of a microphone.
An earth-lift switch is also provided which isolates the earth of the input and output jack sockets from the XLR output, to trap earth loop problems.
Passive DI box
The simplest DI boxes contain just a transformer, and are termed 'passive' because they require no power supply. The transformer in this case has a 20:1 step-down ratio, converting the fairly high output of the instrument to a lower output suitable for feeding microphone lines. Impedance is converted according to the square of the turns ratio (400:1), so a typical guitar output impedance of 15 kilo ohms will be stepped down to about 40 ohms which is comfortably low enough to drive long microphone lines. But the guitar itself likes to look into a high impedance.
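The step-down arithmetic above can be sketched in a few lines of Python. The 15-kilohm source and 1 V level are the illustrative figures from the text, not properties of any particular guitar:

```python
# A sketch of the passive DI box arithmetic described above, using the
# illustrative figures from the text (15 kilohm source, 20:1 transformer).

def reflected_impedance(z_source_ohms, turns_ratio):
    """Impedance seen at the transformer secondary: stepped down by the
    square of the turns ratio."""
    return z_source_ohms / turns_ratio ** 2

def stepped_down_voltage(v_in, turns_ratio):
    """Voltage is stepped down linearly by the turns ratio."""
    return v_in / turns_ratio

TURNS_RATIO = 20      # 20:1 step-down, as in the text
guitar_z = 15_000     # typical guitar output impedance, ohms
guitar_v = 1.0        # ~1 V output with the volume control at maximum

z_out = reflected_impedance(guitar_z, TURNS_RATIO)
v_out = stepped_down_voltage(guitar_v, TURNS_RATIO)

print(f"output impedance: {z_out:.1f} ohms")    # 37.5 ohms - "about 40"
print(f"output level: {v_out * 1000:.0f} mV")   # 50 mV - mic-level range
```

Note the asymmetry: voltage falls by the turns ratio (20:1) while impedance falls by its square (400:1), which is why one transformer can solve both the level problem and the impedance problem at once.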
The transformer isolates the instrument from phantom power on the microphone line.
This type of DI box design has the advantages of being cheap, simple, and requiring no power source - there are no internal batteries to forget to change. On the other hand, its input and output impedances depend entirely on the impedances reflected through each side of the transformer. An unusually low microphone input impedance at the mixer will be reflected back as an input impedance too low for many guitars. Also, instruments with passive volume controls can exhibit output impedances as high as 200 kilo ohms with the control turned down a few numbers from maximum, and this will make the output impedance of the DI box too high for driving long lines. The fixed turns ratio of the transformer is not equally suited to the wide variety of instruments the DI box will encounter, although several units have additional switches which alter the transformer tapping to give different degrees of attenuation.
Active DI box
The active DI box replaces the transformer with an electronic circuit which presents a constant very high impedance to the instrument and provides a constant low-impedance output. The box is powered either by internal batteries, or preferably by the phantom power on the microphone line.
If batteries are used, the box should include an indication of battery status; a 'test' switch is often included which lights an LED when the battery is good. Alternatively, an LED comes on as a warning when the voltage of the battery drops below a certain level. The make-and-break contacts of the input jack socket are often configured so that insertion of the jack plug automatically switches the unit on. One should be mindful of this, because if the jack plug is left plugged into the unit overnight, for instance, it will waste battery power. Usually the current consumption of the DI box is just a few milliamps, so the battery will last for perhaps a hundred hours.
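The battery-life figure above is simple arithmetic. A hedged sketch, where the 500 mAh capacity is an assumed value for a typical 9 V alkaline battery (the text only states the current draw and the resulting lifetime):

```python
# Rough battery-life estimate for an active DI box. The current draw and the
# "perhaps a hundred hours" lifetime come from the text; the 500 mAh capacity
# is an assumed figure for a typical 9 V alkaline battery.

battery_capacity_mah = 500   # assumed capacity of a 9 V alkaline battery
current_draw_ma = 5          # "just a few milliamps", per the text

battery_life_hours = battery_capacity_mah / current_draw_ma
print(battery_life_hours)    # 100.0
```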
Some guitar and keyboard amplifiers offer a separate balanced output on an XLR socket labelled 'DI' or 'studio' which is intended to replace the DI box, and it is often convenient to use this instead.
DI boxes are generally small and light, and they spend much of their time on the floor being kicked around and trodden on by musicians and sound engineers. Therefore, rugged metal boxes should be used (not plastic) and any switches, LEDs, etc. should be mounted such that they are recessed or shrouded for protection. Switches should not be easily moved by trailing guitar leads and feet. The DI box can also be used for interfacing domestic hi-fi equipment such as cassette recorders and radio tuners with balanced microphone inputs.
Related: Looking for a good DI Box
More info
Ohm
impedance
Labels:
DI box,
GUITAR,
musical instruments
Wednesday, 3 November 2010
About VST Instruments
http://www.soundonsound.com/sos/dec00/articles/vst.asp
Labels:
VST INSTRUMENTS Cubase
Tuesday, 2 November 2010
A Basic Guide For Mixing Rock Music
When mixing music, especially when you’re just getting started, it’s helpful to have a guide--some kind of step-by-step plan in front of you to follow. That is the purpose of this brief article. Below you will find the basic steps I use every time I mix a song. I hope you find it helpful.
As you begin, it’s helpful to remember that mixing is an art form. There is a bit of science involved, and the more you learn about the laws of acoustics, frequencies, etc., the better you will become at mixing. But in the end, it all comes down to the ears of the mixing engineer.
This article is just a very basic guide. I would encourage you to do further study on musical frequencies, EQing audio, acoustics, using a compressor, etc. You can do a web search for just about any aspect of the mixing process and find loads of helpful articles on the subject. Become a student of mixing! It will pay off big-time. In the meantime, this guide will get you started in the right direction.
NOTE: This is general advice, so feel free to ignore it if you are going for a specific sound.
FIRST--RECORDING: I'm assuming you have all the tracks recorded. Don’t devalue the importance of getting a great recording. Learn all you can about acoustics, rooms, miking techniques, etc. A great recording is easy to mix and master. A bad recording can easily become a nightmare that all the mixing in the world can’t fix.
SECOND--EQing: You want to cut out any frequencies below 100 Hz from all instruments except the BASS GUITAR... you want the bass to carry those low frequencies. By cutting these frequencies out of the other tracks (guitars, keyboard, vocals, etc.) you will: a) keep your mix from becoming muddy down low, and b) allow the bass guitar to really shine through in the mix. You may or may not want to apply this cut to your kick drum. It all depends on the recorded sound of the kick drum and the sound you are going for. But generally speaking, kick drums benefit from rolling off all the frequencies under 50 Hz. These are largely inaudible frequencies. They are important—in that they are “felt”—but they can also be “trouble makers” because they can easily overpower a mix, making it “boomy” and hard to manage. So, rolling these low frequencies off of everything except the bass guitar is generally a good thing. You can also give the kick drum a boost around 10 kHz to enhance the sound of the beater and get that cool “click” sound so popular in modern hard rock and metal music.
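As a rough illustration of the low cut described above, here is a minimal first-order high-pass filter in pure Python. A real mixing EQ would use a steeper, better-behaved filter; this sketch only shows the idea of rolling off content below 100 Hz:

```python
import math

def one_pole_highpass(x, cutoff_hz, fs):
    """First-order high-pass: passes content above cutoff_hz and rolls off
    content below it at 6 dB per octave."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / fs
    alpha = rc / (rc + dt)
    y = [0.0] * len(x)
    for n in range(1, len(x)):
        y[n] = alpha * (y[n - 1] + x[n] - x[n - 1])
    return y

def rms(x):
    """Root-mean-square level of a block of samples."""
    return math.sqrt(sum(v * v for v in x) / len(x))

fs = 44100

def tone(freq_hz):
    """One second of a unit-amplitude sine wave at freq_hz."""
    return [math.sin(2 * math.pi * freq_hz * n / fs) for n in range(fs)]

for f in (50, 1000):
    x = tone(f)
    y = one_pole_highpass(x, 100, fs)
    # Measure steady-state gain on the second half, after the filter settles.
    print(f"{f:5d} Hz -> gain {rms(y[fs // 2:]) / rms(x[fs // 2:]):.2f}")
```

A 50 Hz tone comes through at roughly half its original level while a 1 kHz tone passes essentially untouched; a steeper slope would attenuate the lows far more, but the principle is the same.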
THIRD--DO AN INITIAL ROUGH MIX: Set the instruments and vocals in the stereo field where you want them by panning things to the left and right to taste. Always keep your kick drum, bass guitar, and lead vocal in the dead center. And set the volume of each track to a level that sounds relatively good to you. THIS IS YOUR STARTING POINT. From this point on you begin adding effects and tweaking things to taste.
FOURTH--MORE EQing: At this point you are going to be making smaller changes to tracks in order to help them blend and work together. You won’t be cranking on the faders. You’ll be much more subtle in your approach. The best method for tweaking tracks and getting them to blend is “negative EQing.” In negative EQing you cut frequencies at specific places in order to let other instruments shine through in the mix.
This is where you need a “spectrum analyzer.” You can pick up a VST plugin version online for free or very cheap. Just do a search for “Free Spectrum Analyzer.” I would also recommend that you pick up a parametric EQ. A graphic EQ will work for this, but a parametric is better. If you aren’t sure what the difference between the two is, do some research. Look up some articles on both and educate yourself.
Now, let’s say you want your vocals to shine through. You would first solo your vocal track and turn the spectrum analyzer on in that track. Observe the frequency range for the vocal track. Where is it the strongest and best sounding? Most vocals are strong and sound best somewhere between 1 and 6 kHz. So, you identify the “sweet spot” in your vocal track … say it’s at 4 kHz. You would then go in and cut the 4 kHz frequency in the "competing" tracks (guitars, bass, etc.), minus the drums – I generally don’t mess with the drums. You don’t need to make a BIG cut… just about 1 dB. You’ll be amazed what a small amount will do. Then go listen and see if your vocal cuts through better. If you’re happy—great. If you want more, simply go in and make the cut a little bigger. Continue this process until you get it dialed in.
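The “find the sweet spot” step can be sketched programmatically. The example below stands in for the spectrum analyzer: it scans a handful of candidate frequencies with the Goertzel algorithm (a one-bin DFT) and reports where a track’s energy is concentrated. The synthetic “vocal” is just two sine tones, an assumption for illustration:

```python
import math

def goertzel_power(x, fs, target_hz):
    """Signal power near target_hz, via the Goertzel algorithm (a one-bin DFT)."""
    n = len(x)
    k = round(n * target_hz / fs)     # nearest DFT bin to the target frequency
    coeff = 2 * math.cos(2 * math.pi * k / n)
    s1 = s2 = 0.0
    for sample in x:
        s = sample + coeff * s1 - s2
        s2, s1 = s1, s
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def sweet_spot(x, fs, candidates_hz):
    """Pick the candidate frequency where the track carries the most energy."""
    return max(candidates_hz, key=lambda f: goertzel_power(x, fs, f))

fs = 16000
# A stand-in "vocal": strongest component at 4 kHz, a weaker one at 1 kHz.
vocal = [math.sin(2 * math.pi * 4000 * n / fs)
         + 0.3 * math.sin(2 * math.pi * 1000 * n / fs)
         for n in range(fs)]

bands = [1000, 2000, 3000, 4000, 5000, 6000]  # scan 1-6 kHz, per the text
print(sweet_spot(vocal, fs, bands))           # 4000
```

In a real session you would run the analyzer on the actual vocal recording; the point is only that “find the strongest band, then cut that band slightly in the competing tracks” is a mechanical, repeatable step.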
FIVE—REPEAT THIS PROCESS FOR EACH INSTRUMENT: Locate the “sweet spot” frequency – the place where that instrument sounds best – and make a little cut in the competing instruments to allow it to shine through.
A COUPLE OBVIOUS THOUGHTS:
Yes—you will occasionally run into a situation where two instruments sound best within the same frequency range. In this case—you simply have to experiment. Try cutting frequencies here and there (not in a permanent, destructive fashion—of course) and see what sounds best. Oftentimes you’ll find that the cuts you make won’t necessarily sound good when the track is soloed, but when everything is played together it sounds great!
Also—you will want to give special priority to certain tracks. For example, your rhythm guitars can handle a lot of cuts and still provide a nice solid rhythm section. So don’t stress about making the rhythm guitars “shine through” in the mix. You want to focus on the “spotlight” tracks like lead vocals, lead guitar, and any other track that you just feel is especially important to the sound of that song.
Feel free to apply ANY effects – phase, flange, chorus, reverb, delay, compression-- etc., you like in the mixing stage--THIS IS THE TIME TO DO IT. It won't affect the mastering process at all.
Once you get a track sounding the way you want it, mix the whole song down to a single stereo WAV file and send me that file.
THE ONE THING YOU DO NOT WANT is "distortion"... i.e. when you listen to the tracks--you shouldn't hear them "crackling" or "breaking up" anywhere... if you want to know what digital distortion sounds like... just crank all your tracks up above "0 dB" and you'll get a taste... it's nasty... and mastering can't fix it.
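Digital clipping is easy to demonstrate: once a sample is pushed past full scale (0 dBFS, i.e. ±1.0 in floating point), the waveform is flattened at the rail, and no later processing can recover the lost shape. A minimal sketch:

```python
import math

def clip(x, limit=1.0):
    """Hard-clip samples to the converter's full-scale range (+/- limit)."""
    return [max(-limit, min(limit, v)) for v in x]

fs = 1000
sine = [math.sin(2 * math.pi * 5 * n / fs) for n in range(fs)]

hot = [2.0 * v for v in sine]   # "cranked above 0 dB": twice full scale
clipped = clip(hot)

print(max(clipped))                               # 1.0 - the peaks are flattened
print(sum(1 for v in clipped if abs(v) == 1.0))   # many samples stuck at the rail
```

The flattened tops are exactly the “crackling” the text warns about: they add harsh harmonics that were never in the performance, which is why mastering cannot fix them.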
Labels:
mixing lessons,
mixing rock music,
tutorial for mixing
Thursday, 28 October 2010
How to Use a Guitar Effect Pedal: Harmonizer & Pitch Shift Effects for the Electric Guitar
http://www.youtube.com/watch?v=EcXnB1MsjKI
Labels:
Guitar Effect Pedal,
Harmonizer,
How to,
Pitch Shift Effects
Wednesday, 27 October 2010
Friday, 22 October 2010
Thursday, 21 October 2010
Wednesday, 20 October 2010
ADR
ADR stands for "Automated" or "Automatic" Dialog Replacement.
Dialogue that cannot be used from the recording made on location must be re-recorded. This process of re-recording is called looping or ADR.
Looping originally involved recording an actor who spoke lines in sync to "loops" of the image, which were played over and over along with matching lengths of recording tape. ADR, though faster, is still painstaking work.
The actor watches the picture repeatedly while listening to the original recording on headphones as a reference. The actor then re-performs each line to match the lip movements. Actors vary in their ability to achieve sync and to recapture the emotional tone of their original performance.
Marlon Brando liked this type of dubbing, because he did not like to commit to a performance until he had fully grasped its final context. (Some people even say that he mumbled on purpose to make the production recording unusable, so that he could re-voice it in the studio.)
ADR is usually considered a necessary evil, but there are times when looping can be used not for technical reasons, but to add a new sound or interpretation. Just by changing a few key words or phrases, an actor can bring a new emotional quality to a scene.
Labels:
adr
Monday, 30 August 2010
Wednesday, 7 July 2010
Searching for music
Finding music by similar criteria
Finding music
Other book sites
www.bookarmy.com
azcheta.com
makingof.com
Saturday, 19 June 2010
What is critical listening?
http://www.berkleemusic.com/school/course?course%5fitem%5fid=2141033&usca%5fp=t
Thursday, 29 April 2010
http://www.youtube.com/watch?v=tXlKWYz4lNk - Best of ...
http://en.wikipedia.org/wiki/Progressive_rock
Wednesday, 28 April 2010
Monday, 19 April 2010
Thursday, 15 April 2010
Soundscape
http://en.wikipedia.org/wiki/Soundscape
Bleach OST
http://www.youtube.com/watch?v=rqNOrvPOA8Q&feature=related
Monday, 12 April 2010
Thursday, 1 April 2010
Wednesday, 31 March 2010
About DVD formats
http://www.webopedia.com/DidYouKnow/Hardware_Software/2007/DVDFormatsExplained.asp
Thursday, 25 March 2010
Tuesday, 16 March 2010
Akira Yamaoka
http://bg.wikipedia.org/wiki/%D0%90%D0%BA%D0%B8%D1%80%D0%B0_%D0%AF%D0%BC%D0%B0%D0%BE%D0%BA%D0%B0
Friday, 12 March 2010
Monday, 22 February 2010
Tuesday, 16 February 2010
Friday, 12 February 2010
A Discourse on the Method of Rightly Conducting One's Reason and of Seeking Truth in the Sciences
Contents
If this discourse seems to some too long to be read at one sitting, it may be divided into six parts. In the first, the reader will find various considerations concerning the sciences; in the second, the principal rules of the method which the author has sought out; in the third, some of the rules of morals he has derived from this method; in the fourth, the arguments by which he proves the existence of God and of the human soul, which form the foundations of his metaphysics; in the fifth, a sequential account of the questions of physics he has investigated, and in particular the explanation of the motion of the heart and of some other difficult questions belonging to medicine, and then also the difference between our soul and that of animals; and in the last, what the author believes is needed in order to advance still further in the investigation of nature, as well as the reasons that moved him to write.
Part One
Considerations Concerning the Sciences
Good sense is of all things in the world the most equitably distributed; for everyone thinks himself so abundantly provided with it that even those who are the hardest to satisfy in every other respect do not usually desire more good sense than they already have. It is unlikely that all are mistaken in this. It rather testifies that the power of judging rightly and of distinguishing truth from falsehood, which is properly what we call good sense or reason, is by nature equal in all men; and also that the diversity of our opinions arises not because some people are more reasonable than others, but solely because we direct our thoughts along different paths and do not attend to the same things. For it is not enough to have a good mind; the main thing is to apply it well. The greatest souls are capable of the greatest vices as well as of the highest virtues; and those who walk very slowly but always follow the straight road can make much greater progress than those who run but stray from it.
As for myself, I have never presumed that my mind was in any way more perfect than that of ordinary men; indeed, I have often wished that my thought were as quick, or my imagination as clear and distinct, or my memory as rich and ready, as those of some other people. And beyond these I know of no other qualities that serve to perfect the mind; for since it is reason or good sense alone that distinguishes us from the animals and makes us human, I am inclined to believe that it exists whole and entire in each of us, and to follow in this the common opinion of the philosophers, who say that there can be more or less only among the accidents, and not among the forms or natures of individuals of the same species.
But I shall not hesitate to say that I consider myself very fortunate to have found myself, from my youth, on paths that led me to considerations and maxims from which I formed a method. It seems to me that by its help I can gradually increase my knowledge and raise it, little by little, to the highest point that the mediocrity of my mind and the brevity of my life will allow it to reach. For I have already gathered such fruits from this method that, although in judging myself I always try to lean towards diffidence rather than presumption, and although, looking with a philosopher's eye at the actions and undertakings of men, they almost all seem to me vain and useless, I cannot but feel great satisfaction at the progress I think I have already made in the search for truth, and cannot but entertain the same hopes for the future. And if among purely human occupations there is one that is truly good and important, I dare to believe it is the one I have chosen.
Възможно е обаче да се лъжа и да вземам за злато и за диаманти онова, което може би е само малко мед и стъкло. Зная колко сме склонни да се самоизмамваме по отношение на нещата, които засягат самите нас, и колко подозрителни трябва да ни изглеждат съжденията на нашите приятели, когато са в наша полза. Но в тази беседа с удоволствие ще покажа пътищата, които следвах, като представя живота си като на картина. А аз, като разбера от хорската мълва какви са мненията за тях, ще прибавя едно ново средство за поука към онези, с които съм навикнал да си служа.
Така че аз не възнамерявам да преподавам тук метода, който всеки трябва да следва, за да ръководи правилно своя разум, а само да покажа по какъв начин аз съм се старал да ръководя своя разум. Тези, които си наумят да дават предписания, трябва да смятат себе си за по-изкусни от онези, на които ги дават, и ако допуснат и най-малката грешка, те следва да бъдат порицани. Но тъй като аз предлагам това съчинение само като история или ако предпочитате, като приказка, където наред с някои примери, достойни за подражание, ще се намерят може би и много други, които с пълно право няма да бъдат следвани, надявам се, че то ще бъде полезно за някои хора, без да бъде вредно за никого, и че всички ще ми бъдат благодарни за моята откровеност.
Още от дете аз бях закърмен с науките и понеже ми внушаваха, че чрез тях може да се придобие ясно и сигурно познание за всичко полезно в живота, имах извънредно голямо желание да ги изуча. Но щом завърших курса на обучение, в края на който човек обикновено бива приет в редове на учените, аз напълно промених мнението си. Измъчваха ме толкова съмнения и заблуди, че ми се струваше, че усилията да се уча ми бяха донесли само една полза, а именно все повече бях откривал своето невежество. А при това аз учех в едно от най-прочутите училища в Европа и мислех, че ако някъде по света има учени мъже, те трябва да се намират именно в него. Тук бях изучил всичко, което учеха и другите, и като не се задоволявах с преподаваните науки, дори бях прегледал всички попаднали ми под ръка книги, които се занимаваха с науки, смятани за най-любопитни и най-редки. При това знаех какво мнение имаха другите за мен и не виждах ни най-малко те да ме поставят по-долу от моите съученици, макар между тях да имаше вече няколко, които бяха определени за бъдещи заместници на нашите учители. И, най-после, нашият век ми се струваше така цветущ, така богат на големи умове, както никой от предишните. Ето защо се осмелявах да съдя за другите по себе си и да мисля, че в света няма такова учение, каквото отначало ми бяха внушили да очаквам.
Въпреки това обаче аз продължавах да ценя упражненията, с които се занимават в училищата. Аз съзнавах, че езиците, изучавани в тях, са необходими за разбиране на древните книги; че прелестните измислици събуждат ума; че паметните дела, описани в историята, го възвисяват и ако се четат разумно, помагат за формирането на разсъдъка; че четенето на хубавите книги е като разговор с най-блестящо надарените хора на миналото, които са техни автори, и дори един предварително подготвен разговор, в който те ни разкриват само най-добрите си мисли; че красноречието притежава несравнима сила и красота; че поезията има завладяваща прелест и сладост; че в математиката има твърде изкусни открития, конто могат да бъдат много полезни както за задоволяване на любознателните, така и за улесняване на всички занаяти и за намаляване труда на хората; че съчиненията върху нравствеността съдържат много полезни поучения и подтикват към добродетелност; че теологията учи как да достигнем царството небесно; че философията ни дава възможност да говорим правдоподобно за всякакви неща и да предизвикваме възхищение у по-малко знаещите; че юриспруденцията, медицината и останалите науки носят почести и богатства на онези, които се занимават с тях; и най-после, че е добре да сме запознати с всички науки - дори и с най-суеверните и най-лъжливите, - за да разберем истинската им стойност и да не се поддаваме на тяхната измама.
Но струваше ми се, че вече съм отделил достатъчно време на езиците и дори на четенето на древни книги и на техните истории и измислици. Защото да разговаряш с писатели от миналите векове е почти същото, както да пътуваш. Добре е да знаем някои неща за нравите на различни народи, за да можем да съдим по-трезво за нашите и да не мислим, че всичко противно на нашите обичаи е смешно и неразумно, както имат навика да правят онези, които нищо не са видели. Но когато човек употребява много време, за пътуване, накрая се отчуждава от собствената си страна, а когато проявява прекалено голям интерес към нещата, станали в миналите векове, обикновено остава твърде неосведомен за онова, което се върши в сегашния. Освен това приказките карат хората да си представят като възможни редица събития, които съвсем не са такива, а и най-достоверните исторически описания, дори и да не променят и да не пресилват значението на нещата, за да ги направят по-достойни за четене, почти винаги пропускат ако не друго, то поне най-долните и най-безславните обстоятелства. Именно поради това останалото не изглежда такова, каквото е в действителност, и онези, които съобразяват поведението си с извличаните оттам примери, могат да изпаднат в чудатостите на рицарите от нашите романи и да започнат да кроят планове, надвишаващи силите им.
Аз високо ценях красноречието и бях влюбен в поезията, но смятах, че и двете са по-скоро дарба на ума, отколкото плод на ученето. Хората, които разсъждават най-здраво и най-добре подреждат мислите си, за да ги направят ясни и разбираеми, винаги най-добре могат да ви убедят в своите предложения, макар и да говорят на долнобретонски и никога да не са изучавали реторика. А ония, които създават най-приятни измислици и умеят да ги изразяват най-цветисто и най-нежно, непременно ще бъдат най-добри поети, макар и да не познават поетическото изкуство.
Особено ми харесваше математиката заради сигурността и очевидността на нейните доводи. Но тогава аз все още не виждах истинското й приложение и като мислех, че тя служи само на техническите изкуства, учудвах се защо, след като тя има такива здрави и твърди основи, досега върху нея не е било изградено нищо по-възвишено. И, обратно, писанията на древните езичници аз сравнявах с твърде великолепни и пищни дворци, построени само върху пясък и кал. Те превъзнасят добродетелите и ги представят за най-ценното от всичко на света, но недостатъчно ни учат как да ги разпознаваме и често онова, което наричат с толкова красивото име "добродетел", е само безчувственост, гордост, отчаяние или отцеубийство.
Аз благоговеех пред нашата теология и исках като всички други да достигна царството небесно. Но като разбрах, и то съвсем сигурно, че пътят към него е открит не по-малко за най-невежите, отколкото за най-учените и че откровенията, които водят към небето, надвишават нашия разум, аз не се осмелих да ги подложа на преценката на моите слаби разсъждения и смятах, че за да се наеме човек да ги проучи и да успее в това, той трябва да получи някаква необикновена подкрепа от небето и да бъде нещо повече от човек.
За философията ще кажа само едно: като виждах, че макар тя да е била разработвана в продължение на много векове от най-превъзходни умове, в нея все още няма нищо, което да не е спорно и следователно съмнително, аз не бях достатъчно самонадеян, за да разчитам на по-голям успех от останалите. А като имах пред вид колко много и различни мнения, поддържани от учени хора, може да съществуват в нея по един и същ въпрос, смятах едва ли не за погрешно всичко, което беше само правдоподобно.
По-нататък, що се отнася до останалите науки, доколкото те заимствуват своите начала от философията, аз смятам, че нищо здраво не може да се изгради върху толкова слаби основи. И нито почестите, нито печалбите, които те обещават, не бяха достатъчни, за да ме накарат да ги изучавам. Защото, слава богу, моето положение не ме принуждаваше да правя от науката занаят с цел да осигуря имотното си състояние. И макар да не си давах вид на човек, презиращ славата, както правят циниците, аз отдавах твърде малко значение на онази, която можех да придобия без никакво право. Най-после, що се отнася до лошите учения, мислех, че вече зная достатъчно добре какво, струват те, за да не бъда подмамен, нито от обещанията, на някой алхимик, нито от предсказанията на някой астролог, нито от лъжите на някой магьосник, нито от фокусите и хвалбите на когото ида било от ония, които се хвалят, че знаят повече, отколкото знаят в действителност.
Ето защо веднага щом възрастта ми позволи да се освободя от опеката на моите преподаватели, аз изоставих изцяло изучаването на хуманитарните науки (l'etude des lettres). И като реших да не търся вече никаква, друга наука освен тази, която бих могъл да намеря в самия себе си или пък в голямата книга на света, останалата част от младостта си използувах, за да пътувам да видя разни дворове и армии, да общувам с хора с различни характери и различно обществено положение, да натрупам разнообразен опит, да изпитам себе си в срещите, конто съдбата ми предложи, и навсякъде така да размишлявам върху разкриващите се пред мен неща, че да извлека от това някаква полза. Защото струваше ми се, че бих могъл да намеря много повече истина в разсъжденията на всеки един относно онези дела, които са важни за него и чийто изход скоро след това сигурно ще го накаже, ако е съдил погрешно, отколкото в кабинетните разсъждения на един книжен учен, отнасящи се до безполезни спекулации, единственият резултат от които е, че той може би ще се перчи с тях толкова повече, колкото по-далеч те стоят от здравия смисъл, защото в стремежа си да ги направи правдоподобни, той ще употреби повече ум и изкусност. А аз винаги съм имал извънредно голямо желание да се науча да различавам истината от неистината, за да имам ясен поглед върху постъпките си и да вървя уверено в този живот.
Вярно е, че докато само наблюдавах нравите на другите хора, аз не намерих нищо, което да ми внуши увереност, и забелязах в тях почти такова голямо разнообразие, каквото преди това бях установил в мненията на философите. По такъв начин най-голямата полза, която извлякох, бе тази, че се научих да не вярвам прекалено твърдо на онова, което ми е било внушено само чрез пример и обичай, понеже виждах как много неща, които ни изглеждат крайно необичайни и смешни, все пак се приемат и одобряват от други велики народи. Така постепенно се отърсвах от много заблуди, които могат да затъмнят нашата природна светлина и да ни направят по-малко способни да се вслушваме в гласа на разума. Но след като употребих няколко години да уча по този начин от книгата на света и се стремях да придобия известен опит, един ден реших да изуча също така и себе си и да използувам всичките си умствени сили, за да избера пътищата, които трябва да следвам. Струва ми се, че това ми се удаде много по-добре, отколкото ако не бях се отдалечавал никога от страната си или от книгите си.
If this discourse seems too long to be read at one sitting, it may be divided into six parts. In the first, the reader will find various considerations concerning the sciences; in the second, the principal rules of the method the author has sought out; in the third, some of the moral rules he has derived from this method; in the fourth, the arguments by which he proves the existence of God and of the human soul, which are the foundations of his metaphysics; in the fifth, the order of the questions in physics he has investigated, and in particular the explanation of the motion of the heart and of some difficult questions belonging to medicine, and then also the difference between our soul and that of the animals; and in the last, what the author believes necessary in order to advance further in the investigation of nature, as well as the reasons that have led him to write.
Part One
Considerations concerning the sciences
Good sense is, of all things in the world, the most equally distributed; for everyone thinks himself so abundantly provided with it that even those who are the hardest to satisfy in every other matter do not usually desire more of it than they already have. It is unlikely that all are mistaken in this. It rather testifies that the power of judging rightly and of distinguishing the true from the false, which is properly what we call good sense or reason, is by nature equal in all men; and that our opinions differ not because some men are more reasonable than others, but only because we direct our thoughts along different paths and do not attend to the same things. For it is not enough to have a good mind; the main thing is to apply it well. The greatest souls are capable of the greatest vices as well as of the highest virtues; and those who walk very slowly but always keep to the straight road can advance much farther than those who run but stray from it.
As for myself, I have never imagined that my mind was in any way more perfect than that of ordinary men; indeed, I have often wished that my thought were as quick, or my imagination as clear and distinct, or my memory as ample and ready, as those of some other people. And beyond these I know of no other qualities that serve to perfect the mind; for since reason or good sense alone distinguishes us from the animals and makes us men, I am willing to believe that it is whole and entire in each of us, and to follow in this the common opinion of the philosophers, who say that more and less can be spoken of only among the accidents, and not among the forms or natures of individuals of the same species.
But I shall not hesitate to say that I consider myself very fortunate to have found myself, from my youth, on paths that led me to considerations and maxims from which I formed a method. By its help, it seems to me, I can gradually increase my knowledge and raise it, little by little, to the highest point which the mediocrity of my mind and the brevity of my life will allow it to reach. For I have already gathered such fruits from this method that, although in judging myself I always try to lean toward diffidence rather than presumption, and although, when I look with a philosopher's eye upon the actions and enterprises of men, almost all of them seem to me vain and useless, I cannot but feel extreme satisfaction in the progress I think I have already made in the search for truth, and cannot but conceive the same hopes for the future. And if, among the occupations of men purely as men, there is one that is genuinely good and important, I dare believe it is the one I have chosen.
It is possible, however, that I am mistaken, and that what I take for gold and diamonds is perhaps only a little copper and glass. I know how prone we are to deceive ourselves in what concerns us, and how suspect the judgments of our friends ought to be to us when they are in our favor. But in this discourse I shall gladly show the paths I have followed, representing my life as in a picture; and then, learning from common report what opinions are held of them, I shall add a new means of instruction to those I am accustomed to use.
Thus my purpose here is not to teach the method that everyone must follow in order to conduct his reason rightly, but only to show in what way I have tried to conduct my own. Those who take it upon themselves to give precepts must esteem themselves more skillful than those to whom they give them, and if they err in the slightest, they are to blame. But since I offer this work only as a history or, if you prefer, as a fable, in which, among some examples worthy of imitation, there will perhaps also be found many others that one would be right not to follow, I hope it will be useful to some without being harmful to any, and that everyone will be grateful to me for my frankness.
From my childhood I was nourished on the sciences, and because I was persuaded that through them one could acquire a clear and certain knowledge of everything useful in life, I had an extreme desire to learn them. But as soon as I had completed the course of studies at the end of which one is usually received into the ranks of the learned, I changed my opinion entirely. I was beset by so many doubts and errors that it seemed to me the only profit my efforts to instruct myself had brought me was the ever-growing discovery of my own ignorance. And yet I was studying at one of the most celebrated schools in Europe, and I thought that if there were learned men anywhere in the world, they must be found there. There I had learned everything the others learned; and, not content with the sciences we were taught, I had even gone through all the books that fell into my hands dealing with the sciences considered most curious and most rare. Moreover, I knew the judgments others made of me, and I did not see that they placed me in the least below my fellow students, even though several among them were already destined to take the places of our teachers. And finally, our age seemed to me as flourishing and as rich in great minds as any that had gone before. All this made me bold enough to judge of others by myself, and to think that there was no such learning in the world as I had formerly been led to expect.
Nevertheless, I continued to value the exercises pursued in the schools. I knew that the languages studied there are necessary for understanding the books of the ancients; that the charm of fables awakens the mind; that the memorable deeds recounted in histories elevate it and, read with discretion, help to form the judgment; that reading good books is like a conversation with the most distinguished people of past ages, their authors, and indeed a prepared conversation in which they reveal to us only the best of their thoughts; that eloquence has incomparable power and beauty; that poetry has ravishing charm and sweetness; that mathematics contains very ingenious inventions which can do much both to satisfy the curious and to facilitate all the crafts and lessen the labor of men; that writings on morals contain many very useful teachings and exhortations to virtue; that theology teaches how to reach heaven; that philosophy gives us the means to speak plausibly about all things and to win the admiration of the less learned; that jurisprudence, medicine, and the other sciences bring honors and riches to those who cultivate them; and, finally, that it is good to be acquainted with all the sciences, even the most superstitious and the most false, in order to know their true worth and to guard against being deceived by them.
But it seemed to me that I had already given enough time to languages, and even to the reading of ancient books, their histories and their fables. For to converse with writers of past centuries is much the same as to travel. It is good to know something of the customs of various peoples, so that we may judge our own more soundly and not think that everything contrary to our own ways is ridiculous and irrational, as those are wont to do who have seen nothing. But when one spends too much time traveling, one finally becomes a stranger in one's own country; and when one takes too great an interest in the things of past centuries, one usually remains very ignorant of what is being done in the present one. Moreover, fables make people imagine as possible many events that are not so at all; and even the most faithful histories, if they neither alter nor exaggerate the importance of things to make them more worth reading, almost always omit, if nothing else, at least the basest and least illustrious circumstances. Because of this, the remainder does not appear as it really is, and those who regulate their conduct by the examples they draw from it are liable to fall into the extravagances of the knights of our romances and to conceive designs beyond their powers.
I esteemed eloquence highly and was in love with poetry, but I thought that both were gifts of the mind rather than fruits of study. Those who reason most soundly and best order their thoughts so as to make them clear and intelligible can always persuade us best of what they propose, even if they speak only Low Breton and have never studied rhetoric; and those who invent the most pleasing conceits and know how to express them with the most ornament and sweetness will always be the best poets, even if the art of poetry is unknown to them.
Above all I delighted in mathematics, because of the certainty and self-evidence of its reasonings; but I did not yet see its true use, and, thinking it served only the mechanical arts, I was astonished that, its foundations being so firm and solid, nothing loftier had yet been built upon them. By contrast, I compared the writings of the ancient pagans to very magnificent and sumptuous palaces built on nothing but sand and mud. They exalt the virtues and make them appear more estimable than anything else in the world, but they do not sufficiently teach us how to recognize them; and often what they call by so beautiful a name as "virtue" is only insensibility, or pride, or despair, or parricide.
I revered our theology, and desired as much as anyone else to reach heaven; but having learned, as a thing most certain, that the way to it is open no less to the most ignorant than to the most learned, and that the revelations that lead there are above our understanding, I did not dare submit them to the judgment of my feeble reasonings; and I thought that to undertake to examine them, and to succeed, one would need some extraordinary assistance from heaven and would need to be more than a man.
Of philosophy I shall say only this: seeing that, although it has been cultivated for many centuries by the most excellent minds, there is still nothing in it that is not disputed and consequently doubtful, I had not enough presumption to hope for greater success in it than others; and considering how many different opinions, maintained by learned men, may exist in it on one and the same matter, I regarded as almost false everything that was merely probable.
Then, as for the other sciences, insofar as they borrow their principles from philosophy, I judged that nothing solid could be built upon such weak foundations; and neither the honors nor the gains they promise were sufficient to induce me to study them. For, thank God, my condition did not oblige me to make a trade of science in order to secure my fortune; and though I did not affect to scorn glory, as the cynics do, I set very little store by that which I could acquire without any right to it. And finally, as for the false doctrines, I thought I already knew well enough what they were worth not to be deceived either by the promises of an alchemist, or the predictions of an astrologer, or the impostures of a magician, or the tricks and boasts of any of those who profess to know more than they really do.
That is why, as soon as my age allowed me to leave the tutelage of my teachers, I entirely abandoned the study of letters (l'etude des lettres). And, resolving to seek no other science than that which could be found in myself or in the great book of the world, I spent the rest of my youth in traveling, in seeing various courts and armies, in mixing with people of diverse characters and conditions, in gathering varied experience, in testing myself in the encounters that fortune offered me, and everywhere in reflecting upon the things that presented themselves to me so as to derive some profit from them. For it seemed to me that I could find much more truth in the reasonings each man makes concerning the affairs that matter to him, and whose outcome will soon punish him if he has judged badly, than in the study-room reasonings of a man of letters concerning useless speculations, whose only result is that he will perhaps pride himself on them the more, the farther they are from common sense, since he will have had to use the more wit and artifice in trying to make them probable. And I always had an extreme desire to learn to distinguish the true from the false, in order to see my way clearly in my actions and to walk with confidence in this life.
It is true that while I did nothing but observe the customs of other men, I found little there to give me assurance, and I noticed among them almost as much diversity as I had earlier found among the opinions of the philosophers. Thus the greatest profit I drew from this was that I learned not to believe too firmly anything of which I had been persuaded only by example and custom, since I saw how many things which seem to us utterly extravagant and ridiculous are nonetheless accepted and approved among other great peoples. In this way I gradually freed myself from many errors that can darken our natural light and make us less able to heed the voice of reason. But after I had spent some years thus studying in the book of the world and striving to acquire some experience, I one day resolved to study myself as well, and to employ all the powers of my mind in choosing the paths I ought to follow. In this, it seems to me, I succeeded much better than if I had never left my country or my books.
How to mix and master Vocals with Adobe Audition
Labels:
adobe audition,
first audio lessons,
mixing audio
Thursday, 11 February 2010
Dubbing (filmmaking)
Beta movement: http://en.wikipedia.org/wiki/Beta_movement
http://en.wikipedia.org/wiki/Dubbing_(filmmaking)#Automated_dialogue_replacement_.2F_post-sync
Thursday, 4 February 2010
Microphone connection and connector types
http://wiki.audacityteam.org/wiki/Connecting_your_Equipment
Normalize
The Normalize dialog allows you to raise the volume of a selection so that the highest level sample reaches a user-defined level. Use normalization to ensure you are using all of the dynamic range available to you without clipping. To display this dialog, choose Normalize from the Process menu.
When normalizing stereo data, if the selection includes both channels, normalization is computed from the loudest sample value found in either channel and the same gain is applied to both. If a single channel is selected, normalization will affect only that channel.
When converting to compressed formats, you'll achieve the best results if the audio has been normalized before the conversion occurs.
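The arithmetic behind peak normalization is simple enough to sketch. The following Python fragment is only an illustration of the idea described above, not Audacity's actual code, and it assumes numpy is available: it scales a buffer so its loudest sample lands on a chosen dBFS target, applying one common gain to the whole selection.

```python
import numpy as np

def normalize_peak(samples: np.ndarray, target_db: float = 0.0) -> np.ndarray:
    """Scale `samples` so the loudest sample reaches `target_db` dBFS.

    Illustrative sketch only. For stereo data shaped (2, n), the single
    gain derived from the loudest sample in either channel is applied
    to both channels, as the text above describes.
    """
    peak = float(np.max(np.abs(samples)))
    if peak == 0.0:
        return samples  # silence: nothing to scale
    target = 10.0 ** (target_db / 20.0)  # dBFS -> linear amplitude
    return samples * (target / peak)

audio = np.array([0.1, -0.25, 0.5, -0.05])
out = normalize_peak(audio, target_db=0.0)  # peak 0.5 -> gain of 2.0
print(np.max(np.abs(out)))  # 1.0
```

Because a single constant gain is used, the dynamic range of the selection is preserved; only its absolute level changes.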
http://wiki.audacityteam.org/index.php?title=Amplify_and_Normalize
What do you want to do?
Normalize using a peak value
When you normalize to a peak value, you can specify the level to which the maximum detected sample value will be set. Sound Forge applies a constant gain to the selection to bring the peak to this level.
1. From the Process menu, choose Normalize.
2. Click the Peak level radio button.
3. Click the Scan Levels button.
When previewing, the entire file must be scanned, even when previewing a small selection. Clicking the Scan Levels button stores the current Peak and RMS values, which lets you preview different Normalize to level settings without rescanning the entire file.
4. Drag the Normalize to fader to specify the level to which the highest peak should be set.
5. Click the OK button.
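The note about rescanning hints at the underlying design: measuring levels requires a pass over the whole file, but computing a gain for any given target is pure arithmetic afterward. A hypothetical Python sketch of that caching idea (the class and method names are invented for illustration; numpy is assumed):

```python
import numpy as np

class LevelScan:
    """Cache a file's measured levels so different Normalize-to targets
    can be previewed without rescanning. Hypothetical illustration of
    the design, not Sound Forge's implementation."""

    def __init__(self, samples: np.ndarray):
        # The expensive full-file pass happens once, here.
        self.peak = float(np.max(np.abs(samples)))
        self.rms = float(np.sqrt(np.mean(samples ** 2)))

    def gain_for_peak(self, target_db: float) -> float:
        """Constant gain that puts the highest peak at target_db dBFS."""
        return 10.0 ** (target_db / 20.0) / self.peak

    def gain_for_rms(self, target_db: float) -> float:
        """Constant gain that puts the average RMS at target_db dBFS."""
        return 10.0 ** (target_db / 20.0) / self.rms

scan = LevelScan(np.array([0.5, -0.5, 0.5, -0.5]))
print(scan.gain_for_peak(0.0))  # 2.0
```

Each fader adjustment in the dialog then only needs one of the cheap `gain_for_*` computations, not a new scan.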
Normalize using average RMS power
When you normalize using average RMS power, Sound Forge normalizes the sound file by bringing its detected average RMS value to a value you specify. This is helpful for matching the apparent loudness of different recordings.
1. From the Process menu, choose Normalize.
2. Click the Average RMS power radio button.
3. Click the Scan Levels button.
When previewing, the entire file must be scanned, even when previewing a small selection. Clicking the Scan Levels button stores the current Peak and RMS values, which lets you preview different Normalize to level settings without rescanning the entire file.
4. Drag the Normalize to fader to specify the new average RMS power for the selection.
When using RMS levels, set the Normalize to fader to -6 dB or lower. Normalizing RMS to 0 dB boosts the signal so that it has the same apparent loudness as a 0 dB square wave, which is extremely loud: all of the dynamic range of the signal would be squashed, and all the peaks would be either clipped or seriously compressed. The lesson is that normalizing a peak to 0 dB is fine, but normalizing RMS to anything above -6 dB can compromise sound quality.
5. Adjust the scan settings:
* Ignore below: Drag the fader to set the level above which material is included in the RMS calculation; anything below the threshold is ignored. This is useful for excluding silent sections from the RMS calculation. Set this parameter a few dB above what you consider to be silence. If you set it to minus infinity, all sound data is used; if it is set too high (above -10 dB), there is a good chance the RMS value will always fall below the threshold, in which case no normalization occurs. It is therefore a good idea to test the threshold with the Scan Levels button.
* Attack time: Specifies how quickly the scan responds to transient peaks in the sound file. A slower attack time tends to ignore fast-peaking material.
* Release time: Specifies how quickly the scan stops tracking transient peak material after it has begun to drop in level. A slower release time increases the amount of material included in the RMS calculation.
* Use equal loudness contour: Select this check box if you want the RMS calculation to compensate for the fact that very low and very high frequencies are less audible than mid-range frequencies.
6. Select an option from the If clipping occurs drop-down list:
* Apply dynamic compression: Any peaks that would clip are limited to below 0 dB using nonzero attack and release times to minimize distortion; in other words, a time-varying gain ensures that no hard clipping occurs. This option is useful for getting very loud yet clear sound during the mastering process.
* Normalize peak value to 0 dB: The selection's peak amplitude is normalized to 0 dB. This applies the maximum possible constant gain that does not clip the selection; less gain is applied than would be necessary to reach the Normalize to RMS level.
* Ignore (saturate): Sound data is allowed to clip. Use this option only if the clipping samples are very short and infrequent.
* Stop processing: Any sound data that would clip causes the Normalize function to stop processing and display a notification.
7. Click the OK button.
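As a rough illustration of RMS normalization with an "Ignore below" threshold and a crude saturating fallback for clipping, here is a hypothetical Python sketch. It is not Sound Forge's algorithm (in particular it has no attack/release smoothing), numpy is assumed, and the function name is invented:

```python
import numpy as np

def normalize_rms(samples: np.ndarray,
                  target_db: float = -12.0,
                  ignore_below_db: float = -60.0) -> np.ndarray:
    """Apply a constant gain so the average RMS reaches target_db dBFS.

    Samples quieter than ignore_below_db are excluded from the RMS
    measurement (the "Ignore below" idea above), so silence does not
    drag the average down. If the result would clip, this sketch just
    saturates, like the Ignore (saturate) option.
    """
    floor = 10.0 ** (ignore_below_db / 20.0)
    loud = samples[np.abs(samples) >= floor]
    if loud.size == 0:
        return samples  # everything below threshold: no normalization
    rms = float(np.sqrt(np.mean(loud ** 2)))
    gain = 10.0 ** (target_db / 20.0) / rms
    out = samples * gain
    if np.any(np.abs(out) > 1.0):
        out = np.clip(out, -1.0, 1.0)  # crude clipping fallback
    return out
```

Note how a target like -12 dB RMS leaves headroom for peaks, which is exactly why the text above warns against RMS targets hotter than -6 dB.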
When normalizing stereo data, if the selection includes both channels, normalization is computed on the loudest sample value found in either channel, and the same gain is applied to both. If a single channel is selected, normalization affects only that channel.
When converting to compressed formats, you'll achieve the best results if the audio has been normalized before the conversion occurs.
http://wiki.audacityteam.org/index.php?title=Amplify_and_Normalize
What do you want to do?
Normalize using a peak value
When you normalize to a peak value, you can specify the level to which the maximum detected sample value will be set. Sound Forge applies a constant gain to the selection to bring the peak to this level.
From the Process menu, choose Normalize.
Click the Peak level radio button.
Click the Scan Levels button.
When previewing, the entire file must be scanned--even when previewing a small selection. Clicking the Scan Levels button stores the current Peak and RMS values. This allows you to preview different Normalize to level settings without rescanning the entire file.
Drag the Normalize to fader to specify the level to which the highest peak should be set.
Click the OK button.
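The constant-gain math behind peak normalization is simple to sketch. Below is a minimal, illustrative Python version (not Sound Forge's actual API), assuming floating-point samples in the range [-1.0, 1.0]:

```python
import numpy as np

def normalize_peak(samples: np.ndarray, target_db: float = 0.0) -> np.ndarray:
    """Apply a constant gain so the highest peak hits target_db dBFS.

    Illustrative sketch: `samples` is a float array in [-1.0, 1.0];
    the function name and signature are assumptions, not Sound Forge's.
    """
    peak = np.max(np.abs(samples))
    if peak == 0.0:
        return samples  # silence: nothing to normalize
    target_linear = 10.0 ** (target_db / 20.0)  # dB -> linear amplitude
    gain = target_linear / peak
    return samples * gain
```

Because the gain is constant, the waveform's shape (and therefore its dynamics) is unchanged; only its overall level moves.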
Normalize using average RMS power
When you normalize using average RMS power, Sound Forge applies a constant gain so that the file's detected average RMS level reaches the value you specify. This is helpful for matching the apparent loudness of different recordings.
From the Process menu, choose Normalize.
Click the Average RMS power radio button.
Click the Scan Levels button.
When previewing, the entire file must be scanned--even when previewing a small selection. Clicking the Scan Levels button stores the current Peak and RMS values. This allows you to preview different Normalize to level settings without rescanning the entire file.
Drag the Normalize to fader to specify the new average RMS power for the selection.
When using RMS levels, set the Normalize to fader to -6 dB or less. Normalizing RMS to 0 dB boosts the signal to the apparent loudness of a 0 dB square wave, which is extremely loud. Doing so squashes the signal's dynamic range, and every peak is either clipped or heavily compressed. In short, normalizing a peak to 0 dB is fine, but normalizing RMS to anything above -6 dB can compromise sound quality.
Adjust scan settings:
Item
Description
Ignore below
Drag the fader to determine the level of material you want to include in the RMS calculation. Any sound material below the threshold will be ignored in the calculation. This is useful to eliminate any silent sections from the RMS calculation. You should set this parameter a few dB above what you consider to be silence.
If you set this value to minus infinity, all sound data is included in the calculation. If the value is set too high (above -10 dB), most or all of the sound data may fall below the threshold, in which case no normalization will occur. It is therefore good practice to verify the threshold using the Scan Levels button.
Attack time
Specify how quickly the scan should respond to transient peaks in the sound file. A slower attack time will tend to ignore fast-peaking material.
Release time
Specify how quickly the scan should stop using transient peak material after it has begun to drop in level. A slower release time will increase the amount of material included in the RMS calculation.
Use equal loudness contour
Select this check box if you want the RMS calculation to compensate for high- and low-frequency audio. Very low and high frequencies are less audible than mid-range frequencies.
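To make the effect of the Ignore below threshold concrete, here is a rough Python illustration of an RMS scan that excludes near-silent samples. It is a simplification: the attack/release smoothing and the equal loudness contour described above are omitted, and the function name is invented:

```python
import numpy as np

def scan_rms_db(samples: np.ndarray, ignore_below_db: float = -60.0) -> float:
    """Return average RMS power in dBFS, skipping near-silent samples.

    Sketch of the 'Scan Levels' idea: samples below the Ignore below
    threshold are excluded, so long silences don't drag the average
    down. Attack/release smoothing and the equal loudness contour are
    intentionally omitted.
    """
    threshold = 10.0 ** (ignore_below_db / 20.0)  # dB -> linear
    loud = samples[np.abs(samples) >= threshold]
    if loud.size == 0:
        return float("-inf")  # everything fell below the threshold
    rms = np.sqrt(np.mean(loud.astype(np.float64) ** 2))
    return 20.0 * np.log10(rms)
```

Note how a full-scale square wave scans at 0 dB, which is why normalizing RMS to 0 dB is so aggressive, and how an all-silent selection returns minus infinity, the "no normalization will occur" case above.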
Select an option from the If clipping occurs drop-down list:
Item
Description
Apply dynamic compression
Any peaks that would clip are limited to below 0 dB using nonzero attack and release times to minimize distortion. In other words, a time-varying gain is used to ensure that no hard clipping occurs.
This option is useful for getting very loud, yet clear sound during the mastering process.
Normalize peak value to 0 dB
The selection’s peak amplitude level is normalized to 0 dB. This applies the maximum possible constant gain that doesn’t clip to the selection. Less gain is applied than would be necessary to achieve the Normalize to RMS level.
Ignore (saturate)
Sound data is allowed to clip. Use this option only if the clipping samples are very short and infrequent.
Stop processing
Any sound data that would clip causes the Normalize function to stop processing and display a notification.
Click the OK button.
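To see why a clipping option is needed at all: the gain required to hit an RMS target can easily push peaks past 0 dB. The sketch below illustrates two of the fallback behaviors; the function name, signature, and exact fallback logic are assumptions for illustration, not Sound Forge internals, and the dynamic compression option is omitted:

```python
import numpy as np

def normalize_rms(samples, target_rms_db=-12.0, if_clipping="peak"):
    """Normalize to a target average RMS level, handling clipping.

    Hypothetical sketch: 'peak' falls back to the largest constant
    gain that doesn't clip (the 'Normalize peak value to 0 dB'
    option); 'saturate' applies the full gain and hard-clips
    (the 'Ignore (saturate)' option).
    """
    rms = np.sqrt(np.mean(np.square(samples, dtype=np.float64)))
    gain = (10.0 ** (target_rms_db / 20.0)) / rms
    peak = np.max(np.abs(samples))
    if peak * gain > 1.0:  # the RMS gain would push peaks past 0 dBFS
        if if_clipping == "peak":
            gain = 1.0 / peak  # less gain than the RMS target needs
        elif if_clipping == "saturate":
            return np.clip(samples * gain, -1.0, 1.0)
    return samples * gain
```

With the 'peak' fallback the result lands below the requested RMS level but stays clean, which matches the description above: less gain is applied than the RMS target would require.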
Normalize using levels from another selection or file
Select the data you want to use to normalize your data.
From the Process menu, choose Normalize.
Click the Scan Levels button.
Close the Normalize dialog.
Select the data you want to normalize.
From the Process menu, choose Normalize.
Select the Use current scan level check box. The selection is normalized to the level displayed in the Peak or RMS fields without rescanning.
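The workflow above amounts to matching one selection's level to another's: scan the reference, store its level, then normalize the target to it without rescanning. A minimal illustration of the idea (the function name is made up):

```python
import numpy as np

def match_rms(reference: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Gain `target` so its average RMS matches `reference`'s.

    Sketch of the 'Use current scan level' workflow: the scanned
    level of one selection becomes the normalize-to level of another.
    """
    def rms(x):
        return np.sqrt(np.mean(np.square(x, dtype=np.float64)))
    ref_rms, tgt_rms = rms(reference), rms(target)
    if tgt_rms == 0.0:
        return target  # silent target: leave untouched
    return target * (ref_rms / tgt_rms)
```

This is the basic recipe for giving several takes or tracks the same apparent loudness.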
Applying film post-production techniques to game sound design
Author: Nick Peck
It has been my experience that game companies often rely on commercial CD sound
effects libraries for the majority of their raw sound material. While these libraries are very
useful, there are other methods of collecting sound material that can achieve excellent
creative results. The film industry often uses custom field and foley recording to give each
project a personal and unique flavor, augmenting their needs with CD libraries where
appropriate. In this session, techniques of field and foley recording will be discussed, using
examples and parallels between the film Being John Malkovich and the game Escape from
Monkey Island. Audio portions of the game will be broken down track by track, showing how
the dialog, music, hard sfx, foley, and ambient layers combine to create a unified sound
experience.
Let’s whet our appetite by hearing some great film sound. Here are snippets from three very
different films with excellent sound: Apocalypse Now, The Exorcist, and Castaway.
Our challenge is to meet this level. Games don’t sound like that. Even taking into account
the differences in the medium, games often don’t sound as good as they could. Why not?
Four reasons immediately come to mind: time, money, communication and delay across the
team, and accepted work techniques.
How can we improve the situation?
Not having enough time or money is always a huge problem. The solution is to budget
more of both for sound! Sound designers are often limited by poor, outdated
equipment and too few off-the-shelf sound libraries, but most importantly by not
having enough time to go out and get new, original sounds for the game project. Remember: SOUND IS ART.
To make a game sound artful, let the sound designers have the time and money to practice
their art!
The next problem is poor communication with the rest of the team, and delays in production
propagating to further diminish the amount of time to develop sound for the game. We can
say that effluent follows the laws of gravity in changing state from higher potential energy to
lower potential energy. The translation will be left to the reader, but the point is that sound is
a post-production process. We are at the end of the line, where everyone is out of money,
out of time, and out of patience. This is true in film as well. By the time sound is done, the
programmers are burned out, the deadlines are absurdly close, and it is very hard to get the
sound wired up with the level of detail you’d like. You can address this somewhat by clearly
communicating your needs early on. If all else fails, become a programmer.
The final problem limiting sound production in games that I’d like to touch on is accepted
work techniques. It seems that pulling most or all raw sound materials from commercial
SFX libraries is often the primary approach. It is a model that is well-understood, and easy
to implement. While libraries are hugely useful, though, they do limit your creativity, and
give you the same raw sound as everyone else. It is true that the crafty sound designer will
take these materials as a starting point and manipulate them, often to the point of
unrecognizability, but there are still only so many wind recordings in the Sound Ideas 6000
series library, and most every game company owns that library and uses it.
To some degree, these problems will always be there. Bringing awareness of sound needs
to the people that pay the bills is not always easy. But there are ways that we can improve
game audio incrementally: by bringing more film post-production techniques into game
audio.
Why apply film techniques to games? Simply put, the movie industry has been around for a
long time. Film sound designers have honed their craft and figured out what works. They
know how to make films sound unique and interesting. As game sound designers, we can
steal their ideas to make games sound unique and interesting too.
Film sound is broken into a series of layers: dialog, music, hard SFX, foley, and ambience.
Let’s examine these each rather briefly, looking at their relation to the greater whole.
Dialog comes first. Always remember that dialog is king. It must be intelligible above all
else, or your story is lost. In game audio, this usually means that the dialog is compressed
and limited severely, to make sure it reads above the music and SFX within the limited
dynamic range we have to work with. Film dialog is not compressed as much, because the
sound is carefully massaged at the mixing stage to make sure of intelligibility.
The music sets the emotional context of the project. It tells the player what to feel, whether
a moment is placid or tense, majestic or scary. Music and sound effects share the same
space, and work together in it (or not). Both film and games have the same problem of
these elements competing with each other. The best compromise is to try to make both
audible. This can be a tightrope act, particularly in interactive settings. I have found that
having greater dynamic range, particularly in the music, allows the elements to rise and fall
in audibility, poking through each other when appropriate.
Hard SFX are the meat and potatoes of game sound. These are spells, weapon hits,
engine loops, door slams, and all other foreground sound material. The sonic character of
the game is most strongly defined by the choices the sound designer makes in creating the
hard SFX.
Foley is sound made by humans: footsteps, clothing rustles, the manipulation of props and
tools. In film, foley is recorded to picture to cover the movements the characters made on
the screen that could not be picked up by production microphones, due to the noise on the
set. Games are primarily animated, so of course there is no production recording to be
used, and all movements are recorded after the fact. Or more often, not recorded at all.
Foley is the sound layer that brings subtle realism to film. We can bring it to games as well.
Practically by definition, foley work involves recording custom material every time.
Some SFX libraries offer a limited set of footsteps and clothing rustles, which game
sound designers often use to fill in movement. In my experience, foley recording
sounds better than using canned material, and is cheaper than editing it as well. It actually
takes less time to walk footsteps against picture than to edit library footsteps against it. Of
course, foley recording does require a foley pit with multiple surfaces for different types of
footsteps, as well as a good-sized prop collection and an extremely quiet recording
environment. But in my opinion, any game with a reasonably sized budget should do at
least some custom foley recording.
The last layer of sound is ambience. Ambience is the background recording of a particular
place that identifies it aurally. Swamp ambiences are filled with birds and frogs, beach
ambiences have the endless rumble of waves crashing on the shore, cave ambiences
might have a slow, reverberant dripping of water, restaurant ambiences might have muffled
conversation and the clatter of silverware on plates, and factory ambiences would have low
rumbling and the clatter of huge machines in the background. If music sets the mood,
ambience brings the location to life.
Ambiences have two components: The ambient loop, which is a long, streaming, stereo
recording that can be mixed with the music track, and specifics, which are separate, short
elements (bird chirps, foghorns, etc) that trigger randomly to break up repetition.
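One way to picture the specifics layer is as one-shots fired at randomized intervals over the running loop. Here is a toy Python sketch of that idea; the event names, gap ranges, and precomputed-timeline approach are all invented for illustration (a real engine would trigger specifics at runtime):

```python
import random

def schedule_specifics(duration_s, events, min_gap=4.0, max_gap=15.0, seed=None):
    """Build a randomized timeline of one-shot 'specifics' (bird
    chirps, foghorns, ...) to layer over an ambient loop and break
    up repetition.

    Minimal sketch with made-up parameters: each event fires after a
    random gap between min_gap and max_gap seconds.
    """
    rng = random.Random(seed)
    timeline = []
    t = rng.uniform(min_gap, max_gap)
    while t < duration_s:
        timeline.append((round(t, 2), rng.choice(events)))
        t += rng.uniform(min_gap, max_gap)
    return timeline
```

Because the gaps and choices are randomized, two passes through the same location never sound quite the same, which is exactly what hides the loop.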
The way to bring convincing ambiences into your game is through field recording. Portable
DAT machines and stereo microphones expand your horizons to the end of the Earth. Field
recording is great fun, and rewards you with original material that has never been used in a
game before. As an added bonus, it is a great way to get out of the office for a while. I went
on vacation to Canada right before beginning production on Escape from Monkey Island. I
took the opportunity to record every type of water setting I could, from every distance and at
different times of day: waterfalls, beaches, gentle harbors, and streams. All of this material
made it into Monkey, and the result is a rich aural environment.
I’d like to show an example of how these elements fit together in film by using a scene I
sound designed for the film Being John Malkovich. In this scene, Malkovich enters his own
mind and ends up in a restaurant, where everyone he sees is a version of himself. I’ll show
how foley, hard sfx, ambience, music, and dialog fit together to create a complete picture.
Games have two different types of segments: Linear segments, often called animations or
cutscenes, and interactive segments. Each of these can benefit from a filmic approach.
Linear segments are short, animated movies with no interactive elements. The approach is
clear: Make sure to create all the layers of sound described above to create a rich
experience.
I’d like to show the GMRR (Giant Monkey Robot) cutscene from Escape from Monkey
Island as a case study. Just as in the restaurant scene from Being John Malkovich, I will
play back the scene several times, soloing the ambient, foley, hard sfx, and music tracks
separately. I will then play back a mix, to show how all the elements fit together.
Interactive segments are, of course, the part of the game where the player is actually
making things happen. Events are not completely predictable, and take place as a result of
the player’s decisions. Clearly, the interactive portions of games are very different animals
than linear film.
But there are still concepts to steal that can improve these segments.
Filmic improvements to interactive game sound would include: Filling the environment with
ambient loops and specifics, minimizing repetition by having as many alternate SFX as they
will let you (especially for footsteps and weapon hits), and letting the soundtrack have some
dynamics. Don’t compress/limit the life out of everything! Finally, to repeat my main theme,
do as much custom recording as possible! Make new footsteps, grunts, hits, weapons fires,
UI clicks, and anything else you can. It will sound different than other games, and will be a
lot more fun as well.
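A common trick for getting mileage out of alternate SFX takes is simply to never play the same take twice in a row. A minimal sketch of that selection rule (take names are placeholders):

```python
import random

class SfxPool:
    """Pick among alternate takes of one sound, never repeating the
    previous take: a cheap way to mask repetition in footsteps and
    weapon hits. Sketch only; take names are placeholders.
    """
    def __init__(self, takes, seed=None):
        if len(takes) < 2:
            raise ValueError("need at least two alternate takes")
        self.takes = list(takes)
        self.rng = random.Random(seed)
        self.last = None

    def next(self):
        # Exclude whatever played last, then choose at random.
        choices = [t for t in self.takes if t != self.last]
        self.last = self.rng.choice(choices)
        return self.last
```

Even three or four alternates, rotated this way (ideally with slight pitch or volume variation on top), sound far less mechanical than a single looping sample.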
As a case study in interactive segments, I’d like to look at the sushi boat puzzle from
Escape from Monkey Island. This complex puzzle involves a lot of careful timing and
thinking outside the box by the player. There are various mechanisms at work, each with
their own sounds. By carefully working with these different sounds, turning them on and off
and changing their volume relative to decisions the player makes, the audio soundtrack
enhances the puzzle logic, and gives the player clues to help solve the puzzle.
To put my ideas in a nutshell: We can improve the sound of games by borrowing
techniques that have been widely used in film production for a long time, and adapting them
to our needs. I suggest making liberal use of foley, field recording of custom ambiences,
and recording as many hard SFX from scratch in the studio as possible, rather than relying
on sound effect libraries. Be careful not to over-compress the audio, and strive for the best
mix of dialog, music, and effects possible. The results will be game soundtracks that are
more unique, interesting, and beautiful to listen to.
CONTACT INFORMATION:
Nick Peck
Perceptive Sound Design
37 Matilda Ave, Mill Valley, CA 94941
Tel/Fax: 415-388-2628
Email: nick@tyedye.com
Web: http://www.perceptivesound.com
BIOGRAPHY
Nick Peck owns and operates Perceptive Sound Design, a firm specializing in audio postproduction
for the game and film industries. His sound design projects have included such
games as Escape from Monkey Island, Vampire the Masquerade: Redemption, Grim
Fandango, Star Wars Super Bombad Racing, and New Legends, as well as the films Being
John Malkovich and the remake of Vampire Hunter D. Peck is also a composer and
keyboardist, holding an MFA in Electronic Music from Mills College. He has released six
albums, ranging from avant-garde electronic music to progressive rock, and performs
frequently with his quintet Ten Ton Chicken. In March 2000, Peck completed construction of
a new post-production recording facility in Mill Valley, California. The facility features a
foley pit, voiceover booth, grand piano, 5.1 surround sound, two Pro Tools systems,
high-quality microphones, synthesizers, recording gear, excellent acoustics, and a
200-gigabyte online sound effects library.
Escape from Monkey Island. This complex puzzle involves a lot of careful timing and
thinking outside the box by the player. There are various mechanisms at work, each with
their own sounds. By carefully working with these different sounds, turning them on and off
and changing their volume relative to decisions the player makes, the audio soundtrack
enhances the puzzle logic, and gives the player clues to help solve the puzzle.
To put my ideas in a nutshell: We can improve the sound of games by borrowing
techniques that have been widely used in film production for a long time, and adapting them
to our needs. I suggest making liberal use of foley, field recording of custom ambiences,
and recording as many hard SFX from scratch in the studio as possible, rather than relying
on sound effect libraries. Be careful not to over-compress the audio, and strive for the best
mix of dialog, music, and effects possible. The results will be game soundtracks that are
more unique, interesting, and beautiful to listen to.
CONTACT INFORMATION:
Nick Peck
Perceptive Sound Design
37 Matilda Ave, Mill Valley, CA 94941
Tel/Fax: 415-388-2628
Email: nick@tyedye.com
Web: http://www.perceptivesound.com
BIOGRAPHY
Nick Peck owns and operates Perceptive Sound Design, a firm specializing in audio postproduction
for the game and film industries. His sound design projects have included such
games as Escape from Monkey Island, Vampire the Masquerade: Redemption, Grim
Fandango, Star Wars Super Bombad Racing, and New Legends, as well as the films Being
John Malkovich and the remake of Vampire Hunter D. Peck is also a composer and
keyboardist, holding an MFA in Electronic Music from Mills College. He has released six
albums, ranging from avant garde electronic music to progressive rock, and performs
frequently with his quintet Ten Ton Chicken. In March 2000, Peck completed construction of
a new post-production recording facility in Mill Valley, California. The facility features a foley
pit, voiceover booth, grand piano, 5.1 surround sound, two Pro Tools systems, high-quality
microphones, synthesizers, recording gear, excellent acoustics, and a 200-gigabyte online
sound effects library.
Tuesday, 2 February 2010
How to connect a MIDI keyboard to a computer
http://www.ehow.com/how_4617535_connect-midi-keyboard-pc.html
http://www.musiconmypc.co.uk/art_keyboard_connection.php
How to convert MIDI to audio
http://www.musiconmypc.co.uk/art_keyboard_connection.php
Thursday, 28 January 2010
Chorus effect
The Chorus effect simulates the sound of several instances of the same instrument playing the same notes. It works by adding very short delays to the signal and modulating the delay times.
Chorusing is very effective on stringed instruments and can be used as a special effect on vocals and other instruments.
What do you want to do?
Apply a simple chorus
Open the Sonic Foundry Chorus dialog.
Choose a preset from the Name drop-down list, or adjust the controls as desired:
a. Drag the Input gain fader to set the gain that is applied to the signal before processing.
b. Drag the Dry out fader to set the level of the unprocessed signal that will be mixed into the output.
c. Drag the Chorus out fader to set the level of the processed signal that will be mixed into the output.
d. Drag the Chorus out delay slider to select the delay time that will be the middle point for the modulation.
Chorusing effects are typically created with delay times between 25 and 50 milliseconds, depending on the source material. Shorter delay times will create a flanging effect, and longer delay times will create a doubling or slap-back delay effect.
e. Drag the Modulation rate slider to determine how fast the delay time is modulated. Choose values of 0.3 to 1 Hz for subtle modulation. Higher values will produce more intense effects.
Modulation will not be heard until you increase the Modulation depth setting.
f. Drag the Modulation depth slider to determine how far outside of the initial setting the delay time is modulated. Higher settings will create detuning effects. Lower settings are better for creating lush guitar and string effects.
Increase the chorus size
Drag the Chorus size slider to specify how many times the selection is processed with the chorus algorithm.
A larger Chorus size setting will add depth to the effect and will emphasize the effects of the Feedback setting.
Add feedback
Drag the Feedback slider to specify the percentage of the processed signal that you want to re-process.
Increasing feedback is another way to thicken up the chorus effect. By increasing the Feedback setting, additional delays are added to the signal. The result can range from subtly increasing the girth of the chorus to adding dramatic discrete echoes.
Invert the phase of the chorus or feedback signal
Select the Invert the chorus phase check box if you want to invert the phase of the processed signal before mixing it with the unprocessed signal.
Select the Invert the feedback phase check box if you want to invert the phase of the feedback signal before adding it to the chorused signal.
Inverting sound data reverses the polarity of a waveform around its baseline. Inverting a waveform does not change the sound of a file; however, when you mix different sound files, phase cancellation can occur, producing a "hollow" sound. Inverting one of the files can prevent phase cancellation.
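In sample terms, inversion is just negation, and that is why summing a waveform with its inverted copy cancels to silence. A tiny sketch:

```python
# Polarity inversion: each sample is negated, so mixing a waveform
# with its inverted copy cancels completely.
signal = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]
inverted = [-s for s in signal]                    # flipped around the baseline
mixed = [a + b for a, b in zip(signal, inverted)]  # total phase cancellation
```

Partial, frequency-dependent cancellation between two different-but-correlated signals is what produces the "hollow" sound described above.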
In the help file's accompanying illustration (not reproduced here), a red line represents the baseline, and the lower waveform is the inverted image of the upper waveform.
Attenuate high frequencies
Select the Attenuate high frequencies above check box and drag the slider to apply a low-pass filter to your selection. Frequencies above the frequency specified by the slider will be filtered.
A wide variety of non-chorus-like effects can be created with this function: if the Modulation depth is high, a vibrato effect will occur; if the Chorus out delay is small, flanging occurs.
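Under the hood, a chorus of the kind described above is a delay line whose read position is swept by a low-frequency oscillator. The sketch below is a minimal plain-Python illustration with parameter names loosely mirroring the dialog (Chorus out delay, Modulation rate, Modulation depth); it is not Sonic Foundry's actual algorithm:

```python
import math

def chorus(x, sr, delay_ms=30.0, depth_ms=3.0, rate_hz=0.7, dry=1.0, wet=0.7):
    """Minimal chorus: mix the dry input with a copy read through a
    delay line whose length is modulated by a low-frequency sine."""
    out = []
    for n, s in enumerate(x):
        # the delay swings +/- depth_ms around the centre delay time
        d = delay_ms + depth_ms * math.sin(2 * math.pi * rate_hz * n / sr)
        i = n - d * sr / 1000.0           # fractional read position
        i0 = int(math.floor(i))
        if i0 < 0:
            delayed = 0.0                 # delay line not yet filled
        else:
            frac = i - i0
            a = x[i0]
            b = x[i0 + 1] if i0 + 1 < len(x) else a
            delayed = a + frac * (b - a)  # linear interpolation
        out.append(dry * s + wet * delayed)
    return out
```

Setting `delay_ms` below roughly 10 ms turns this into a flanger, and a large `depth_ms` produces the detuned/vibrato character mentioned above, consistent with the delay-time guidance in the steps.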
Reverb
Reverb allows you to recreate the space that is typically lost with close-miking techniques. It may also be used to create effects by placing sounds in spaces where they would normally never be heard.
What do you want to do?
Apply a simple reverb
Open the Sonic Foundry Reverb dialog.
Choose a preset from the Name drop-down list, or adjust the controls as desired:
a. Choose a Reverberation mode from the drop-down list.
These modes are the basic types of reverb simulation available to you in the Reverb dialog. Rather than determine the length of the reverb, these modes determine parameters such as diffusion and the reflective patterns of the echoes that make up a reverb.
b. Drag the Dry out fader to set the level of the unprocessed signal that will be mixed into the output.
c. Drag the Reverb out fader to set the level of the processed signal that will be mixed into the output.
d. Choose an Early reflection style from the drop-down list, and drag the Early out slider to adjust the early reflections mixed into the output.
Early reflections are the first reflections you hear when a sound is created in a space. These reflections have typically only bounced once before reaching your ears. The human ear uses these first reflections to judge the size of the space.
e. Drag the Decay time slider to specify the length of the reverb. Decay time is the time it takes for the reverb to decay to -60 dB below its initial level. Typically, anything over three seconds is a very long reverb. Most small rooms have decay times of less than one second.
f. Drag the Pre-delay slider to specify the time between the initial sound and the start of the reverb. Pre-delay is another parameter that gives the human ear cues about how big a space is. Long Pre-delay times are usually associated with large spaces.
Adjust the placement of the source and reverb signals
You can adjust the Dry out, Reverb out, and Early out faders to sculpt the sound and place the source closer to or farther from the listener in the space you have created. A higher balance of dry signal will make the source sound closer. A higher balance of reverb will place the source farther away in the space.
Apply high- and low-pass filters
Reverb tends to lose high- and low-frequency material as it is reflected in a space. You can apply high- and low-pass filters to your signal to simulate the frequency loss of a space.
Select the Attenuate bass freqs. below check box and drag the slider if you want to filter low frequencies. Sounds below the specified frequency will be attenuated.
Select the Attenuate high freqs. above check box and drag the slider if you want to filter high frequencies. Sounds above the specified frequency will be attenuated.
Dull rooms will typically attenuate high frequencies starting around 4000 Hz. Brighter rooms will begin attenuation at higher frequencies.
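The -60 dB definition of Decay time maps directly onto the feedback gain of a recirculating delay, the building block of classic Schroeder-style reverbs. This relation is standard DSP practice, not something the dialog exposes:

```python
def comb_feedback_gain(delay_s, decay_time_s):
    """Feedback gain for a recirculating (comb-filter) delay so that the
    tail falls by 60 dB, i.e. a factor of 1000, over decay_time_s."""
    # each pass through the loop multiplies the level by the gain, and
    # there are decay_time_s / delay_s passes within the decay time
    return 10 ** (-3.0 * delay_s / decay_time_s)

# a 30 ms delay loop inside a 1.5 s room needs a gain of about 0.871
g = comb_feedback_gain(0.030, 1.5)
```

The formula also shows why long decay times demand feedback gains very close to 1.0, which is where reverbs become prone to metallic ringing.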
Graphic EQ
Graphic EQ is a powerful plug-in that allows you to tailor sound with pre-defined bands or a user-definable envelope graph. Graphic EQ is divided into three pages: Envelope, 10 band, and 20 band.
You can create a rough representation of a filter with the 10- or 20-band page and then switch to the envelope page to fine-tune the frequency spectrum.
What do you want to do?
Use the envelope graph
Open the Sonic Foundry Graphic EQ dialog.
Choose a preset from the Name drop-down list.
Click the Envelope tab.
Adjust the envelope graph:
Drag the small boxes (envelope points) up or down. When the envelope is below the centerline, signals of the corresponding frequency level are attenuated. When the envelope is above the centerline, the signal is boosted.
To create a new envelope point, left-click on any point of the envelope.
To delete an envelope point, click it with the right mouse button, or double-click it with the left mouse button.
To move all envelope points, press Ctrl+A and drag when the envelope has focus.
Click the Reset button to reset the graph.
Choose a setting from the Accuracy drop-down list to determine a balance between filter precision and processing speed.
Low precision is not recommended for performing very sharp filtering, when filtering very low frequencies, or when using a high sample rate.
Drag the Output gain fader if you want to apply a gain after processing.
Use the 10- or 20-Band EQ
Faders for high frequencies will be unavailable when working with files that use low sample rates.
Open the Sonic Foundry Graphic EQ dialog.
Choose a preset from the Name drop-down list.
Click the 10-Band or 20-Band tab.
Drag the frequency-band faders to boost or attenuate the selected frequency band.
To quickly disable a band, set the gain to 0.0 dB by double-clicking the fader handle.
The frequency displayed at the bottom of the fader is the center frequency of the frequency band affected by the fader.
Choose a setting from the Accuracy drop-down list to determine a balance between filter precision and processing speed.
Low precision is not recommended for performing very sharp filtering, when filtering very low frequencies, or when using a high sample rate.
Drag the Output gain fader if you want to apply a gain after processing.
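The fader values in dB map to linear amplitude factors by the usual 20·log10 rule. A tiny sketch (the band frequencies are the conventional ISO octave centres; the gain values are made up for illustration):

```python
def db_to_linear(db):
    """Convert a fader's dB boost/cut into a linear amplitude factor."""
    return 10 ** (db / 20.0)

# a hypothetical 10-band setting: mild bass boost, slight presence cut
bands_hz = [31, 62, 125, 250, 500, 1000, 2000, 4000, 8000, 16000]
gains_db = [3.0, 2.0, 0.0, 0.0, 0.0, 0.0, -2.0, -3.0, 0.0, 0.0]
linear = {f: db_to_linear(g) for f, g in zip(bands_hz, gains_db)}
```

A 0.0 dB fader is a factor of exactly 1.0, which is why double-clicking a fader to 0.0 dB disables that band.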
Paragraphic EQ
The Paragraphic EQ is a set of six very flexible parametric filters. Four independent band filters allow you to boost or attenuate specific frequency ranges. In addition, two shelving filters let you control the amount of low and high frequencies in your recordings. A Gain vs. Frequency graph shows the overall effect of the combined filters, making it easier to visualize the final sound.
Removing very low and inaudible frequencies after recording eliminates any DC offset and gives your sounds more headroom for audible frequencies.
Using the Paragraphic EQ
Open the Sonic Foundry Paragraphic EQ dialog.
Choose a preset from the Name drop-down list, or adjust the controls as desired:
a. Drag the Dry Out fader to set the level of the unprocessed signal mixed into the output.
b. Drag the Wet Out fader to set the level of the processed signal mixed into the output.
c. Drag the Gain fader to set the amount of boost or cut for the band. To quickly disable a band, set the Gain to 0.0 dB by double-clicking on the fader handle.
d. Drag the Width slider to specify the number of octaves (centered on the selected frequency) that will be affected by the filtering. Use a high value to affect a greater range of frequencies and a low value for a more selective (notch) filter.
e. Drag the Center frequency slider to specify the center of the selected frequency band.
f. Select the Enable low-shelf check box to attenuate or boost frequencies below the low-shelf cutoff frequency. The low-shelf cutoff frequency and gain are determined by the sliders to the right of the Enable low-shelf check box.
g. Select the Enable high-shelf check box to attenuate or boost frequencies above the high-shelf cutoff frequency. The high-shelf cutoff frequency and gain are determined by the sliders to the right of the Enable high-shelf check box.
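A band filter of this kind is conventionally implemented as a peaking biquad. The sketch below uses the widely published Robert Bristow-Johnson "Audio EQ Cookbook" formulas, with bandwidth expressed in octaves as in the Width slider; this is an assumption for illustration, since the plug-in's internal algorithm is not documented:

```python
import math

def peaking_biquad(fs, f0, gain_db, width_oct):
    """RBJ peaking-EQ biquad: boost/cut of gain_db centred on f0,
    with a bandwidth of width_oct octaves (cf. the Width slider)."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) * math.sinh(math.log(2) / 2 * width_oct * w0 / math.sin(w0))
    b0, b1, b2 = 1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A
    a0, a1, a2 = 1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A
    return [b / a0 for b in (b0, b1, b2)], [1.0, a1 / a0, a2 / a0]

def gain_at(coeffs, fs, f):
    """Magnitude response of the biquad at frequency f, in dB."""
    (b0, b1, b2), (_, a1, a2) = coeffs
    z = complex(math.cos(2 * math.pi * f / fs), -math.sin(2 * math.pi * f / fs))
    h = (b0 + b1 * z + b2 * z * z) / (1 + a1 * z + a2 * z * z)
    return 20 * math.log10(abs(h))
```

A nice property of this form is that the response at the centre frequency is exactly the requested gain, while it returns to 0 dB away from the band, matching the boost/attenuate behaviour the steps describe.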
Parametric EQ
The Parametric Equalizer is a set of four frequency-selective filters that allow for very precise changes in the frequency content of a sound signal:
A high-frequency shelf filter attenuates frequencies above a specified cutoff frequency. This filter is useful for removing high-frequency noise such as wind, tape hiss, or computer noise.
A low-frequency shelf filter attenuates frequencies below a specified cutoff frequency. This filter is useful for removing low-frequency rumbles such as wind, electrical hum, or traffic noise.
A band-pass filter attenuates or boosts frequencies outside of a specified range of frequencies. This filter is useful for removing hiss and low-frequency rumble simultaneously or boosting a specific frequency range.
A band-reject (or notch) filter attenuates frequencies within a specified range of frequencies. This filter is useful for removing narrow-bandwidth noise such as amplifier/microphone feedback or 60 Hz electrical hum.
Using the Parametric EQ
Open the Sonic Foundry Parametric EQ dialog.
Choose a preset from the Name drop-down list, or choose a filter from the Filter style drop-down list.
Adjust the filter frequency:
If you're using the High-frequency shelf filter, drag the Cutoff frequency slider to set the frequency above which the filter will be applied. The Transition width slider sets the slope of the filter.
If you're using the Low-frequency shelf filter, drag the Cutoff frequency slider to set the frequency below which the filter will be applied. The Transition width slider sets the slope of the filter.
If you're using the Band-pass or Band-notch/boost filter, drag the Center frequency slider to set the frequency at which the filter will be applied. The Band width slider controls the range of frequencies affected by the filter.
Drag the Amount fader to set the gain applied to the specified frequency band. This gain may be positive or negative.
Drag the Output gain fader if you want to apply a gain after processing.
Choose a setting from the Accuracy drop-down list to determine a balance between filter precision and processing speed.
A low Accuracy setting is not recommended for very sharp filtering, for filtering very low frequencies, or when using a high sample rate.
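As a concrete illustration of the band-reject case described above (nulling 60 Hz electrical hum), here is the standard biquad notch, again from the RBJ audio-EQ cookbook. The application's own filter design is not documented, so treat this as an analogous sketch rather than its actual code:

```python
import cmath
import math

def notch_coeffs(fs, center_hz, q):
    """Biquad band-reject (notch) coefficients, RBJ audio-EQ cookbook form.

    A high q gives a narrow notch -- analogous to a small Band width
    setting -- so only a sliver of spectrum around center_hz is removed.
    """
    w0 = 2.0 * math.pi * center_hz / fs
    alpha = math.sin(w0) / (2.0 * q)
    b = [1.0, -2.0 * math.cos(w0), 1.0]
    a = [1.0 + alpha, -2.0 * math.cos(w0), 1.0 - alpha]
    # Normalize so a[0] == 1.
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def magnitude(b, a, fs, freq_hz):
    """Magnitude response |H(e^jw)| of the biquad at freq_hz."""
    z = cmath.exp(1j * 2.0 * math.pi * freq_hz / fs)
    num = b[0] + b[1] / z + b[2] / z ** 2
    den = a[0] + a[1] / z + a[2] / z ** 2
    return abs(num / den)
```

With a narrow notch (high Q), the 60 Hz component is nulled while frequencies even a little way off, say 1 kHz, pass essentially unattenuated.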
Pro Tools:Essential training
http://www.youtube.com/watch?v=M-yZ9AnN2K8&feature=related
Labels:
audio editing,
pro tools
Wednesday, 27 January 2010
Sound for Film and Television Instructional DVD from Barry Green and WBS
http://books.google.bg/books?id=wBlRtAlKPFsC&printsec=frontcover&dq=sound+on+film+and+television&source=bl&ots=pU8KqJb0cW&sig=_de0TctK14GF3IP56dUY_zsjmOQ&hl=bg&ei=4xhgS72RCaCYmAPbteXODA&sa=X&oi=book_result&ct=result&resnum=6&ved=0CCUQ6AEwBQ#v=onepage&q=&f=false
Labels:
sound in television


