/prod/ - music production general

'Hello, children' edition.

New to production? Check it: pastebin.com/p2QUqMzj

Post what you're looking for in feedback. GIVE feedback to get feedback.

Post WIPs on Clyp.it

DON'T link to Soundclouds or youtube channels

LAST THREAD:

Attached: 16218-Mixing-Jazz-With-Fab-Dupont-4-gallery-original.jpg (1920x1080, 300K)

clyp.it/qjxhmjbo
Tell me if you like it or if you don't like it

not a fan, ibra

Attached: 354f80899d5d8df2acd17098e46fcc93.jpg (480x360, 21K)

i am not op but if the chords were a little different i'd probably like it more, tho that's just me. i guess the vibe you're aiming for is kinda off-kilter wavey stuff?

vocaroo.com/i/s1YWtBySTNLj

Trying to spend the whole next week in the studio recording an album. Any tips from people who can record a whole project in a short amount of time?

>vocaroo.com/i/s1YWtBySTNLj
uh
avant garde? i vibe with the mp3 aesthetic but the guitar and keys are tuned differently

are there any vintage-modeled vsts with a good saxophone patch?
youtube.com/watch?v=YpOwyvD2ecg

Attached: 1566709325673.jpg (373x498, 33K)

YEYEYEYE SNATCHER OST BUMPS

youtube.com/watch?v=U6aOpFhO6zU

That might help, it's a VST emulation of the synth chip used in the Genesis, and it's patch compatible with files ripped from the games themselves.

eric valentine has a relatively new and quite good youtube channel.

very long, very detailed videos and he'll basically walk you through his whole session

recc'd

Thanks Eric. I'll check it out.

>get a new computer
>no longer have cpu issues within the first 20 minutes of producing
Feels good, though ngl it was fun having to bounce tracks all the time and using low cpu plug-ins whenever possible. It gave my stuff a unique vibe, but damn, running a bunch of kontakt libraries at the same time makes everything easier.

you can and should still bounce tracks bro lol

is it retarded to have a synth playing sine waves for the sub bass and an actual OD'd bass playing an octave higher but with a slight low freq cutoff in parallel?

I had to use a dedicated track bouncing project before and another one for mixing. Now I can just build the whole thing with generators and bounce when I'm done with most of the structure. It saves me a hell of a lot of time and work.

Nothing is retarded if it sounds good and doesn't cause problems in the mix.

>a slight low freq cutoff in parallel
What do you mean?

what do you mean is it retarded? just give it a go and see if you like the effect. nobody on this board is in any position to be imposing broad rules on other people about what is and is not acceptable.

he meant the od bass in parallel lol
eq cuts in parallel are retarded

the only retarded thing about this is that you asked. this is extremely common

sorry, in parallel as in both playing the same line (like, if the synth plays an E0 at 20hz, the bass will play an E1 at 40hz)
and with a slight low freq cut I mean cutting off the lower freqs of the bass guitar a little bit so as not to make it sound like a fartfest, leaving the overtones so it kinda keeps its timbre

sorry man I'm still pretty new, I thought most of the time they used the same synth sound but layered or just the bass guitar alone

Attached: 20160205_212114[1].jpg (3264x2448, 2.03M)

Yeah, it's normal.
The human brain can mentally reconstruct the fundamental of the sound from its higher overtones, so you can highpass the higher layer and still retain its pitch.
It's even less of a problem here, since you're reintroducing it with the lower layer, which does have its fundamental and is in tune with the upper layer.
I'm not sure that highpassing the upper layer is necessary (since there's no overlapping and the sine is an octave lower), but it all depends on how it sounds, so the choice is yours.

However, what is a problem here is that what you're doing would sound good shifted an octave up, so that the low sine would sit at 40Hz.
Now it's at an inaudible 20Hz and to have it you're removing the audible 40Hz one from the other sound.
In this range you're better off just leaving the normal bass as it is (it's still in the sub-bass range) and removing the sine.
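
For anyone who wants to hear the layering being described, here's a rough numpy/scipy sketch of it: a pure sine playing the octave below, plus an upper bass layer highpassed so mostly its overtones remain. The 80 Hz cutoff and the plain saw standing in for the OD bass are just illustrative choices, not anything from the posts above.
```python
import numpy as np
from scipy.signal import butter, lfilter

sr = 44100
t = np.arange(sr * 2) / sr                         # two seconds of audio

f_upper = 41.2                                     # E1, the note the bass plays
sub = 0.5 * np.sin(2 * np.pi * (f_upper / 2) * t)  # sine one octave down (E0-ish)

# crude saw standing in for the overdriven bass layer
saw = 0.5 * (2 * ((f_upper * t) % 1.0) - 1.0)

# gentle highpass on the upper layer so its lowest freqs don't pile up on the sub
b, a = butter(2, 80 / (sr / 2), btype='highpass')
upper = lfilter(b, a, saw)

mix = sub + upper                                  # the two layers summed
```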

hey we all start somewhere
it's better that you ask a retarded question and get a quick answer rather than making a stupid habit and learning the hard way later (not that you shouldn't experiment with breaking the rules on your own but you get what i mean lol)

the term "parallel" usually refers to processing on the same track - i.e. two identical tracks (or the equivalent rack if you're in ableton) with only one receiving an effect, or perhaps two different tracks going to one bus.
in any other instance it's already assumed that the tracks are playing side by side and not occupying the same track, so it's not necessary to describe it as parallel and it will only confuse people
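
To put that definition in signal terms, here's a rough plain-numpy sketch, with tanh standing in for whatever insert effect you'd actually use: the "parallel" part is just the dry path and an effected copy of it summed together.
```python
import numpy as np

def parallel_process(dry, effect, wet_gain=0.5):
    """Duplicate-track style parallel processing: dry path + effected copy, summed."""
    wet = effect(dry)
    return dry + wet_gain * wet

sr = 44100
t = np.arange(sr) / sr
bass = np.sin(2 * np.pi * 110 * t)                        # any source signal

out = parallel_process(bass, lambda x: np.tanh(4 * x))    # tanh as a stand-in distortion
```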

you're welcome, anonymous. please enjoy this photograph of my somewhat attractive and moderately successful musician-wife. i impregnated her recently by ejaculating into her vagina.

Attached: grace2018.jpg (2500x3750, 2.24M)

gay

I think you might want to try E1 and E2, rather than starting at E0.

listen, most systems won't even reproduce a sine wave at 20 Hz. it's far too low.

oh sorry it was just an example as I usually improvise on my guitar or piano to come up with ideas so I usually play in E or C/A, but I transpose everything to A or B since A0 is the lowest note my subwoofer can play considerably loud and that I can still hear with my headphones (although barely)
so I was thinking of riffing on the bass A string

thanks guys

I really like pussy. Am I gay?

yes

Attached: lisa-bella-donna-moog-grandmother.jpg (1200x674, 85K)

forced and still not funny

>having a musician wife
I'm jelly

we have a little house so I don't have a dedicated studio room, so every once in a while I find everything piled up in a box and a sparkling clean living room
I mean I'm not mad, I know equipment takes a lot of space, but come on, it doesn't look that bad as long as it's clean and you don't have cables hanging from the ceiling fan

would eq'ing down the sines' upper freqs be a better idea? I just want that super low fundamental without it crashing with the upper bass freqs
read this:
still kinda confused as I like to use sampled kick drums and those things look like a Winamp visualization plugin when you check them out with an analyzer lol
anyways don't want to bother you too much guys as I could learn this stuff later by myself, but a little help is always great

gotta open my daw, you guys are cool

Attached: 1400949763287.jpg (750x500, 79K)

>the sines' upper freqs
No such thing.
Look at it on an analyzer. There's only one frequency in a sine.
Putting a filter on it only alters its volume (which with an LPF on the range played by the melody means that the higher the note, the lower the volume).

>I just want that super low fundamental
You shouldn't, because you can't hear it, speakers can barely play it, and it eats up headroom for no reason (which means it will make your song quieter).

>without it crashing with the upper bass freqs
If it's a single pure sine, it's not gonna clash if it only plays notes that are lower than the bass.
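
If you want to verify the "one frequency" point for yourself, here's a quick numpy check (nothing thread-specific, just a pure sine and its spectrum):
```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr
sine = np.sin(2 * np.pi * 55.0 * t)                # A1

spectrum = np.abs(np.fft.rfft(sine))
freqs = np.fft.rfftfreq(len(sine), 1 / sr)

print(f"dominant component: {freqs[np.argmax(spectrum)]:.1f} Hz")
# prints ~55.0 Hz; every other bin is essentially zero, so there are no "upper freqs" to EQ
```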

m80, why don't you look into a subharmonic generator like bx_subsynth or mbassador and save yourself a lot of time. manually syncing a sine wave, depending on the complexity of the bass line, would take a while to sound natural.

the other poster is quite correct re: no harmonics on a sine, although i'd disagree with the 'you shouldn't' part, if you want the additional sub weight. just be careful with it and realise that as you extend below 30-something Hz you might as well not bother because almost no one's going to hear it.

holy fuck i'm retarded, i thought a sine was composed of the fundamental + harmonics going down in amplitude for some reason

>m80, why don't you look into a subharmonic generator like bx_subsynth or mbassador and save yourself a lot of time. manually syncing a sine wave, depending on the complexity of the bass line, would take awhile to sound natural.
i'll look into that, although i don't care about it sounding "natural" actually, just clear
>below 30 somethingish you might as well not because almost no one's going to hear it.
almost no one's going to hear it anyways, i'm not a professional
i just like making music i like to listen to

>You shouldn't, because you can't hear it, speakers can barely play it, and it eats up headroom for no reason (which means it will make your song quieter).
i'm aware it's going to occupy headroom, but an A0 is audible even with headphones, and not even at a super high volume, and a subwoofer can play it no problem
it's below G0 where you get into the "i don't know what pitch this is or if it even has a pitch" territory
play an A0 with a sine m8 you will hear (and feel) it clearly, even if you have something else playing over it

Attached: sine.png (685x651, 74K)

btw that's an A1 in the picture, used it to hear it more easily while checking it in the analyzer

clyp.it/tghbnwi3

Trying to make glitch-hop, am I on the right track?

Jesus Christ, do not start any dumb ass conversation or meme material from this

Don't play notes with a fundamental below 30hz, that's literally it. Ideally a few Hz higher than that, because quite a lot of systems cut off even above that.

Yes you can hear harmonics of notes below that if it’s not a pure sine or you distort or whatever the fuck but you’re fucking yourself over with a weaker bass
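
For reference, here's roughly where that ~30 Hz line falls on the keyboard, using the standard equal-temperament formula with A4 = 440 Hz (a quick sketch, not anything the posters ran):
```python
NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def midi_to_hz(n):
    """Equal temperament, A4 (MIDI 69) = 440 Hz."""
    return 440.0 * 2 ** ((n - 69) / 12)

for n in range(21, 37):                            # A0 up to C2
    name = NAMES[n % 12] + str(n // 12 - 1)
    flag = '  <- fundamental below ~30 Hz' if midi_to_hz(n) < 30 else ''
    print(f"{name:>3}  {midi_to_hz(n):6.1f} Hz{flag}")

# A0 (27.5 Hz) and A#0 (29.1 Hz) fall below the line; B0 is ~30.9 Hz and C1 is ~32.7 Hz
```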

To all the producers here: You can't do shit. I am an actual artist and am going to change and mix shit with different samples etc to make not full songs, but whole albums that include bridges, intros, outros and many different techniques to approach the samples I find. To me it's disgusting how you wannabe artists come here, take three seconds from Rubycon for a loop and think you're the shit. You are nothing.

Am I supposed to backup my FL Studio folder? It's like 80GB of plugins and effects. (all HDDs die eventually)

I kinda wish the plugins and effects were automatically copied to the .flp file's directory so I could simply backup the things that I'm actually using.

.... do people backup their vsts? wtf? your project should recall everything

... i backup reaktor files but that's because everybody's library is different

...does fl not have a freeze option? that's my last safety net in ableton - i backup everything, but with vsts i don't trust 100% to recall correctly, or things i know could be easily lost, i always freeze

is there a max for live device like:
gumroad.com/valiumdupeuple#vDPJ

specifically the insist part, but a single device to bring in multiple tracks? getting bored of dealing with ableton's always-stuck-to-the-right return tracks making session organisation clunky as shit.

also, for the love of fucking god, ableton, please let us hide tracks and give us a proper routing matrix.

make one retard.

also, insist is still a half-baked hack, because you can't fake a post-fader return track without losing the ability to solo the track.
i'm this close to dumping live because it actively makes life hell for people dealing with large sessions.

Wtf dude just insert an audio track and make that your send it’s not that hard lol

When you lose your plugins and samples, what remains in your .flp files is nothing but note patterns. I think I should just accept the approach of completing something and moving on, because backing up 80GB is too much of a hassle.

It's not the end of the world to lose your plugins folder if you remember to export all your practice songs and keep those backed up. cuz then at least you'll have the final products of your efforts.

a standard ableton track cannot function like a proper return track because you're not sending anything to it. you're just monitoring a single channel as an input. so if you want to send multiple tracks to this 'return', you need to buss them first, then monitor that track. but when you do that you lose the ability to control how much you're sending on a per track basis.

so then you go and get a max for live device like the one i linked to make up for that.

but you still can't monitor your return if it's post-fader because as soon as you solo it, no audio is actually getting sent post-fader.

it's a broken, shit system.

Squawk Squawk Bitch
clyp.it/t3rermcu

Threw this bit together quickly. Trying to make one or two tracks a week to challenge myself.

Going to replace the squawk with some black dude or downtrend white dude sayin something like "fuckin in the club, bitch on my dick".

Try "bitch in tha club, fuckin on my dick". It's good to subvert expectations linguistically.

Bruh backup your shit wtf are you doing?
And no shit you should export final songs lol

I don’t use fl so I can’t call you on the accuracy of that, but ableton at least prompts you to reload vsts that it can’t find.

Okay, I see the dilemma now. I can’t relate to this problem though as I can’t think of any reason I’d do that rather than just use a regular send or bus.

What are you trying to accomplish? I know of plenty of live's shortcomings (and don't care to switch still because it suits me) but I'd like to know what makes it unusable for other people

>but ableton at least prompts you to reload vsts that it can’t find.
Yeah of course you can reinstall vsts and samples and it'll work just fine. But good luck when your plugins and samples folders are so huge and varied that you don't even know where it all came from anymore.

Just had the worst sound design session of all time, I feel sick

it's mostly a session organisation problem.
printing stem audio for delivery/post-processing/final mixdowns means it's often better if you just split up return tracks by logical groups - the drum reverbs are only for the drums, the synths get their own, the vocals their own. so when you export you don't get these clusterfuck return tracks with disparate shit on them.

now, you can, of course, do this in ableton, but your returns are now stacking up on the right side, instead of being actually next to the things that are using them. not only that, because sends/returns aren't track specific, you now have 10 of those dumb send dials on all your tracks.

and when you go to the sorry excuse for a mixer that live has, and you show your returns because you want to change something you've suddenly got 9 other tracks wasting space.

the real problem, tbh, is that i went off and played with some other daws for a bit and realising ableton is missing so much basic shit makes me never want to mix in it again.

>good luck
I don’t need luck, I’m organized you slacker

We’ve all been there man :(

Thanks
Glad I’m all in the box and working alone :^)

oh? how big are your folders and what do you use to backup

in a perfect world, all DAWs would do all their routing like pic-related. but because people in music are brainlets, they're terrified of node graphs.

Attached: HoudiniGraph.jpg (1072x684, 144K)

68 gigs, external hard drive lol

nah dude i'm fucking with you, losing my vsts.... or really a whole clean install would be mostly just an inconvenience

95% of the vsts i use are just stock ableton shit and native instruments (not a cluster to install and very easy/organized to backup all my presets)

the other 5% is a few free mixing vsts i use for every song, serum and iris 2..... i mean i'm organized but i just don't use *that much* shit and i know exactly where everything came from

oh

What are some essential drum plug-ins to get?

just now I was wanting to know that exact same thing. it would be handy if it came with a million presets too, samples are annoying.

>I kinda wish the plugins and effects were automatically copied to the .flp file's directory so I could simply backup the things that I'm actually using.
ah I found a solution, File -> Export -> Project data files, to copy the samples used, now I can backup only what I need to.

very new to this please help me improve and be as critical as possible
clyp.it/o4eokwc0

Use Max/MSP or Pure Data and you can do that with everything.

clyp.it/5jxfpvdg
also this one, first song i ever made but i think it's cute

So I've been producing for a year just through watching busyworks tutorials and I've come to the realization that I have no idea how to make a beat without watching his videos. Is there anything that'll teach me enough that I can make a beat without needing to watch a video?
I was thinking of watching seamless but I have no idea where to start with his videos, should I just say fuck it and start learning sound design and hope the song structure and other stuff just comes naturally? Are there better tutorials out there for different DAWs? How did you guys do it?

Can't you just do it? This shit isn't hard.

why do you have to be a dick? obviously he can't "just do it," and obviously it IS hard for him

>Don’t play notes with a fundamental below 30hz that’s literally it.
anything i play under 40hz is just sines, if i want to make a broader spectrum bass sound i use layers (as in, the sub playing clean sines, then the same line but with a hpf so only the overtones can be heard, and so on), or i just simply record it with a 5-string bass guitar

>Yes you can hear harmonics of notes below that if it’s not a pure sine or you distort or whatever the fuck but you’re fucking yourself over with a weaker bass
i don't know where this came from

>Ideally a few higher than that because quite a lot of systems cutoff above that.
oh i think you are misunderstanding me man, sorry if i wasn't clear
i'm not a professional producer nor do i want to be, i just started in this since music has been my hobby for my entire life and i just found out about how fascinating this whole audio signal processing thing is
i don't care if most systems can't play it, i know they can't, but again, i'm not a paid professional, so i don't really have to take much care about the final system it is going to be played on, as it's probably going to be listened to by me only, and i have a dedicated subwoofer + other drivers of different sizes with freq filters, so at least for me, it's no problem: the low bass in my stuff is not too loud in the mix since that's what my amps are for
and if i want to listen to it in my shitty phone i have another version where i lift the bass up an octave or transpose the key up to a fifth depending on how low it goes
just because it's fun
i really do know where your post is coming from man, and i really appreciate all the advice you guys are giving me as this kind of "lol retard here's what you are doing wrong" advice is the most helpful of all because it's sincere, but i'm not occupied with fitting into quality standards or even thinking about it being able to be played accurately on a broad range of different devices because atm it's just for me

Attached: 1316029683653.jpg (2576x1936, 744K)

obviously this doesn't mean i don't want to learn about industry standards and good practices, it's just that i'm still an early beginner and i'm not really bothering about it right now since taking off the creative mask to put on the technical mask isn't quite as easy
but i'll get there

I'm not good enough to identify the issues but there's a lot of clashing going on. It sounds messy, like the patterns are all doing their own thing instead of playing off each other.
this one's very good.

Here's something I threw together real quick. I have no idea how else to expand on this besides just adding one element indefinitely until I can't think of anything else to add. I might just get a course on udemy.

Thank you very much for the feedback! and I see, which parts do you feel conflict the most with the chords (first thing you hear)? I am a bass player so I might have gone a little bit overboard with that hahahaa

Shit forgot the link

clyp.it/u4ajpign

Lol, you know what, I think I'm going to go back to making classical music

clyp.it/4p5uiwhy

vocaroo.com/i/s0MrI9qonzFJ

quick mix of a song im working on. How do the sections at 1:50 and 3:30 sound? Think the low end needs to be louder or maybe add some sub. kick could use some work too but need some outside opinions

hi
hope you are having a nice day
clyp.it/wpcyydwo

clyp.it/rfmlbt0f
pbjt

>How do the sections at 1:50 and 3:30 sound?
do you mean the background synth staccato?
it's a nice nuance but i think it would be more gratifying not to resolve it by looping it, but to take it either higher and higher or sustain it on a note that creates a bit of tension until later in the section
i particularly mean the 1:50 part
just an opinion bro

vocaroo.com/i/s18Cw6VR09gE
any ideas on how i can make this even MORE GOOFY?
we're making a song about some dude from panama who's a communist-wiccan-transexual-adventist-racist-feminist-pedophilic weeb who's been obsessed with a girl who doesn't like him at all for like 5 years
the song is a call for her to stop cucking him (the synth that sounds kinda like a wind instrument is just a guide for the vocals, as it's going to be sung by a guy who doesn't sing for shit but has a pretty funny voice)

Attached: tracks.png (314x533, 26K)

responding to the politeness. thanks (:

the music should follow the dynamics of the drums

Hello all, does this make you want to dance?

vocaroo.com/i/s1eQcfZWiswD

no but it makes me wanna do something cool with computers and machines with lots of leds and knobs

i took your advice
clyp.it/bcxan1sp
how did i do?

Do you listen to this music? If so then I don't understand how you can't pick apart the most important aspects of it yourself intuitively especially after watching videos for one year. I don't like this tutorial trend at all. Back then people figured this stuff out by themselves and the music was better off for it.

Dude, to be a good producer you have to listen to a fuck ton of music. Every free minute you have, you should be listening to something new. This is how you innovate and this is how the beats will come naturally.

this sounds great. well done.
i don't really know glitch hop though.

thank you :). Just getting back into producing after a 2 year break.

>quickly
no kidding

yeah. do each track in one, max 2 sittings, in a single day.
an 8 hour writing session is not the same as two 4 hour writing sessions
perfect is the enemy of the good
dont listen to your music on loop. if you can hear your music, you sure as shit better be working on it (debatable, but if you're a newbie stick with this rule. a lot of people sit there listening to their loop over and over and then it starts to sound trash to them. they psyche themselves out)
finish EVERYTHING. you don't have to mix/master it, but finish EVERY arrangement you possibly can. even if it's just copypasted out to 3 minutes with some basic transitions here and there, call it a finished track. don't leave it a 'WIP'. WIPS are the enemy of good tunes. WIP mentality will get you in the gutter (another thing which is extremely important when you start off, and less and less important as time goes on. if you don't finish shit, you will get into the habit of not finishing shit)
dont send unfinished stuff to people for critique. i'll give you all the critique you need right now: it sucks. i don't care who you are, fucking bob dylan, mozart, fucking ed sheeran aphex twin whoever the fuck. it sucks. and that's all you need to know. it sucks and everything needs improving. EVERYTHING. don't fucking go online asking for critique for your unfinished shit. finished tracks maybe.
keep it simple, even if you're writing an orchestral suite. as simple as possible to get the message across / do what you want to do musically

Attached: 27788447_1829879337085616_9220054563444963502_o.jpg (1080x720, 96K)

don't worry about mixing or mastering at all during the writing/producing process (besides the basics like 'is this bass even audible on a normal speaker, if not i have to replace it')
if something will take more than 5 minutes to implement (i.e. designing a synth sound or getting an effect 'just right'), don't do it. you will lose too much creative momentum (this is why having a prepared sound library in advance is so important if you're making BEATS)
set a deadline
make a 1 bar loop, then turn it into a 2 bar loop, then make another loop with some different shit with a few same/similar elements. then once you have a few different very clustered sections, duplicate them out to fill 4-5 minutes, and delete stuff to shape it into an actual arrangement
have shit with you, snacks and food and shit so you don't have to get up and take an extended break
obv. most of this shit slanted towards electronic, DAW focused music which is what i make

on avg. tracks take me 3-4 hours real time. when i started they took 8-10 over 1/2 days. the interesting thing is a 3-4 hour session now is just as (if not more) draining as an 8 hour session was when i started (~7 years ago). it's like i'm reading a book at 4x the speed i used to: my brain still has to process the same amount of content (musical decisions, if you will) and is taxed by it

the quickest i recorded a whole (worthwhile) album was about 2 months - not every track makes it to an album, maybe 1 in 3 or 4. the actual mixing and polishing of everything will take you a lot longer than the songwriting

Attached: 396-600.jpg (600x450, 70K)

Liking pussy is much gayer than liking penis, which is really not that gay. If you like penis, you just like the penis, whereas if you like pussy then you like everything else that surrounds the penis and those body parts are all far more masculine than the penis and greater in number than a single penis, and thus is in fact far more gay than the penis itself. It’s just delusional to pretend you don’t like men when you like many more, far more masculine parts of the anatomy than the penis, when, especially, if you find a penis that’s feminine in appearance then it’s not very gay at all. Not to mention the vagina is a warm wet hole similar to the anus and very near in proximity to the anus, such that many men prefer inserting their penises into the anus over the vagina. Very gay indeed to be presented with a choice and select the far more masculine of all options. Whereas a single penis and anus, which is the only option, is far less gay because no choice is provided.

What the fuck are you trying to say?

I'm not sure if I understand what you need, but you can already route any signal anywhere by using any of the stock devices that have a sidechain mode (compressor, gate, etc.)
Just put the compressor where you want your audio to go, set it to sidechain from where you want to take your signal (if it doesn't appear in the menu, just put an empty audio effect rack with an empty channel and choose that in the second SC source menu), then click on the blue headphones icon in the compressor to hear the SC source instead of the compressor's output.

Unironically Serum.
It has a sampler with basic functionality and you can use the LFO as an envelope to get the perfect curves.
Obviously, on top of all the normal synth stuff that Serum is famous for, which can be extremely useful when making drums (all the wavetable and FX modulation you can have to create interesting transients, tails, etc.) as well as giving you tons of control over every parameter that you wouldn't get anywhere else.

Sonic Academy's KICK 2 is also good if you don't want to use Serum, and has the advantage of showing you the phase of each layer so you can align them perfectly, as well as the ability to snap the nodes to specific notes for the pitch envelope.

We were speaking of beliefs; beliefs and conditioning. All belief possibly could be said to be the result of *some* conditioning. Thus, the study of history is simply the study of one belief system deposing another, and so on and so on and so on... A psychologically tested belief of our time is that the central nervous system, which feeds its impulses directly to the brain, the conscious and subconscious, is unable to discern between the real, and the vividly imagined experience. If there is a difference, and most of us believe there is. Am I being clear? For to examine these concepts requires tremendous energy and discipline. To experience the now, without preconception or beliefs, to allow the unknown to occur and to occur, requires clarity. And where there is clarity there is no choice. And where there is choice, there is only misery. And why should anyone listen to me? Why should I speak, since I know nothing?

Attached: 2014_07050013.jpg (800x600, 605K)

that's a really clever solution, but it still has the same post-fader solo issue unfortunately. it seems to be a hair less cpu intensive than the max device though, so i'll probably build an effects rack preset to keep handy.

fuck i have to get some sleep but at least i got this general idea written down
vocaroo.com/i/s1E8oULg3oNB

Attached: 1536772980398.png (657x527, 63K)

>the same post-fader solo issue
sorry i'm retarded
what do you mean?

What kind of melody do I put on this so it doesn't sound too repetitive?
clyp.it/gczz4jfb

Attached: 1511986088668.jpg (1200x900, 303K)

in the compressor example, if you select a track and then set it to post-mixer, you can't solo the fake return track any more.

I see...
Yeah, you either need to set it on post FX (or just use the audio track's input to it), or use the returns (which do still work when soloed).

If you really need this workflow, I guess you could solo the track by setting its output to "Ext. Out" and muting the master.
This way the original track remains untouched and still keeps playing out of the fader while you can't hear it, and the receiving track remains the only audible thing.
The only disadvantage (apart from it being a two-step process instead of a single click) is that it won't go through whatever you have on the master, which could be an issue depending on how you use the master chain during production/mixing.

EVERYONE STOP WHAT YOU'RE DOING AND READ THIS

cdm.link/2019/08/reason-11-new-plugin/

Attached: 898.gif (301x300, 949K)

eh, i'll just deal with the organisational problems that come with live's weird segregated returns instead of doing this, most of the time.

at least ableton's showing some signs of life, with the recent attention they've given arrangement view. hopefully the mixer will get a proper overhaul by 11. ara support by 10.5 prob.

for their sake, i hope push has largely been a commercial failure. they've devoted far too many development resources to it.

if live wasn't so baked into my muscle memory i'd have just switched to a different DAW.
that said i'm going to do a project in reaper just to see how the transition goes.

they really are in trouble aren't they. i'd expect them to go bust in

>Reason Rack Plugin
Hopefully it uses the same protection as Europa, which was cracked in no time.

omg lol

>live's weird segregated returns
Have you tried color-coding them?
If you tend to always use the same colors for each element of the song, you can just use the same color for the corresponding return tracks to make it more intuitive.

What do you think? I don't know if I did a good job on the piano.
clyp.it/mqkc12er

I don't understand.
Do you have to buy the full Reason 11 to get the Rack plugin VST, or can you just buy it standalone like with their other synths?

Attached: Reason-11_press-image-1-1920x1080.jpg (1920x1080, 382K)

yeah, i colour code and name everything obsessively. it's helpful, but it would be nice if i could just move them around. or filter tracks. or hide tracks. it's little QoL stuff that really starts to matter when you're on your X thousandth hour inside a program.


ableton buying cycling and making max for live so tightly integrated was such a weird move on their part because it brings a very power-user oriented feature set, while the rest of their development philosophy seems like it actively ignores those same users. no custom hotkeys, no track template, no programmable macros/actions for frequent tasks, no vca faders, no way to monitor what's actually taking up cpu cycles etc. etc.

That's Ableton for you.
A ton of features that we've been voting for (on their feature suggestion page) since several versions ago are nowhere on the horizon, and only a few are being introduced now.
They'd rather focus on the fewer features they have and make them really polished than add stuff the way the other DAW developers do.
Thankfully they started adding more small and qol stuff recently (starting with the 9.* releases), so we could start seeing a few of these in the near future.
What bugs me is that some of these features are being introduced as M4L devices (such as the modular CV integration), which I don't really prefer over regular devices, but whatever... It's still better than nothing (and it has the advantage of allowing you to mess with them in Max, which is good when the plugin isn't too complex).

holy fuck i woke up and it sounds like ass

yeah, i hear you re: the max device stuff.
i remember when they first brought in m4l and then thought it was okay to just give us max based drum synths and a max based convolution instead of proper native devices.

this was before max was loaded on start up and you had to wait for that dumb maxforlive start up screen that popped up when you first put a max device in. made me never want to use them and i still never do. outside of some of the basic modulation devices and a couple of Rene clones, i don't think a m4l device has actually made it into a session.

they don't seem to be doing as much of it now, thankfully.

the cv tools are neat, but like, how many of their users actually have external hardware and dc coupled interfaces to go with them? and with reaktor available + vcv rack vst around the corner, i suspect most of their users would rather just use those instead.

Has anyone had success with taking lessons for an instrument? I’m considering just a few lessons each for piano, guitar, bass, and vocals just to get proper technique and not develop bad habits or screw myself
I get the feeling that most people here and in general are self-taught but wouldn’t it be worth it to “skip” the most frustrating part of getting these skills?

clyp.it/ll0oomwm


any feedback

>I get the feeling that most people here and in general are self-taught but wouldn’t it be worth it to “skip” the most frustrating part of getting these skills?
Yes it's always worth it to take lessons if you can afford it. Paying someone to give you constructive criticism and to give you a sense of urgency to practice your craft before lessons (because you're spending money on the teacher) is always worth it

>I’m considering just a few lessons each for piano, guitar, bass, and vocals just to get proper technique and not develop bad habits or screw myself
If you just want to be a composer and producer (not a performer in any kind of capacity), in my opinion it's more practical to take only the piano lessons first and then continue with lessons in composition and arranging

but I don't want my cpu to max out on one instrument

How is it any different than using Kontakt or Reaktor?
If you have CPU problems just use it with one rack at a time and it essentially becomes just a Rack Extension bridge for your DAW.

I don't know, I'm guessing. Maschine's fx plug-in is CPU heavy and I don't use it

Free drum samples

reverb.com/software/samples-and-loops/reverb/3514-reverb-drum-machines-the-complete-collection

I'm not super fanboi pumped but I am interested.
I don't like the pricing difference, I'd rather they skip all the old instruments and just include all the actually new stuff in a $150-200 upgrade (because I already bought most of the other ones).
If the new RE instrument is cool enough it might be worth another $120 to me to get that and Parsec at the same time though.

I hope so for you guys, all the wiring options are hours of (non-productive) fun and combinators are super useful for creating complex instruments.
Plus I think this bodes well for the two most autistic DAW fanbases (Reason and Renoise) who may now be able to combine retard strength instrument creation with spastic tracker workflow speed.

thanks!

pg 7 nope

is there a simple 8 band eq in ableton that's just flat and not set to some special thing? I'm used to renoise and all the eqs i find in ableton have crazy settings

uhh eq8?

you sure you haven't accidentally set some weird preset as the default for eq8?

clyp.it/s5mpxlxk

Thank you for having me. This track was inspired by Jeffrey Epstein's suicide.

I just got Serum the other day. The presets are fun. I'm still learning what it can do. It's amazing and sounds great, but how do I get the most out of it? Who has the best tutorial on Serum's possibilities?

Thank you. And if you listen to the track, especially those with good monitors, how does the mix sound? Do the frequencies blend well or are they muddy?

Attached: 1563622771248.jpg (1024x512, 90K)

With any Ableton device you can take a setting and make it the default that the device loads up with, so if you don't like some default preset, you can always change it.
I think the button to do that is in the context menu that appears when you right-click the title bar of the device.

Pretty new to this - I heard that one of the best ways of going about learning composition is to recreate songs you like in your DAW in order to learn techniques. How would I go about doing this? I know some basic music theory, but I'm not good at finding chords or anything like that. What's the best workflow for this?

If your DAW doesn't have one already, get a spectrum analyzer that tells you which note any frequency you point your mouse to is, then find the fundamental peaks in the spectrum and see what notes they are.
Or just use your ears and push keys until you find the right ones.
Or get Melodyne and hope that its detection algorithm is correct (it usually is).
Or google "[song name] chords".
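
To go with the analyzer suggestion, here's a small sketch for turning a peak frequency into a note name (A4 = 440 Hz assumed; the cents figure tells you how far off the nearest note the peak sits):
```python
import math

NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def hz_to_note(freq):
    """Nearest equal-tempered note to a frequency, plus the deviation in cents."""
    midi = round(69 + 12 * math.log2(freq / 440.0))
    cents = 1200 * math.log2(freq / (440.0 * 2 ** ((midi - 69) / 12)))
    return f"{NAMES[midi % 12]}{midi // 12 - 1} ({cents:+.0f} cents)"

print(hz_to_note(261.6))   # -> C4
print(hz_to_note(196.0))   # -> G3
```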

That's super helpful thanks, it's a little difficult because a lot of the songs I'm trying to learn from don't exactly have chords available on the internet, so those methods should definitely help out.

epic, sounds like a good FF track

Glad to help anon.
I suggest ear training for detecting notes and chords by ear.
teoria.com/en/exercises/
tonedear.com/ear-training/chord-identification

If you find it difficult at first and you want to quit, try starting with popular songs that you can find chords for online, and once you get the hang of it you can move on to more difficult stuff.
Or just go straight for the hard stuff if it's no problem.
Good luck.

instaud.io/3UeY
What do you think about this?

I honestly don't know why I hate those snares so much
Also trap sucks
Nice sample I guess

Wait why have a verse with bass and one without?
Anyway I'm not sure about that bass. I dunno. Maybe the notes are too low

Every fucking time I update Windows my NI komplete audio 6 starts doing the popping and clicking thing. All my battery throttling is off and my buffers are all set high for it, but it still persists. What else can I do at this point?

The piano sounds very midi and not human. You can search on the internet for how to fix this.
The harmony is very dissonant and I don't like it.
The composition as a whole is mediocre, but sadly I suck at music too so I can't really offer you help here. Practice I guess

What's a basic drum sampler where I can just load the kick, load the snare, etc. and play my keyboard? I know I can use a regular sampler, but the extra step of having to map regions to my keyboard is annoying

What's your DAW?

any JV1080 owners around?
how do you sequence drums on this thing
i can't find any drum sets like with the usual general midi presets

nearly any generator in FL

I have one, but never used it for drums. I guess you'll have to play through it to find what's what and program on the piano roll

i figured it out
had to switch to MIDI channel 10 and i could use the drum sets under Rhythm

hm, I wonder if ch10 was ever a standard drum channel

>absolutely do not need more drum machine samples
>download anyway

every time :(

>tfw my posts always die with 0 replies

what do you guys think of this progression?
it sounds way better in guitar desu

www91.zippyshare.com/v/czihap1h/file.html

new educational slapp.


Lesson of the day, kids? Don't polish/shine a shitty song. Make it good before cleaning

clyp.it/q425tepj

>mfw someone tells me to add some WOW factor to a particular section

Attached: pfsc.jpg (154x152, 18K)

How to make a synthesized shoegaze guitar on a DAW? (fl studio or ableton i don't care)

Attached: 73.png (368x476, 98K)

Sounds dissonant in a few notes. Just adjust it a bit. The higher notes in particular.
Or maybe that was intended. Either way i don't really like it desu

>analog optic fiber guitar cable
youtube.com/watch?v=m6Sw3kplhjA

Wouldn't there be a DAC and an ADC at the two ends of the cable, making the analog part useless?
If it's converting to digital anyway, why not just use wireless mics?

Attached: Iconic-Sound-Lightlead.jpg (770x425, 27K)

>analog optic fiber guitar cable
why

In the video it says that it's because audio quality in copper cables degrades the longer they are and as they get old and worn down, and this product doesn't have this problem.
But wireless also doesn't have this problem, so if sound quality is an issue, why buy this instead of wireless?

Reaper

Let me get this straight:
The guitar generates an analog signal that gets converted to digital, travels through the optical fiber, then gets converted back into analog, and goes out.

What's the advantage of the fiber then?
Why not any other digital cable like USB or HDMI?
It would be even better because they could sell these adapters and you could use any cable you wanted and easily replace them when they break (which is far less likely to happen than with an optical fiber cable).

Gearf**kers will fall for anything. Simple as

Generally make a lowpassed saw with a bit of pluck. Make sure the lowpass is deep enough it sounds dull and uninteresting and then turn the gain down until you think it's nearly unusable, then route that through a guitar amp plugin and a TON of stompbox effects.

I would assume reaper would have them too, but you should just pirate fl studio it's very easy to use

u like?? I tried to do a ping-pongish pan, does it work?
clyp.it/x4hifoi4

you can't, you have to use a guitar

battery if you want a full on drum sampler

you can also do this with multiple instances of reasamplomatic. google for 'reaper ableton drum rack'. you can just set it up once and save it as a track template

Anyone got any tips on how Jack Stauber gets that specific sound in his videos/music?

Channel 10 ended up being the default for drums on GM modules, but it was kinda accepted that people would use it as the default even before that showed up, and GM really just standardized the note mappings - kit listings and note mapping for the JV-1080 are in the image from the manual, but the GM ones in preset D will be true for all devices.

Attached: JV-1080 Drums.png (2037x1623, 1.05M)
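
If you're sequencing from code rather than a DAW, the channel-10 convention looks like this; a sketch assuming the mido library and whatever MIDI output the JV-1080 is connected to (mido counts channels from 0, so GM drum channel 10 is channel=9):
```python
import time
import mido

out = mido.open_output()             # default port; pass a port name to target the JV-1080

KICK, SNARE = 36, 38                 # GM drum map: 36 = bass drum, 38 = acoustic snare
for note in (KICK, SNARE, KICK, SNARE):
    out.send(mido.Message('note_on', channel=9, note=note, velocity=100))
    time.sleep(0.25)
    out.send(mido.Message('note_off', channel=9, note=note, velocity=0))
```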

Thanks man. How do i make pluck? I use serum. Also for the stompbox is there an FL studio equivalent to that?
So far i've made something decent that sounds like a shoegaze guitar but it sounds too harsh. I want a sweet, candy-like wall of sound like sweet trip
shut up

Attached: Untitled.png (949x883, 500K)

...

Hi :)
Take a listen, leave some feedback, and have a nice day!
clyp.it/ccxenrbs

No

Sounds dope

the mix is a mess

that's weird
ive had people tell me the mix is fine
what's your opinion on the structure?

those people are lying to you.
your mix is a mess. structure is irrelevant till you fix your mix.

what are the glaring issues with my mix?

I've been messing around with an online tool that lets you play all the chords in a scale. Literally every combination of 4 note chords in a given scale sounds good. Songwriting is trivially easy.

I mean like any given guitar pedal. Shoegaze you're gonna just want to chain up compressors, distortion, flangers, reverb, delay, and whatever else in huge serial and parallel chains.
Pluck should just be a saw with a touch of attack, no sustain, minimal release, and decay to taste. Filter envelope should be basically the same, with velocity mapped slightly to filter cutoff so you can pretend to pick the strings harder.
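
A rough numpy rendering of that pluck recipe, in case it helps to see the envelopes written out (the time constants and cutoff range are just starting points, not anything the poster specified):
```python
import numpy as np

def pluck(freq=220.0, velocity=0.8, sr=44100, length=1.0):
    """Saw -> time-varying one-pole lowpass -> fast-decay amp envelope."""
    t = np.arange(int(sr * length)) / sr
    saw = 2 * ((freq * t) % 1.0) - 1.0

    amp_env = np.exp(-t / 0.25)                          # no sustain, short decay
    cutoff = 200 + velocity * 4000 * np.exp(-t / 0.15)   # filter envelope, velocity to cutoff

    out = np.zeros_like(saw)
    y = 0.0
    for i, x in enumerate(saw):
        a = 1.0 - np.exp(-2 * np.pi * cutoff[i] / sr)    # one-pole coefficient per sample
        y += a * (x - y)
        out[i] = y
    return out * amp_env

note = pluck()   # feed this through your amp/stompbox chain afterwards
```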

That's false, structure is more important because it's less dependent on listening equipment.

it should be but in practice the average guy does not care about a good arrangement if it's mixed like ass, there are plenty of poorly arranged tunes that were still successful thanks to the sound

hi everybody, love you all
feedback?
clyp.it/taa3wycm

url?

I love you too

Hey, Sadhguru, I didn't know you browsed 4channel. Do you also make music?
My interpretation is that buying Reason 11 also results in you getting the rack and there isn't a way to just buy the rack since the rack is everything that Reason is except for the sequencer... basically.

Help me.

I bought XLR to RCA left and right cables and I connected them from my studio monitors to my audio interface.

The problem is they keep connecting and disconnecting and I don't know how to get sound and audio output controls.

Please help.

Yeah I think you're right, also they've confirmed that R11 STILL doesn't have VST3 support and the rack VST won't do MIDI out so you can't use Reason's players to control your other VSTs.
Sweden yes and their new logo is stupid.

TODAYS BEAT DUDEEEEE fuckkk idm and those wanky sounds amiright :^)

clyp.it/udkxgc4z

Attached: 1545880079488.png (730x1000, 599K)

that makes me feel weird, it tickles my ears yes but I try to follow with my mind's eye and that's just making me dizzy. Is this a hypnosis soundtrack?

I like tremolo pads

why does music sound so shit without reverb
scientific answer please

Reverb makes everything sound better.

Hit me with some good convolution reverb plugins

unironically because humans were living in caves when music was invented, and for thousands of years our ears and brains evolved listening to cave echoes

Where can I get impulse responses of caves?

what do yall think of the mix on my shoegaze track. bass is supposed to be prominent.

clyp.it/1w3tvufl

even if you're joking somebody else might want to know:

look up echothief impulse responses. that's like, tip of the iceberg for when you start searching for free impulse responses lol
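
For anyone who wants to try those IRs outside a plugin, convolution reverb is just one long convolution. A minimal scipy sketch, assuming a mono dry recording and a mono IR at the same sample rate (the filenames are placeholders):
```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

sr, dry = wavfile.read('dry.wav')            # placeholder: your recording
_, ir = wavfile.read('cave_ir.wav')          # placeholder: a downloaded impulse response

wet = fftconvolve(dry.astype(np.float64), ir.astype(np.float64))
wet /= np.max(np.abs(wet))                   # normalize so the tail doesn't clip

wavfile.write('wet.wav', sr, (wet * 32767).astype(np.int16))
```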

I wasn't joking, thanks brah.

Also ifn you goys hadn't herd:
youtube.com/watch?v=EDjaSMgp9_w

Attached: moo.jpg (1400x958, 483K)

deffo sounds like a wall, good on ya, bass could be more prominent, kinda gets off rhythm towards the end
bass needs more bass nawmsayn

I don't really understand which kind of IR to use, they have 2 channel and quad channel

Attached: 1504821964271.png (609x714, 99K)

>clyp.it/udkxgc4z
horrid
10

got the wrong post bro

:(

but checked

clyp.it/zeh2carw

whoa dude your songs have non-retarded melodies now lmao

Do they? I think I've been getting better, but I've also been on antidepressants for about a week now. I think maybe I should've mixed the drums on this lower though. Or at least have put a low pass filter on it.

>
clyp.it/h30t1qac

Attached: yeezus vs yandhi.jpg (1920x1200, 763K)

yes, you've gotten a lot better. i haven't listened to you in a few weeks. last i heard you were in the range of "uh... not totally tone deaf i guess" to just okay. This one is quite decent.

biggest thing is tightening the decay on the kick and lowering it a bit. same with the hat... or maybe a different sample. it's not terrible but it's among the things that stick out.
I hope the pills work for you bro

fuck you you soundcloud shilling troll ass bitch lmao

which genre is this?
clyp.it/kk5i45il

see

Attached: download (20).jpg (200x162, 8K)

holy fuck you know you have to treat your drum samples right?

i oft forget how low-sentience this general is. quite entertaining desu

clyp.it/t5r0zwku
threw this together today

quirky
masterpiece

Attached: g.gif (300x349, 3.08M)

not him, but what does "treat" mean

>biggest thing is tightening the decay on the kick and lowering it a bit. same with the hat... or maybe a different sample.
Yeah I'll probably mess around with the drums a bit more if I ever do anything with it. And turn up the synth during the chorus. I layered an acoustic kick with an 80s style synthesized kick, trying to go for a grittier feel, but I'm not sure.

You mean process or mix? I didn't really turn them down because I wanted them to be prominent but it doesn't really work, especially with the hihat being that loud. I'll probably re-level everything. And try a lowpass filter or bitcrusher on the drums. Maybe a more sparse hihat.

How's this?
clyp.it/2xpmeqaw

>clyp.it/q425tepj
cozy

guys im all depressed and stuff

whats wrong?

lifes hard i hate life

Sounds like you should be using protools, anon

unrelated but anyone got tips on writing lyrics. everything that i write feels centered around myself. i cannot seem to tap into the millions of other things happening around me to write songs about and thats got me in a terrible rut

you can use 4 channel files aka "true stereo" if your convolver supports it and you want to convolve stereo data with a stereo IR. it means you get IRs for each possible interaction (L-L, L-R, R-L, R-R), which is legitimate for getting the proper interaction of the early reflections of e.g. a concert hall stage into your carefully placed main AB omnis, but for anything else it's mostly just overkill and gets lost in the mix. hell, in most cases you realistically can't even tell if you accidentally made a mono aux send to your stereo reverb because it all gets lost in the mix
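
Spelled out as code, the "true stereo" case is just four convolutions and two sums; a plain stereo IR is the same thing with the cross terms dropped. A sketch assuming the four IR channels and the stereo input are already loaded as arrays:
```python
from scipy.signal import fftconvolve

def true_stereo_convolve(in_l, in_r, ir_ll, ir_lr, ir_rl, ir_rr):
    """Quad-channel ("true stereo") convolution: one IR per input/output pair."""
    out_l = fftconvolve(in_l, ir_ll) + fftconvolve(in_r, ir_rl)
    out_r = fftconvolve(in_l, ir_lr) + fftconvolve(in_r, ir_rr)
    return out_l, out_r
```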

important post : clyp.it/dtachldu

Attached: tyler shore.jpg (2048x1024, 351K)

Pretend you're writing the song for someone else to perform.

How to make those synth stabs like kraftwerk here? at 00:00, 00:32 and 04:59
youtube.com/watch?v=2YXDd_NKhac

Attached: qr1324.gif (307x231, 907K)

short decay on filter, drench in reverb
squareish/not too busy waves preferred obviously

Instead of playing chords on ur synths, tune the oscillators as if you were making a chord, like one at 0 semitones, and one at +7 semitones, etc.

What does that achieve over having only one osc active and playing quantized chords?

Only one copy of the patch, one filter, more or less gives it that "pad feel" vs a chord.... This is most relevant to cliche 'rave stabs', certain kinds of classic pads, the example in ur 2nd timestamp.

I'm not the guy who asked the question, I just don't understand how it can be any different.
When playing chords it's still just one patch and the different voices still go through one filter, one reverb, etc.
The only advantage I can think of is that it allows to set a different timbre for each voice, but I don't think it's what's happening here.

It sounds very different. 5th patches sound different when u tune oscs vs play the patch as a chord.

I should say it also locks u into a pretty bizarro note scheme when ur literally transposing what is basically a 5th. that also has an effect on how u write for it

Anyone?

I'm trying it with Serum and it's exactly the same.
They cancel out perfectly, until I increase the filter drive knob, at which point they do sound very different.
Why is that?
What comes out of the OSC section is exactly the same, so why does the 2-osc one sound brighter and the 1-osc one duller?
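
One plausible reason for that, assuming the drive knob is a nonlinear waveshaping stage: a nonlinearity applied to the sum of two oscillators is not the same as the sum of each one processed alone, so with drive off the two setups can null, and with drive on they can't. A small numpy illustration (tanh stands in for whatever curve Serum actually uses):
```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr
root = np.sin(2 * np.pi * 110 * t)                  # A2
fifth = np.sin(2 * np.pi * 110 * 1.5 * t)           # +7 semitones (a just fifth, for simplicity)

together = np.tanh(2 * (root + fifth))              # both oscs into one drive stage
separate = np.tanh(2 * root) + np.tanh(2 * fifth)   # each driven on its own, then summed

print(np.max(np.abs(together - separate)))          # clearly nonzero: different intermodulation
```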

the overall harmony is so fucked from the non-diatonically transposed chords that you'll have a hard time finding any melodies that make sense

>the 2-osc one sound brighter and the 1-osc sounds duller
Actually nevermind, they do sound very different, but not in this way.

because music doesn't exist in a vacuum, it needs a space with reflections so that the air can vibrate

>autism
>not realizing (good) DAW routing is literally just a digital copy of actual mixing console routing for recording actual instruments

What do you mean by "drum plug-ins"?
Do you mean vsts or audio processing?
Are you talking about synthetic drums, percussion or acoustic drumkits?

Not him but with the technological progress that we've been seeing for the past few decades there's no reason to stick to old hardware-based workflows when the digital world allows for so much more flexibility.

FL Studio does this with the Patcher, which isn't perfect, but is very useful for handling complex routings quickly and intuitively.
DDMF's MetaPlugin is cool too, but quite limited.

Altiverb

>when the digital world allows for so much more flexibility
dude the graph you posted is so autistic it takes like 5 minutes to figure out what the hell is going on
this is a worse workflow overall than any conventional bus routing

>skeuomorphic ux is good

node based structures have become the standard in almost every area of VFX because they allow you to deal with very large complex networks with ease. they encourage experimentation and allow for large changes to be made quickly.

why shouldn't i be able to reroute audio/midi/osc/cv data from any arbitrary point in a chain to anywhere else at a moment's notice and also be able to visualise it easily.

>it takes like 5 minutes to figure out what the hell is going on
for you
it's a matter of getting used to it and obviously you can do simple bussing just as easily as with a "virtual console" approach so there are no downsides at all

>the graph you posted
Read the first two words of my post here >is so autistic it takes like 5 minutes to figure out what the hell is going on
Only because you're not used to seeing a node-based workflow.
I use Max, Blender, and sometimes FL's patcher and everything is super intuitive to me.
Granted, the UI could have some way to differentiate what's what more intuitively without having to read (in Max you can color-code them for example), but even without that, if you've made it yourself you know where everything is automatically.

>this is a worse workflow overall than any conventional bus routing
Only for simple stuff.
As soon as you get a little complex, having everything in one screen with lines that show you the signal paths is priceless.
For example I constantly have to jump back and forth between tracks in Ableton when doing sound design because things are routed through racks, which work great for simple parallel processing, but when you have dozens of tracks feeding into each other it gets annoying to keep track of everything, and if I leave the project for a week and come back it takes me ages to re-memorize where everything goes.
With nodes even opening someone else's patch I can immediately understand what's going on without sifting through tracks for an hour.

As if normal bus routing doesn't encourage "experimentation"
>why shouldn't i be able to reroute audio/midi/osc/cv data from any arbitrary point in a chain to anywhere else at a moment's notice and also be able to visualise it easily
Because while it is maybe nice for midi synth stuff, it is not a viable solution for an actual mixing engineer let alone for recording.
Maybe, but I don't see the point in making things more complicated than they need to be just for the sake of "lol different, progress en shieet" when it doesn't really progress anywhere.
A lot of people (especially audiophiles, but that is a different subject) really have to accept that a lot of things in audio are already as good as they can get.

True. I'll just stick to fewer notes.

>As if normal bus routing doesn't ecourage "experimentation"
Not like nodes.
Not even close.

>Because while it is maybe nice for midi synth stuff, it is not a viable solution for an actual mixing engineer let alone for recording.
If you're working in the box why not?

>I don't see the point in making things more complicated than they need to be just for the sake of "lol different, progress en shieet" that doesn't really progress anywhere.
1- It's not more complicated. If anything it's easier when dealing with big intricate projects. You're just intimidated because you've never used it.
2- Just because you can't see the advantages because you make simple music doesn't mean that it can't be useful for someone else.

>A lot of people (specially audiophiles, but that is a different subject) really have to accept that a lot of things in audio are already as good as they can get.
This has to be bait.

>it is not a viable solution for an actual mixing engineer let alone for recording.
it is not only viable for mix engineers, it would make life much easier for people dealing with large sessions, for the reasons mentioned here. i'm also a maya + houdini user, and once you've experienced node-based routing, the alternatives feel slow, clunky and bad. it's much easier to visualise routing in node graphs than it is in any other format.

also, the graph posted above is extremely simple.

Is Houdini as hard as people say?
And in what way is it harder than the average /3/ program? More stuff to learn, or does it require more "brain power" to understand, so to speak?
I've been thinking of getting into 3D just to make promos for my music (nothing serious or career-oriented) and I'm still considering my program choices.
I'm leaning towards Blender atm because of all the tutorials and free stuff, but I don't really like the keyboard-shortcut-heavy workflow (I'll probably get used to it once I memorize them all), and the stuff people make with it seems generally worse than what people make with other programs, so IDK.

>Is Houdini as hard as people say?
This is a real non ironic tutorial
vimeo.com/221178360

Is it possible to get worse at music over time? I haven't been doing it for very long, around 4 years. I didn't really care about it until around 2018 and in 2018, I tried to make as much stuff as possible. I ended up with around 200 tracks (only like 20 are even passably ok) and I saw that as a sort of "practice".

Now though, while I've improved in the technical sense, I feel really shitty about less-technical parts of music. I wanted to make as much as possible but it genuinely feels like the ideas I'm having now just suck compared to the ones I used to have. I'm having to force myself to keep making shit pretty much all the time. I wouldn't quit but I haven't taken any real breaks either and I'm not sure what to do.

Am I overthinking this? Or is it possible to genuinely get worse at music? I would post examples but it's kind of a really broad thing that isn't well exemplified by specific songs.

if you're just starting out i wouldn't recommend houdini because it can feel very alien, even to people with years of experience in other DCCs. basic stuff like modelling, retopo and UV'ing is, let's just say, 'interesting' compared to other programs. there are things you can do in houdini that you can't do anywhere else, and then there's all the other stuff, which you probably want to do anywhere except houdini.

blender is absolutely fine and works as a good base. once you have your fundamentals down, moving between DCCs (with the exception of houdini) is largely trivial. they all do certain things better than each other, but most of them do all of the same stuff.

houdini is something you should look into 6-12 months down the line, but only as a secondary program to start with.

that dude is so wild.

Yes you can, because even when you gain information, if you change your mindset to one that isn't beneficial, you're going to have the wrong approach to things and do them worse.
In this case you're probably suffering from the extremely common problem of overthinking things and doing art with your conscious brain, while as a beginner you just did things subconsciously, which is much better for creative endeavors because it lets ideas flow out without you having to manually force them out.
So the solution is to do non-technical things away from the computer (use an instrument or your voice and record everything) and squeeze every creative idea out of your brain without thinking or stressing about the details (whatever comes out comes out, if it sucks you can just delete it and forget about it).
Then bring them into the DAW and use your conscious brain to do the technical things that you need to turn it into an actual song.

This works wonders for me, but everyone's different so it might not work well for you.
Try all the different workflows you can think of, and I guarantee you'll find something that works well, because even if your problem isn't what I assumed above, by changing your approach you change the way your brain does things, so you might find an unexpected trick or two that solve your problem.

Thank you very much. Very helpful.

So you use Maya as your main, and Houdini for those things that only Houdini can do, correct?

That sounds exactly like the problem, thanks. I'll try your suggestions.

mostly.
maya for modelling and some animation.
zbrush for sculpting.
substance painter for shading/texturing.
substance designer for material authoring.
houdini for procedural tasks, tool building, dynamics etc.
rendering gets split between maya and houdini depending on the task.

all the stuff in the middle you'll probably have to learn btw, so leave houdini till the end. if anything, most people don't really need it and can get similar stuff done in c4d much faster.

I see, thank you very much.
I'm gonna start with Blender and decide if I need to change down the line as I get more knowledgeable and see what I need.

youtube.com/watch?v=mVndQfTQP0c

What about this gypsy trap?

get yourself a cgpeers account so you can pick up tutorials, textures etc. easily. registrations open on the 1st and 15th of every month. everything's freeleech all the time as well so no ratio worries.

Cool, thanks.

What you want to make:

youtu.be/FgWsopJVlg4?t=249

Listen to the next minute or so, this is what you want to make

Open in a separate tab, it starts at 4:09

We haven't forgotten about you.

Attached: prod stab tier list.png (854x1178, 186K)

Sounds nice and crunchy on my phone speaker

2.5/10 overall

unlistenable trash
excuse me while i go to a silent place to cleanse myself of this shit by listening to my tinnitus for a few hours

some nigga recommended Great Songwriting Techniques earlier and I'm a few chapters in, likin it so far

U got a link famalama

...

Hello it says this is for porn from dolphins? I was looking a book on the songwriting techniques Thanksgiving

Dolphin porn is much better than books.

youtube.com/watch?v=41TT1lPAo8s

Unreal.

Attached: DwDnBbfWsAEK3w4.jpg (913x1024, 107K)

d'aww

youtu.be/CyW5O1xSk40?t=842
Based

>dude are you a welder?
what country is that from?

>hi, please send stems of all the tracks
>are you sure you want all of them and not just busses?
>yes
>okay
>
>hi, please send over the busses

How many tracks are we talking, large male

150ish (after some grouping)

clyp.it/fm22eyya

explain

it's one of my best works to date

You lack so much of the knowledge required to understand any kind of such explanation that any attempt would be a waste of time.
And I'm not in the mood for 15 paragraphs of /x/-tier ramblings, so goodbye.

Attached: Renoise3_2.png (622x350, 280K)

>it's one of my best works to date
please explain whats good about it

Jesus, is that violin sampled?

youtube.com/watch?v=my9LM3K9K5c

Most interesting parts:

4:40, 9:00, 13:00, 14:00

It's actually straight ripping/prog wankery, plus compressor attack and compression amount that increase the higher the signal gets (lows with high gain and less range, very highs with more range and less gain, etc.), with the threshold decreasing as the signal gets higher. On top of that there's a regular single-band compressor (Waves' CLA-76 vintage emulation) on the master bus, along with some transient shaping, a 31-band graphic EQ and LoAir (sub harmonization), all paired with two synths playing the same MIDI track but with completely different automation curves. I finally learned how a compressor works, and this might be the first track where I've applied it correctly lol.
Used synths: some random ten-year-old free stuff

I could have corrected anything that is wrong with this but it kind of adds a little bit of something interesting to the whole thing.
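
not defending the track, but since the compressor talk keeps coming up, here's a bare-bones feed-forward compressor sketch in Python (needs numpy) showing what threshold/ratio/attack/release actually do to a signal. it's a toy written for illustration, not a model of the CLA-76 or of whatever chain is described above.

import numpy as np

def compress(x, sr, threshold_db=-18.0, ratio=4.0, attack_ms=10.0, release_ms=100.0):
    # one-pole smoothing coefficients derived from the attack/release times
    att = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = 0.0
    out = np.zeros_like(x)
    for i, s in enumerate(x):
        level = abs(s)
        # envelope follower: reacts fast when the signal rises, slow when it falls
        coeff = att if level > env else rel
        env = coeff * env + (1.0 - coeff) * level
        level_db = 20.0 * np.log10(max(env, 1e-9))
        # gain computer: above threshold, pull back the overshoot by (1 - 1/ratio)
        over = max(level_db - threshold_db, 0.0)
        gain_db = -over * (1.0 - 1.0 / ratio)
        out[i] = s * 10.0 ** (gain_db / 20.0)
    return out

sr = 44100
t = np.arange(sr) / sr
sig = np.sin(2 * np.pi * 110 * t) * np.where(t < 0.5, 0.1, 0.9)   # quiet half, loud half
squashed = compress(sig, sr)
print(np.abs(sig).max(), np.abs(squashed).max())   # the loud half gets pulled down

if I'm reading the post right, the "threshold that moves with the signal" and per-band attack stuff would just be these same parameters varying per band and over time, which is easy to bolt on once the basic gain computer makes sense.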

The number one rule...
Let me say that again...
The number ONE rule of making music, and art in general, is that if you have to explain why something is good, then it's not.
Anything you have to justify because the audience would otherwise think it's bad, is bad.

Unless you put a recording of you explaining all that before the song, the listener isn't going to know any of it and will just hear garbage without enjoying any of it.

In this particular case though, knowing the explanation makes it so much worse because of how retarded it is.

GUYS I MADE MY OWN SCALE

I took the important, unique part of the harmonic minor, which is the 5 6 7 8 interval, and repeated it.
In the root of A it's:
A C Db E F G# A
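
for what it's worth, that pitch set (A C Db E F G#) is what you get by stacking a repeating 3-1 semitone pattern from the root, which is usually called the augmented or hexatonic scale, so it's not new, but it's a good one. quick Python sanity check below; the note spelling is mine (Ab instead of your G#, same key) and nothing here comes from any library.

NOTE_NAMES = ["A", "Bb", "B", "C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab"]

def build_scale(root_midi, pattern, length=7):
    # stack the semitone pattern on top of the root until we have enough notes
    notes = [root_midi]
    while len(notes) < length:
        notes.append(notes[-1] + pattern[(len(notes) - 1) % len(pattern)])
    return notes

scale = build_scale(57, [3, 1])                       # 57 = A3 in MIDI
print([NOTE_NAMES[(n - 57) % 12] for n in scale])     # ['A', 'C', 'Db', 'E', 'F', 'Ab', 'A']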

i distribute all my music with my phd thesis attached to it.

Attached: DcrbpfhVwAIKFQe.jpg (680x788, 103K)

It isn't hard to make youre are own scale bro

scales-chords.com/fscale_res_en.php?rn1=A&rn2=C&rn3=C#/Db&rn4=E&rn5=F&rn6=G#/Ab&rn7=&rn8=&rn9=&normal=1&greek=1&greekalt=1&other=1&etnic=1&c1=&t1=&c2=&t2=&c3=&t3=

I'm one of the most intelligent ENTITIES in the universe, of course I will understand what you want to express.

It has everything I like about music: it changes constantly and doesn't sound like anything I've heard before.

>doesn't sound like anything I've heard before
Maybe there's a reason.

You mixed things up. What you mean is the case where I already know what you see as good beforehand, you explain it, and I say "I know, it doesn't impress me", like presenting a cheap gimmick which anyone gets/understands and is so obvious that normally it wouldn't even be necessary to mention it.

What I have done is explain technical details which correlate with the character of the sound, which is something completely different. Also, what you mean leaves out something that is self-evident for any piece of music being released: other people's thoughts about the piece. As I'm the first to listen to it, I feel obliged to explain whatever I can about it. That doesn't influence the quality of the piece at all; it just helps to understand how the artist perceives his art and what he likes about it.

For instance, unlike other music being put out, I see mine as inspiration that could be incorporated into any style, since it's barely coherent (sometimes a lot) and very diverse. It's really just a one-track improvisation with no correction at all, and also a kind of mastering exercise with consciously kept imperfections: anybody can master and finalize a track by following instructions correctly, but not everyone has the "balls" to put out something commonly seen as imperfect in order to provoke attempts at seeing these imperfections as a medium of communication, just as simple jingles are.

tldr the explanation is subordinate to the quality and can't change anything good about a piece after the fact; the quality defines what can be said or explained about a piece

ok

youtube.com/watch?v=BpO7FD1pkHs

Don't really have anything specific I'm hoping for criticism on with this track, but it's new and any feedback is appreciated; just looking to get more opinions on it. Also I will comment on someone else's song, pls give me a few minutes.

Also, there are things/phenomena in the posted pieces that I don't directly understand but would notice as something extraordinary until I could find an explanation; there are some very weird but oddly pleasing sounds and sequences present.

Interesting my friend. I am following youre progress

>youtube shilling
great

i had a strong suspicion earlier that some posts were made by >(You) because of how stupid and grammatically fucked they were...

I can't tell if it's assuring or worrying to see you type like a normal person after I called your shit out as an act. Regardless, fuck off

i ordered a midi keyboard but it has weird dirty-looking shit in some spots, like it's used. is this a thing that happens?

Yeah I mean why should I care^^ I'm happy about it and want to share with others. Anyway, it was in a rather educational context and that's why I explained exactly what I did; music is not my main thing, so I'm posting casually without any particular intention besides education. My grammar will be sketchy because I'm mostly concentrating on a couple of things simultaneously while writing something down here, and as it's one of the least important tasks... (rhetorical ellipsis)

one of the tracks was mixed on laptop speakers and some earplugs with missing rubbers I found in the park btw lol

How can I make this sound fuller?
Any other criticism is very welcome!

clyp.it/1mst32mr

Because digital synths are designed with analog signal paths in mind, when you have 3 oscillators tuned to a chord, all 3 oscillators are going through the filter. These notes carry amplitude and harmonics, and have a phase relationship that exists *before* the filter, so when you drive it, all of those variables are altering and influencing each other as the drive knob turns. Conversely, when you play a chord on the keyboard with your hands, you may have it set up so that there is only one oscillator, which hits the filter with significantly less information than the previous, "pre-tuned" chord. You will still hear the three notes because that is what you played on the keyboard, but they are summed *after* the filter.

Pre-tuned:
>osc 1, 2, 3 combine, forming a unique waveform that hits the filter; after going through the filter, it proceeds through the amp envelope and out to your ears

Regular chord playing by hand:
>the single oscillator hits the filter for note 1; the same oscillator goes through the filter again for note 2 and is added to the signal *post filter*; repeat for as many notes as are in the chord, then all the notes go through the amp envelope and out to your ears
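
if anyone wants to convince themselves the two paths actually differ, here's a quick numpy sketch. tanh() is standing in for "filter drive" here, which is an assumption on my part rather than any particular synth's circuit, but any nonlinear stage behaves the same way: processing the summed chord is not the same as processing each note and summing afterwards.

import numpy as np

sr = 44100
t = np.arange(sr) / sr
freqs = [220.0, 277.18, 329.63]                       # roughly an A major triad
notes = [np.sin(2 * np.pi * f * t) for f in freqs]

drive = 4.0
pre_summed  = np.tanh(drive * sum(notes)) / 3.0                 # chord hits the drive as one waveform
post_summed = sum(np.tanh(drive * n) for n in notes) / 3.0      # each note driven on its own, summed after

print("max difference:", np.abs(pre_summed - post_summed).max())  # clearly non-zero

swap the tanh for a plain linear gain and the difference collapses to (numerically) zero.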

a bassline would help a lot
also you have a lot of annoying out of phase components going in your mix
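
not the person who made the track, but if you want to check the out-of-phase thing with numbers instead of ears, a correlation measurement is a few lines of Python (needs numpy). this is a homemade toy, not any particular metering plugin: values near +1 mean the channels move together, values near 0 or negative mean a lot of out-of-phase energy that can vanish when summed to mono.

import numpy as np

def correlation(left, right):
    # normalised cross-correlation at lag zero, i.e. roughly what a phase/correlation meter shows
    den = np.sqrt((left ** 2).sum() * (right ** 2).sum())
    return float((left * right).sum() / den) if den > 0 else 0.0

sr = 44100
t = np.arange(sr) / sr
sig = np.sin(2 * np.pi * 220 * t)
print(correlation(sig, sig))    # ~ +1.0: dead mono, fully in phase
print(correlation(sig, -sig))   # ~ -1.0: fully out of phase, disappears in mono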

So there's a separate instance of the filter for each voice that processes it separately?
Wouldn't the opposite make more sense?
To have a separate instance for each oscillator instead?

yes is that good or bad

That'll depend on the synth in question.
Most make their signal path fairly apparent; if it isn't, check the manual for a diagram or ask the developer.

Probably not, and I probably fucked up somewhere in there with the exact explanation of what's going on, but I do know for sure that 3 oscillators being combined and driven at the filter stage will be a different signal than one oscillator being repeated across different notes. The way phase cancellation/combination works, adding harmonics before the filter changes the oscillators' phase relationships to each other in subtle ways. And adding drive to three sounds is going to hit the filter harder than adding drive to just one.

Not him but I think he means that it sounds very good and was impressed with the possibility of it being played live by you.

oh no i can't play violin i just sampled it

yes you're supposed to lick it

I don't know what it is; you can clean it with a toothpick, Q-tips and disinfecting alcohol

first post is not mine but should be obvious^^

I see, I'm going to try it with a bunch of synths later.

I think you're right, because now that I'm maxing the volume of both synths I notice they're not even cancelling out perfectly with the drive at zero. So yeah, there is a difference, albeit an imperceptible one, and I guess this difference gets "amplified" by the other parameters like the drive; I'd imagine that on other, less clean synths the difference could be much bigger.
Correct?
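
that check is basically a null test, and you can quantify it instead of eyeballing meters: render both synths, flip the polarity of one, sum, and measure what's left. rough Python sketch below; it assumes two mono WAV renders of the same length and sample rate, uses numpy and scipy, and the filenames are placeholders I made up, not anything from this thread.

import numpy as np
from scipy.io import wavfile

sr_a, a = wavfile.read("synth_a_render.wav")   # placeholder filenames
sr_b, b = wavfile.read("synth_b_render.wav")
assert sr_a == sr_b and a.shape == b.shape, "renders must match in rate and length"

a = a.astype(np.float64)
b = b.astype(np.float64)
residual = a - b                               # same as flipping one file's polarity and summing

peak = max(np.abs(a).max(), np.abs(b).max())
res_db = 20 * np.log10(np.abs(residual).max() / peak) if peak > 0 else -np.inf
print(f"residual peak: {res_db:.1f} dB relative to the louder render")

as a rough rule of thumb, a residual way down around -60 dB or lower means the two paths are effectively identical; anything much hotter than that and they genuinely differ.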

I almost completely abandoned playing by scales. It can give you an ego boost about your own music because it sounds like it makes sense in a classical way, but when trying to compose more complex stuff it can limit you... it makes me understand that saying that goes something like: in order to become a master, one needs to learn the theory and then screw it all^^

compression and harmonizing

When does one choose to do it with midi chords vs with tuned oscillators?

pg 9 baka make a new one

why not both

why not none?

Bump limit reached.
New thread?

>been wanting to buy a mic for recording vocals/samples/acoustic instruments for 4 months
>still haven't pulled the trigger on any despite having enough money
someone end my autism please

-~-~-~-~ NEW THREAD -~-~-~-~-~-
-~-~-~-~ NEW THREAD -~-~-~-~-~-
-~-~-~-~ NEW THREAD -~-~-~-~-~-
-~-~-~-~ NEW THREAD -~-~-~-~-~-