How come CR cuts the audio to just 15khz?

B-Global sounds so much better, almost like BD

Attached: 1647124647060.png (399x458, 122.41K)

compression, it's not like plebs will notice in their seasonal anime

B-global looks a lot better too these days, which is extra embarrassing since the files are 30% smaller

What is this?

i only noticed when i got the b-global version by accident, CR sounds muddy by comparison

That has fuck all to do with the frequency range and more to do with the fact that B-global's volume is higher. You can use the volume on your fucking computer if you'd like. And B-global has mono audio.

Attached: daliv1.png (624x776, 373.24K)

no, volume adjusted, e.g. Shijou Saikyou no Daimaou, the two versions sound almost completely different

spek.cc/

waveform or gtfo

What's B-Global?

Bilibili almost everywhere except china, canada, and the us

just download both and listen

Ah yes I love hearing 15Khz. Fucking audiophools
youtube.com/watch?v=ITyKEf4bu0I

No.

>mono audio
Ew.
Directional sound is not often used in anime, but when it is you definitely want to hear it.
That's a deal breaker.

>AAC sampled at 48k (should handle up to 24k freqs)
>Frequencies cut off at 15k-18k
Was the audio reencoded, and if so from what?

could it be that the ripper's program has bugs?

I can't hear the difference over my tinnitus

Doesn't look like a bug; it looks like the audio got intentionally lowpass-filtered or bandwidth-limited at some point and then reencoded to AAC.

> pretending you can hear -80dB or -100dB noises, even if they were not masked by an important signal
audiocheck.net/audiotests_dynamiccheck.php
Unless you have a giant home theater system set at rock concert loudness level, you won't hear even the perfectly clean test for those. However, you are already insane if you are watching anime like that.
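
If you want to check that claim yourself instead of trusting a website, here's a minimal sketch (Python, numpy and scipy assumed, filename made up) that writes 5 seconds of a 1 kHz tone at -80 dBFS; play it at the volume you actually watch anime at and see if you hear anything at all.

# 1 kHz sine at -80 dBFS, written out as a float WAV
import numpy as np
from scipy.io import wavfile

rate = 48000
t = np.arange(rate * 5) / rate                      # 5 seconds
amplitude = 10 ** (-80 / 20)                        # -80 dB relative to full scale (1.0)
tone = (amplitude * np.sin(2 * np.pi * 1000 * t)).astype(np.float32)
wavfile.write("tone_minus80dBFS.wav", rate, tone)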

As for frequency clamping, keep in mind that most multimedia compression algorithms were made by scientists hired by global broadcasting and multimedia companies. They didn't care about digital snake oil; they needed solutions that could be used in really expensive and often non-upgradeable hardware setups, and had enough funds for mass testing. Work on MPEG-1 happened in the early nineties; that's 30 years of consensus that storing everything up to 22 kHz is a waste of bits that worsens the valuable parts of the audio in most test cases.

audiocheck.net/audiotests_frequencycheckhigh.php
My limit is at 17 kHz.

Let me guess, you NEED more?

Audiophiles I swear. How do you even discover this in the first place?

what about audio in manga rips?

They're right to do this honestly, because normalfags will never care about this, so who gives a shit. Anyone who cares about archiving series they like will be downloading BD rips anyway so I don't even see why I should give a shit. Let plebs be plebs, I am secure that my data is safe and backed up.

I could hear the -80dB reasonably well for the dynamic, and for the freq check I could hear it at 21kHz, lost it, then heard it again at 18kHz.
Did some of my own testing on a few raws; I think this might be freq-limited at the source, since none of them show anything over 18kHz. Regardless, Ohys and B-Global look fine, whereas CR and Erai look like shit. My guess is a lower sampling rate, along with who knows what option they have enabled that fucks the high frequencies. But who fucking knows, I'm drunk.
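
If anyone wants to repeat that kind of check without spek, this is roughly the idea (a sketch only; the filename is made up and numpy/scipy are assumed): estimate where the spectrum drops off relative to its loudest band.

# crude cutoff estimate: highest frequency still within 60 dB of the peak band
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

rate, data = wavfile.read("cr_episode_audio.wav")    # e.g. audio extracted from the release
if data.ndim > 1:
    data = data.mean(axis=1)                         # fold to mono, good enough here

freqs, power = welch(data.astype(np.float64), fs=rate, nperseg=8192)
power_db = 10 * np.log10(power + 1e-20)
cutoff = freqs[power_db > power_db.max() - 60].max() # the -60 dB threshold is arbitrary
print(f"content drops off above ~{cutoff / 1000:.1f} kHz")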

Attached: results.png (1246x939, 533.1K)

This is the kind of encoder autism I miss on current day Yea Forums. That said, it's probably Daiz's fault that your audio is so shitty.

Well desu this is more of a /g/ topic (mostly headphone or media player related shitposting), and outside of blatant shitposting you don't see much crossposting anymore. I fucking love signal processing and encoding shit but practical applications never get brought up.

how retarded are you?
the AAC encoder uses a low-pass filter internally for compression, how do you not know that?

>I could hear it at 21kHz
humans can't hear 21kHz, your headphones must have bugged out

>30 years of consensus that storing everything up to 22 kHz is a waste of bits
guess what, in 30 years technology advanced so we can afford more bits and the compromise is utterly pointless

from the raw WAVs probably

No, I'm drunk and have no familiarity with AAC. This is armchair signal processing coming from someone who took DSP classes in college 4 years ago.
This user though might be. Human hearing maxes out at around 22kHz; why the fuck do you think most formats sample at twice that frequency?
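
For anyone lost, the arithmetic behind that is just the Nyquist limit: a given sample rate can only represent frequencies below half of it.

# Nyquist: the highest representable frequency is half the sample rate
for fs in (44100, 48000):
    print(f"{fs} Hz sampling -> content up to {fs // 2} Hz")
# 44100 -> 22050 Hz, 48000 -> 24000 Hz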

>I fucking love signal processing and encoding shit but practical applications never get brought up.
Video and audio have it good in that regard. People are actually adopting AVC, HEVC, AV1, AAC, Opus, and will probably adopt the next innovations that come too.
Meanwhile the world of images is still stuck on MOTHERFUCKING JPEG PIECE OF NIGGER SHIT.

It's sad how they don't realize it. I feel like I'm reading a message in a bottle from an earlier time when I come across this shit.

It's amazing that Opus isn't the standard lossy codec yet. People are dumb.

Compression saves bandwidth costs. Haven't you seen how they bit starve video as well?

>"smart" TVs are dumb
FTFY
can't use something your customers won't be able to play

> I could hear the -80dB reasonably well for the dynamic
But are you really going to listen to the rest of the (loud) audio at that same volume level that allows you to discern -80dB?

> I could hear it at 21kHz, lost it, then heard it again at 18kHz
Congratulations, that probably means that something in your software or hardware does a low-quality resample which creates interference on lower, audible frequencies. In other words, you are not hearing the difference the presence of the original high frequencies makes, you are hearing the noise your specific setup generates on other frequencies when it mishandles high-frequency data. Because such problems were common in the early digital era, when chips didn't have the extra resolution and extra transistors for proper resampling, and because the analog circuits before that simply could not produce a perfectly square bandpass anyway (both when recording and when playing), most audio traditionally has a linear or logarithmic roll-off to zero in the high frequencies to minimize possible distortions (and something like an acoustic recording might not have anything up there in the first place).
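
A toy illustration of that failure mode (made-up numbers in Python, nothing to do with anyone's actual setup): decimating a 21 kHz tone without an anti-alias filter folds it down to 11 kHz, squarely into the audible range, while a proper resampler just removes it.

# 21 kHz tone at 96 kHz, decimated to 32 kHz with and without an anti-alias filter
import numpy as np
from scipy.signal import resample_poly

fs = 96000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 21000 * t)           # 1 second of a full-scale 21 kHz sine

naive = tone[::3]                               # crude decimation, no filtering
proper = resample_poly(tone, up=1, down=3)      # filtered decimation

def level_at(x, rate, freq):
    spectrum = 2 * np.abs(np.fft.rfft(x)) / len(x)
    bins = np.fft.rfftfreq(len(x), d=1 / rate)
    return 20 * np.log10(spectrum[np.abs(bins - freq).argmin()] + 1e-20)

print("11 kHz alias, naive decimation: %.1f dBFS" % level_at(naive, 32000, 11000))   # ~0 dBFS, clearly audible
print("11 kHz alias, proper resampler: %.1f dBFS" % level_at(proper, 32000, 11000))  # heavily attenuated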

> I think this might be freq-limited at the source
If you had ever tried to use any audio encoder, you would know that none of them keep all 22kHz at economically viable non-placebo bitrates. Something really abnormal would have to happen to make people use a non-standard combination.

>sounds so much better, almost like BD
I love how you act as if that means anything, especially for a series you haven't even had a chance to listen to the BDs of

>aspiring audiophile
>has no fucking clue about even the very basics of audio encoding
lmao

>aspiring audiophile
>sub 48kHz sampling rate
???

> in 30 years technology advanced
Yes, but you have no idea how much. 30 years ago you could have a DAC that handles 44kHz 16-bit stereo (that's actually more than enough for humans) in consumer hardware. Now you can have a dozen DACs that handle much more on a dime. Those bogus 96kHz, 192kHz modes don't really mean anything because the internal frequency is much higher anyway. Compared to what a regular network card or DVI/HDMI/DP/USB3/etc. controller does without you noticing it, mere kilohertz signal processing is a piece of cake.

But I was not talking about that, I was talking about lossy compression. You don't have “bits to spare”, you have some bitrate, which might be static or floating, and you want that amount of bits to generate the result most similar to the original for a human. If you can increase the bitrate any way you want, you don't need any compression at all; just send the original audio. On the contrary, when you need compression, you most likely need the result to be significantly smaller than the original (roughly 10×). The way you do this is by inventing smarter and more complex techniques: some primitive ADPCM-like algorithm that makes audio 8× smaller sounds way more horrible than a modern compressor despite the same resulting bitrate. The principle of all smart algorithms is focusing more on what you notice more and less on what you notice less. Not spending bits on something most, if not all, people won't hear is a straightforward idea. If you spend bits on high frequencies you can't hear just because you feel like it, fewer bits are left for the rest of the audio that you can hear, and therefore what you do hear will get worse. This change is unnoticeable on 200-300 kbps lossy encodings because they are, on average, way above the threshold at which anyone can notice any difference, and lots of extra bitrate can be spent on useless details without any consequences; that's why people think saving all 22kHz is “free”.
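
To put rough numbers on that "roughly 10×", taking CD-style PCM as the baseline:

# uncompressed CD-quality PCM vs. a ~10x lossy target
sample_rate, bit_depth, channels = 44100, 16, 2
pcm_kbps = sample_rate * bit_depth * channels / 1000                  # 1411.2 kbps
print(pcm_kbps, "kbps PCM ->", round(pcm_kbps / 10), "kbps at ~10x")  # ~141 kbps, i.e. the usual 128-160 range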

>You don't have “bits to spare”, you have some bitrate, which might be static or floating, and you want that amount of bits to generate the result most similar to the original for the human
My point was that that bitrate is much bigger than it was 30 years ago

Most AAC encoders (including FFmpeg's, which is likely what CR are using) use a low-pass filter for encoding; Apple's AAC encoder IIRC doesn't, at least at high bitrates.
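
If you want to see that in action, something like this should work (filenames are hypothetical, ffmpeg is assumed to be on PATH, and -cutoff is the option that should override the native aac encoder's default low-pass): encode the same source twice, then render spectrograms of both and compare.

# encode with the default (dynamic) cutoff and with the cutoff forced to 20 kHz,
# then make spectrogram images of both outputs
import subprocess

src = "source_audio.wav"   # e.g. lossless audio demuxed from a BD remux

subprocess.run(["ffmpeg", "-y", "-i", src, "-c:a", "aac", "-b:a", "128k",
                "default_cutoff.m4a"], check=True)
subprocess.run(["ffmpeg", "-y", "-i", src, "-c:a", "aac", "-b:a", "128k",
                "-cutoff", "20000", "forced_cutoff.m4a"], check=True)

for name in ("default_cutoff", "forced_cutoff"):
    subprocess.run(["ffmpeg", "-y", "-i", name + ".m4a",
                    "-lavfi", "showspectrumpic=s=1920x1080", name + ".png"],
                   check=True)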

Isn't ffmpeg's encoder dogshit?
Why the fuck wouldn't an enterprise like crunchy buy a Fraunhofer license?

License cartels.

Because anyone who gives a shit will just pirate BDRips anyway. You don't watch CrunchyRoll for encoding quality; in fact they've even had controversy in the past over reducing the bitrate on their video encodes.

Very questionable statement.

From the hardware perspective, the opposite is true: old devices were slower, so they had to use less complex algorithms, which in turn needed more bitrate to sound decent. There were MPEG Layer I, MPEG Layer II, and MPEG Layer III for that reason, in order of increasing compression, complexity, and processing power. MPEG Layer III (aka MP3) decoding overloaded even 486 CPUs, and using it for transmission (live encoding on one side, live decoding on the other) at the time would have required special hardware codecs anyway. The DVB and VideoCD standards used the earlier and less complex Layer II, and Layer III hardware decoders (in the form of portable music players) appeared half a decade after its public release.

Also, on the scale of companies like YouTube nothing is free. They have to buy crates of HDDs just to hold each day's new uploads and replace the faulty ones. They won't send an additional pickup truck full of cash to buy more storage and bandwidth just because you feel that bitrates should grow (until they all achieve some kind of singularity). The economically viable bitrate for lossy audio compression should be as high as needed, but no higher than that. The goal of newer codecs is to do more in the same or smaller space, not to inflate the requirements. Everyone would simply use 25-year-old MP3 with more bitrate if that were the case.

The audio is way too easy for modern hardware; video is where you can still notice how new codecs targeting the next decade(s) have enormous requirements compared to the previous generation.

I'll admit that I was pulling educated guesses out of my ass, as I don't know shit about the history.
Are you saying that 30 years ago the standard audio bitrate was already 96-128kbps? That sounds weird, because video bitrate absolutely exploded over time with the advent of HD and 4K and internet connections that could stream them.

30 years ago there was practically no audio or video media on the internet.
TVs were analogue and CDs used uncompressed PCM.

Hearing the difference now isn’t the reason to encode to FLAC. FLAC uses lossless compression, while MP3 is ‘lossy’. What this means is that for each year the MP3 sits on your hard drive, it will lose roughly 12kbps, assuming you have SATA; it’s about 15kbps on IDE, but only 7kbps on SCSI, due to rotational velocidensity. You don’t want to know how much worse it is on CD-ROM or other optical media.

I started collecting MP3s in about 2001, and if I try to play any of the tracks I downloaded back then, even the stuff I grabbed at 320kbps, they just sound like crap. The bass is terrible, the midrange…well don’t get me started. Some of those albums have degraded down to 32 or even 16kbps. FLAC rips from the same period still sound great, even if they weren’t stored correctly, in a cool, dry place. Seriously, stick to FLAC, you may not be able to hear the difference now, but in a year or two, you’ll be glad you did.

>Bilibili almost everywhere except china
you lost me

I just want to watch some anime, guys...

Attached: 1576130854696.jpg (292x493, 37.91K)

Interesting. So the portion of audio in a multimedia stream is rapidly decreasing? Was it as big as 50/50 at some point?

B-global is the brand bilibili uses to simulcast anime subtitled in English and some other languages around the globe
It's not available for China because they have bilibili proper

The problem with B-global is the subs though, isn't it?

People who get aneurysms when they see burger localization might prefer them.
Those who don't can always just get the subs from subsplease or whatever.

Attached: Audio.png (1280x720, 616.43K)

30 years ago only industry professionals would understand what “audio bitrate” was. Regular people used whatever was inside their system for storage, transmission, or reception. Some people could use home computers or high end consumer devices to make music and work with samples, uncompressed or lightly compressed. “Songs” meant MIDI files or tracker modules. CDs could only be ripped by grabbing the analog output, because that's how CD-ROMs played audio CDs. The computer simply passed commands to play, stop, or change track; the ability to read raw audio data came in later generations (a standard way to support it appeared only in Windows 98). Where would you store all those dozens of megabytes anyway?

In the late '90s, 128 kbps MP3s were often described as “(near) CD quality” in software manuals and the press, 64 kbps was the space-efficient choice for sharing (just 10 minutes to download one song!), and cutting the sampling rate and switching to mono was used to get some tolerable audio at an even lower bitrate if you only had 10 MB of free hosting space and decided to share your favorite shonen attack sounds. You can probably find some old settings page in Windows Media Player that says something like that about WMA encoding even today.

Well, in most cases 128 kbps MP3s had noticeable artifacts compared to CD audio, so pirates switched to 160 and 192 kbps as “high quality”. However, the former were still frequent on music services and on the internet in general. Since then, 100-200 kbps has been the sweet spot for “almost CD quality for most people”, and lower bitrates centered around 50 kbps are used for live transmissions and communication on mobile (though voice-only can go much lower, and there are a lot of dedicated algorithms for that). Codecs that came and went after MP3 increased quality, so “almost CD quality for most people” is now closer to 128 than 192 kbps (with Opus); see, for example,
hydrogenaud.io/index.php?topic=120166.0

The human eye cannot see more than 15khz