
However, the math of the Fourier transform assumes that the signal being Fourier transformed is periodic over the time span in question.

This mismatch between the Fourier assumption of periodicity and the real-world fact that audio signals are generally non-periodic leads to errors in the transform.

These errors are called "spectral leakage", and generally manifest as a wrongful distribution of energy across the power spectrum of the signal.

Notice the distribution of energy above the -60 dB line, and the three distinct peaks at roughly 440 Hz, 880 Hz, and 1320 Hz. This particular distribution of energy contains "spectral leakage" errors.

To mitigate the "spectral leakage" errors somewhat, you can pre-multiply the signal by a window function designed specifically for that purpose, such as the Hann window function.

The plot below shows the Hann window function in the time-domain. Notice how the tails of the function go smoothly to zero, while the center portion of the function tends smoothly towards the value 1.

Now let's apply the Hann window to the guitar's audio data, and then FFT the resulting signal.
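A minimal sketch of that step, assuming the guitar samples are already in a 1-D numpy array named signal (a hypothetical name) sampled at 44.1 kHz:

import numpy as np

fs = 44100
window = np.hanning(len(signal))          # Hann window: tapers smoothly to 0 at both ends
windowed = signal * window                # pre-multiply before the FFT
spectrum = np.fft.rfft(windowed)          # FFT of the windowed signal
freqs = np.fft.rfftfreq(len(signal), d=1.0/fs)
power_db = 20 * np.log10(np.abs(spectrum) / np.abs(spectrum).max())  # dB relative to peak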

The plot below shows a closeup of the power spectrum of the same signal (an acoustic guitar playing the A4 note), but this time the signal was pre-multiplied by the Hann window function prior to the FFT.

Notice how the distribution of energy above the -60 dB line has changed significantly, and how the three distinct peaks have changed shape and height. This particular distribution of spectral energy contains fewer "spectral leakage" errors.

The acoustic guitar's A4 note used for this analysis was sampled at 44.1 kHz with a high-quality microphone under studio conditions; it contains essentially zero background noise, no other instruments or voices, and no post-processing.

Real audio signal data, Hann window function, plots, FFT, and spectral analysis were done here:

Why do I need to apply a window function to samples when building a po...

audio signal-processing fft spectrum window-functions

Ideally a Discrete Fourier Transform (DFT) is purely a rotation, in that it returns the same vector in a different coordinate system (i.e., it describes the same signal in terms of frequencies instead of in terms of sound volumes at sampling times). However, the way the DFT is usually implemented as a Fast Fourier Transform (FFT), the values are added together in various ways that require multiplying by 1/N to keep the scale unchanged.

Often, these multiplications are omitted from the FFT to save computing time and because many applications are unconcerned with scale changes. The resulting FFT data still contains the desired data and relationships regardless of scale, so omitting the multiplications does not cause any problems. Additionally, the correcting multiplications can sometimes be combined with other operations in an application, so there is no point in performing them separately. (E.g., if an application performs an FFT, does some manipulations, and performs an inverse FFT, then the combined multiplications can be performed once during the process instead of once in the FFT and once in the inverse FFT.)
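As a small illustration of where the scaling lives, numpy's FFT omits the 1/N factor on the forward transform and applies it once in the inverse (a common convention, not the only one):

import numpy as np

x = np.random.randn(8)
X = np.fft.fft(x)                  # plain sums, no 1/N: X[0] is the sum, not the mean
print(np.allclose(X[0], x.sum()))  # True
x_back = np.fft.ifft(X)            # the single combined 1/N is applied here
print(np.allclose(x_back, x))      # True: the round trip is exact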

I am not familiar with Matlab syntax, but, if Stuart's answer is correct that cX*cX' is computing the sum of the squares of the magnitudes of the values in the array, then I do not see the point of performing the FFT. You should be able to calculate the total energy in the same way directly from iData; the transform is just a coordinate transform that does not change energy, except for the scaling described above.

We do the FFT for other calculations that we need.

@user438431: If you want Power_X scaled correctly but do not need the FFT data scaled, then you can move the division by fftSize from cX = … / fftSize (where it divides each element of the vector) into Power_X = (cX*cX')/(50*(fftSize*fftSize)) (where it becomes one multiplication [to square fftSize, since Power_X involves squared magnitudes] and one division on a scalar value), saving fftSize-2 operations.

complex conjugate transpose matlab to C - Stack Overflow

c matlab fft translate

What you have is a sample whose length in time is 256/44100 = 0.00580499 seconds. This means that your frequency resolution is 1 / 0.00580499 = 172 Hz. The 256 values you get out of Python correspond, basically, to the frequencies from 86 Hz to 255*172 + 86 Hz = 43946 Hz. The numbers you get out are complex numbers (hence the "j" at the end of every second number).

You need to convert the complex numbers into amplitudes by calculating sqrt(i^2 + j^2), where i and j are the real and imaginary parts, respectively.

If you want to have 32 bars, you should, as far as I understand, take the average of eight successive amplitudes, getting 256 / 8 = 32 bars as you want.

Hi, sorry for the initial (wrong) answer... didn't get the math right. This should be correct now.

Please note that, if c is a complex number, sqrt(c.real**2 + c.imag**2) == abs(c)
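A minimal sketch of those two steps, assuming the 256 complex FFT outputs are in a numpy array fft_out (a hypothetical name):

import numpy as np

amplitudes = np.abs(fft_out)                    # sqrt(re^2 + im^2) for each bin
bars = amplitudes.reshape(32, 8).mean(axis=1)   # average 8 successive bins -> 32 bars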

python - Analyze audio using Fast Fourier Transform - Stack Overflow

python audio signal-processing fft spectrum

If the arrays contain integers of limited size (i.e. in the range -u to u) then you can solve this in O(n + u log u) time by using the fast Fourier transform to convolve the histograms of each collection together.

For example, the set a=[-1,2,2,2,2,3] would be represented by a histogram with values:

ha[-1] = 1
ha[2]  = 4
ha[3]  = 1

After convolving all the histograms together with the FFT, the resulting histogram will contain entries where the value for each bin tells you the number of ways of combining the numbers to get each possible total. To find the answer to your question with a total of 0, all you need to do is read the value of the histogram for bin 0.
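A sketch of the whole procedure in Python, using scipy.signal.fftconvolve for the FFT-based convolution (assuming five arrays of integers in the range -u to u; count_zero_sums is a hypothetical helper):

import numpy as np
from scipy.signal import fftconvolve

def count_zero_sums(arrays, u):
    hists = []
    for a in arrays:
        h = np.zeros(2 * u + 1)
        for v in a:
            h[v + u] += 1              # bin i holds the count of value i - u
        hists.append(h)
    total = hists[0]
    for h in hists[1:]:
        total = fftconvolve(total, h)  # FFT-based convolution
    # After convolving k histograms, bin j counts the ways to reach sum j - k*u,
    # so a total of 0 sits at index k*u.
    return int(round(total[len(arrays) * u]))

print(count_zero_sums([[-1, 2, 2, 2, 2, 3]] * 5, u=3))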

algorithm - 5 numbers such that their sum equals 0 - Stack Overflow

algorithm search

You can use the fast Fourier transform on extremely large inputs (values of n) to find any bit pattern in O(n log n) time. Compute the cross-correlation of a bit mask with the input. The cross-correlation of a sequence x and a mask y, of sizes n and n' respectively, is defined by

$R(m) = \sum_{k=0}^{n'-1} x_{k+m} y_k$

Then occurrences of your bit pattern match the mask exactly where R(m) = Y, where Y is the number of ones in your bit mask.

So if you are trying to match for the bit pattern

[0 0 1 0 1 0]

in

[ 1 1 0 0 1 0 1 0 0 0 1 0 1 0 1]

then you must use the mask

[-1 -1  1 -1  1 -1]

The -1's in the mask guarantee that those places must be 0.

You can implement cross-correlation using the FFT in O(n log n) time.
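Here is a sketch of the idea in Python, using numpy's FFT for the cross-correlation (find_pattern is a hypothetical helper, not a library function):

import numpy as np

def find_pattern(bits, pattern):
    x = np.asarray(bits, dtype=float)
    mask = np.where(np.asarray(pattern) == 1, 1.0, -1.0)  # -1 forces those places to be 0
    target = int((np.asarray(pattern) == 1).sum())        # Y = number of ones in the pattern
    size = 1 << (len(x) + len(mask)).bit_length()         # zero-pad to avoid circular wrap-around
    r = np.fft.irfft(np.fft.rfft(x, size) * np.conj(np.fft.rfft(mask, size)), size)
    # r[m] = sum_k x[k+m] * mask[k]; it equals Y exactly at full matches
    return [m for m in range(len(x) - len(mask) + 1) if round(r[m]) == target]

print(find_pattern([1,1,0,0,1,0,1,0,0,0,1,0,1,0,1], [0,0,1,0,1,0]))  # [2, 8]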

Why would you use Fourier if this can be solved in O(n) time?

c++ - Fastest way to scan for bit pattern in a stream of bits - Stack ...

c++ c algorithm assembly embedded

I think you need to use the Accelerate framework; inside there is the vDSP API, which can do an FFT (fast Fourier transform). It will convert the data from the time domain to the frequency domain. Using the bin size information, you can then extract the magnitude/amplitude above a certain bin.
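This is not vDSP itself, but here is a numpy sketch of the same idea (FFT the buffer, then keep only the magnitudes above a chosen bin; samples and the 300 Hz cutoff are hypothetical):

import numpy as np

fs = 44100
spectrum = np.fft.rfft(samples)                  # time domain -> frequency domain
freqs = np.fft.rfftfreq(len(samples), 1.0 / fs)  # frequency of each bin in Hz
magnitudes = np.abs(spectrum)[freqs >= 300.0]    # magnitudes above the cutoff bin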

For how the FFT works there, you can refer to this question - Understanding FFT in aurioTouch2

P.S. aurioTouch (i.e., aurioTouch 1) does not use the vDSP API. I remember that before iOS 4 there was an FFT function that could do a similar thing, but more slowly. So you can think of vDSP as only being available from iOS 4.0 onward.

Hi, could you please help me with sample code? @KenHui

ios - iPhone app audio recording only in above certain frequency - Sta...

iphone ios audio voice frequency

If you did really want to use clustering, then, depending on your application, you could generate a low-dimensional feature vector for each time series. For example, use the time-series mean, standard deviation, dominant frequency from a Fourier transform, etc. This would be suitable for use with k-means, but whether it would give you useful results depends on your specific application and the content of your time series.
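A minimal sketch of that approach, assuming scikit-learn and a hypothetical list series of 1-D numpy arrays sampled at a common rate fs:

import numpy as np
from sklearn.cluster import KMeans

def features(ts, fs):
    # Low-dimensional summary: mean, standard deviation, dominant frequency.
    spec = np.abs(np.fft.rfft(ts - ts.mean()))
    freqs = np.fft.rfftfreq(len(ts), d=1.0 / fs)
    return [ts.mean(), ts.std(), freqs[np.argmax(spec)]]

X = np.array([features(s, fs=1.0) for s in series])
labels = KMeans(n_clusters=3, n_init=10).fit_predict(X)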

matlab - How can I perform K-means clustering on time series data? - S...

matlab time-series cluster-analysis data-mining k-means

After your FFT and filter, you need to do an inverse FFT to get the data back to the time domain. Then you want to add that set of samples to your .WAV file.

As far as producing the file itself goes, the format is widely documented (Googling for ".WAV format" should turn up more results than you have any use for), and pretty simple. It's basically a simple header (called a "chunk") that says it's a .WAV file (or actually a "RIFF" file). Then there's an "fmt " chunk that tells about the format of the samples (bits per sample, samples per second, number of channels, etc.) Then there's a "data" chunk that contains the samples themselves.
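As a sketch of how little is involved, Python's standard-library wave module writes the RIFF/"fmt "/"data" chunks described above for you; this produces one second of a 440 Hz tone, 16-bit mono (file name and tone are arbitrary):

import math
import struct
import wave

with wave.open('tone.wav', 'wb') as w:
    w.setnchannels(1)      # mono
    w.setsampwidth(2)      # 16 bits per sample
    w.setframerate(44100)  # samples per second
    frames = b''.join(struct.pack('<h', int(32000 * math.sin(2 * math.pi * 440 * t / 44100)))
                      for t in range(44100))
    w.writeframes(frames)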

Since it sounds like you're going to be doing this in real time, my advice would be to forget about doing your FFT, filter, and iFFT. An FIR filter will give essentially the same results, but generally a lot faster. The basic idea of the FIR filter is that instead of converting your data to the frequency domain, filtering it, then converting back to the time domain, you convert your filter coefficients to the time domain and apply them (fairly) directly to your input data. This is where DSPs earn their keep: nearly all of them have multiply-accumulate instructions, which can implement most of a FIR filter in one instruction. Even without that, however, getting a FIR filter to run in real time on a modern processor doesn't take any real trick unless you're doing really fast sampling. In any case, it's a lot easier than getting an FFT/filter/iFFT to operate at the same speed.
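For illustration, a minimal windowed-sinc FIR low-pass in numpy (a sketch, not a production filter design; the cutoff and tap count are arbitrary):

import numpy as np

def fir_lowpass(x, cutoff, fs, ntaps=64):
    # Windowed-sinc coefficients: ideal low-pass truncated by a Hamming window.
    n = np.arange(ntaps) - (ntaps - 1) / 2.0
    h = np.sinc(2.0 * cutoff / fs * n) * np.hamming(ntaps)
    h /= h.sum()                          # unity gain at DC
    # Each output sample is a dot product (multiply-accumulates) of taps and input.
    return np.convolve(x, h, mode='same')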

@Jerry Coffin - I have to disagree on the speed of FIR versus FFT/multiply/IFFT. For a 64 tap FIR filter, each output sample requires 64 multiply-accumulates. Via an FFT using an N=128 transform and overlap-save processing (en.wikipedia.org/wiki/Overlap-save_method), you do a transform, complex buffer multiply, and inverse transform = 2*128*log2(128) + 6*128 = 2560 operations, which would calculate 64 samples for a ops/sample count of 40, saving you 24 cycles. There's some handwaving on memory access etc here, but as your filter gets longer, the FFT method shines.

There is some point at which a FFT will be better, that's true -- IME, that's pretty rare in practice though. In particular, a FIR is extremely cache friendly (linear read through the data and coefficients). By contrast, an FFT practically defines "cache hostile". A single cache miss is virtually guaranteed to be at least 50 cycles on a modern processor. On a modern processor, you can often treat CPU cycles as free; the limiting factor is memory bandwidth.

I put the example that is here: ccrma.stanford.edu/courses/422/projects/WaveFormat into a .txt and then changed the extension to .wav, and it didn't work. Wasn't it supposed to work just like that?

Glancing at that, it shows the bytes in hex -- did you enter them as hexadecimal text? If so, it shouldn't work. Rather, those are supposed to be entered as the binary values of individual bytes. It also looks like the sample they show is incomplete -- it shows the headers and the first few samples, but its header says it'll have a lot more samples than show up there.

c# - How to record to .wav from microphone after applying fast fourier...

c# filter wav record fft

That approach goes by the name short-time Fourier transform. You get all the answers to your question on Wikipedia: https://en.wikipedia.org/wiki/Short-time_Fourier_transform

It works great in practice, and you can even get better resolution out of it than you would expect from a rolling window by using the phase difference between the FFTs.

Here is one article that does pitch shifting of audio signals. The way to get higher frequency resolution is well explained: http://www.dspdimension.com/admin/pitch-shifting-using-the-ft/
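A minimal short-time Fourier transform sketch in Python (frame and hop sizes are arbitrary):

import numpy as np

def stft(x, frame=1024, hop=256):
    # Hann-windowed, overlapping frames; row k is the spectrum of the
    # window starting at sample k*hop.
    w = np.hanning(frame)
    n = 1 + (len(x) - frame) // hop
    return np.array([np.fft.rfft(w * x[i*hop : i*hop+frame]) for i in range(n)])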

signal processing - Is a "rolling" FFT possible and could it be of use...

signal-processing fft processing

The standard tool for transforming time-domain signals like audio samples into frequency-domain information is the Fourier transform.

Grab the fast Fourier transform library of your choice and throw it at your data; you will get a decomposition of the signal into its constituent frequencies. You can then take that data and visualize it however you like. Spectrograms are particularly easy; you just need to plot the magnitude of each frequency component versus the frequency and time.
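A rough sketch of that plotting step in Python with numpy/matplotlib (x is a hypothetical array of mono samples):

import numpy as np
import matplotlib.pyplot as plt

frame, hop = 1024, 512
frames = [x[i:i+frame] * np.hanning(frame) for i in range(0, len(x) - frame, hop)]
mags = np.abs(np.fft.rfft(frames, axis=1))          # one magnitude spectrum per frame
plt.imshow(20 * np.log10(mags.T + 1e-12), origin='lower', aspect='auto')
plt.xlabel('frame (time)')
plt.ylabel('frequency bin')
plt.show()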

I've managed the FFT and received a double[] containing values from -1 to 1. Can you explain in more detail what "plot the magnitude of each frequency component versus the frequency and time" means and how you would code that part?

c# - Visualization of streamed music from Spotify - Stack Overflow

c# visualization naudio libspotify

Yes, it's possible for a pure function to return the time, if it's given that time as a parameter. Different time argument, different time result. Then form other functions of time as well and combine them with a simple vocabulary of function(-of-time)-transforming (higher-order) functions. Since the approach is stateless, time here can be continuous (resolution-independent) rather than discrete, greatly boosting modularity. This intuition is the basis of Functional Reactive Programming (FRP).
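As a rough illustration of that intuition (illustrative only; real FRP libraries are richer), behaviors can be modeled as plain functions of continuous time, with higher-order combinators building new behaviors from old ones:

import math

def time(t):
    # The identity behavior: the value of "time" at time t is t itself.
    return t

def lift(f, *behaviors):
    # Lift an ordinary function to act on behaviors, yielding a new behavior.
    return lambda t: f(*(b(t) for b in behaviors))

wobble = lift(math.sin, time)           # the behavior sin(t)
louder = lift(lambda v: 2 * v, wobble)  # the behavior 2*sin(t)
print(louder(1.25))                     # sample at any (continuous) time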

scala - How can a time function exist in functional programming? - Sta...

scala haskell f# functional-programming clean-language

Have you considered using the multiprocessing module to parallelize processing the files? Assuming that you're actually CPU-bound here (meaning it's the Fourier transform that's eating up most of the running time, not reading/writing the files), that should speed up execution time without actually needing to speed up the loop itself.

For example, something like this (untested, but should give you the idea):

import multiprocessing

import matplotlib
matplotlib.use('Agg')  # headless backend; safer when plotting in worker processes
from matplotlib.pyplot import figure, plot, xlim, ylim, savefig, close
from numpy import loadtxt, absolute, median, fft

def do_transformation(filename):
    t, f = loadtxt(filename, unpack=True)

    dt = t[1] - t[0]
    fou = absolute(fft.fft(f))
    frq = absolute(fft.fftfreq(len(t), dt))

    ymax = median(fou) * 30

    figure(figsize=(15, 7))
    plot(frq, fou, 'k')

    xlim(0, 400)
    ylim(0, ymax)

    iname = filename.replace('.dat', '.png')
    savefig(iname, dpi=80)
    close()

if __name__ == '__main__':     # required where worker processes are spawned
    pool = multiprocessing.Pool(multiprocessing.cpu_count())
    for filename in filelist:  # filelist: your list of .dat files
        pool.apply_async(do_transformation, (filename,))
    pool.close()
    pool.join()

You may need to tweak what work actually gets done in the worker processes. Trying to parallelize the disk I/O portions may not help you much (or even hurt you), for example.

Hmmmm. Could you elaborate a little more? I'm in a bit of a time crunch with this program. At the rate I'm going, it looks like another day or two before finishing, and I'm really shooting for 12-18 hours.

I just selected your comment as the answer, since it effectively sped up the program almost 8x (the number of CPUs). But if you could, I have another question. There are a handful of files here that are quite substantial and take quite a long time to process. Is there a way to assign multiple processors to the same task instead of applying them to separate files?

There's no simple tweak to say "throw more CPUs at this task". You'd need to refactor the code to break your worker method up into smaller pieces that multiple processes can work on at the same time, and then pull it back together once all the pieces are ready. For example, it looks like fou = absolute(... and frq = absolute(... could be calculated in parallel. You have to be careful, though, because passing large amounts of data between processes can be slow. It's hard for me to say exactly what kind of changes you could make because I really don't understand the algorithms you're using.

python - What is the fastest/most efficient way to loop through a larg...

python matplotlib fft figure

What you want to do is certainly possible, and you are on the right track, but you seem to misunderstand a few points in the example. First, the example shows that the technique is the equivalent of linear regression in the time domain, exploiting the FFT to perform in the frequency domain an operation with the same effect. Second, the trend that is removed is not linear; it is a sum of sinusoids, which is why the FFT is used to identify particular frequency components in a relatively tidy way.

In your case it seems you are interested in the residuals. The initial approach is therefore to proceed as in the example as follows:

(1) Perform a rough "detrending" by removing the DC component (the mean of the time-domain data)

(2) FFT the data and inspect it; choose frequency channels that contain most of the signal.

You can then use those channels to generate a trend in the time domain and subtract that from the original data to obtain the residuals. You need not proceed by using IFFT, however. Instead you can explicitly sum over the cosine and sine components. You do this in a way similar to the last step of the example, which explains how to find the amplitudes via time-domain regression, but substituting the amplitudes obtained from the FFT.

The following code shows how you can do this:

tim = (time - time0)/timestep;  % <-- acquisition times for your *new* data, normalized
NFpick = [2 7 13]; % <-- channels you picked to build the detrending baseline

% Compute the trend
mu = mean(ts);
tsdft = fft(ts-mu);
Nchannels = length(ts);      % <-- size of time domain data
Mpick = 2*length(NFpick);
X(:,1:2:Mpick) = cos(2*pi*(NFpick-1)'/Nchannels*tim)';
X(:,2:2:Mpick) = sin(-2*pi*(NFpick-1)'/Nchannels*tim)';

% Generate beta vector "bet" containing scaled amplitudes from the spectrum
bet = 2*tsdft(NFpick)/Nchannels;
bet = reshape([real(bet) imag(bet)].', numel(bet)*2,1);
trend = X*bet + mu;

To remove the trend just do

detrended = dat - trend;

where dat is your new data acquired at times tim. Make sure you define the time origin consistently. In addition this assumes the data is real (not complex), as in the example linked to. You'll have to examine the code to make it work for complex data.

preprocessor - Fast fourier transform for deasonalizing data in MATLAB...

matlab preprocessor filtering signal-processing fft

Apple provides the aurioTouch sample code, which displays the input audio in one of several forms: a regular time-domain waveform; a frequency-domain waveform (computed by performing a fast Fourier transform on the incoming signal); and a sonogram view (a view displaying the frequency content of a signal over time, with color signaling relative power, the y-axis being frequency, and the x-axis time).

iphone - How to get Beats per minutes of a song in objective-c - Stack...

iphone objective-c

First, the expected one. You're talking about "removing the wavelength dependence of the phase". If you did exactly that - zeroed out the phase completely - you would actually get a slightly compressed peak. What you actually do is add a linear function to the phase. This does not compress anything; it is a well-known transformation that is equivalent to shifting the peaks in the time domain. Just a textbook property of the Fourier transform.

Then, the unintended one. You convert the spectrum obtained with fft using fftshift for better display. Thus, before using ifft to convert it back, you need to apply ifftshift first. Since you don't, the spectrum is effectively shifted in the frequency domain. This results in a linear function of time being added to your time-domain phase, so the difference between adjacent points, which used to be near zero, is now about pi.
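A quick numpy demonstration of that second point (the alternating sign is the pi-per-sample phase ramp described above):

import numpy as np

x = np.random.randn(64)
spec = np.fft.fftshift(np.fft.fft(x))               # shifted for display
bad = np.fft.ifft(spec)                             # forgot to undo the shift first
good = np.fft.ifft(np.fft.ifftshift(spec))          # correct round trip
print(np.allclose(good, x))                         # True
print(np.allclose(bad, x * (-1) ** np.arange(64)))  # True: pi phase ramp per sample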

Does this mean that fitting a best-fit line to the unwrapped phase and subtracting it from the actual phase is not a valid method to attempt to compress the pulses?

I was under the impression that turning the wavelength dependence of the phase into a constant (or as close as possible) within the bounds of the pulses in the spectral range would reduce chirp and thus compress the peak?

In general, yes, it will. However, you're fitting a linear function, but a linear summand does not change the peak shape. If you need to compress it you need to use nonlinear functions for approximation. Linear ones are of no use for this task.

What if the phase within the chirp is linear with wavelength?

If the phase is linear w.r.t. wavelength, your peak is already in the most compressed state possible.

transform - Complex FFT then Inverse FFT MATLAB - Stack Overflow

matlab transform fft inverse

You may have too little data for FFT/DWT to make sense. DTW may be better, but I also don't think it makes sense for sales data - why would there be an x-week temporal offset from one location to another? It's not as if the data were captured at unknown starting weeks.

FFT and DWT are good when your data has interesting repetitive patterns, and you have A) good temporal resolution (for audio data, e.g. 16000 Hz - I am talking about thousands of data points!) and B) no idea of what frequencies to expect. If you know, e.g., that you will have weekly patterns (e.g. no sales on Sundays), then you should filter them with other algorithms instead.

DTW (dynamic time warping) is good when you don't know when the events start and how they align. Say you are capturing heart measurements. You cannot expect the hearts of two subjects to beat in synchronization. DTW will try to align this data, and may (or may not) succeed in matching e.g. an anomaly in the heartbeat of two subjects. In theory...

Maybe all you need is spend more time in preprocessing your data, in particular normalization, to be able to capture similarity.

Thanks for your answer! So what method do you suggest? The result I want to achieve is to cluster products with different dynamics of sales and present these different dynamics on plots.

I suggest doing a lot of preprocessing, then whatever algorithm you feel comfortable with and which yields reasonable results. But preprocessing is key. (And it depends on your data; we cannot help you preprocess.)

r - Fast Fourier Transform and Clustering of Time Series - Stack Overf...

r fft time-series cluster-analysis

Yes, the FFT is merely an efficient DFT algorithm. Understanding the FFT itself might take some time unless you've already studied complex numbers and the continuous Fourier transform; but it is basically a change of basis, to a basis derived from periodic functions.
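To make that concrete, here is the O(N^2) DFT written directly from the definition, checked against numpy's FFT (they compute the same thing; the FFT is just faster):

import numpy as np

def dft(x):
    N = len(x)
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)  # matrix of the periodic basis functions
    return W @ x

x = np.random.randn(128)
print(np.allclose(dft(x), np.fft.fft(x)))  # True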

(If you want to learn more about Fourier analysis, I recommend the book Fourier Analysis and Its Applications by Gerald B. Folland)

algorithm - How exactly do you compute the Fast Fourier Transform? - S...

algorithm math fft

Assembly language is used for transforming higher-level programming languages like C into machine code. Processors can only run machine code -- a sequence of short, discrete instructions encoded in binary format. Every time any program runs, machine code is being executed by a processor. Assembly language is simply a human-readable form of machine code.

The job of transforming high-level code into machine code is performed by a compiler, and assembly is typically created along the way as an intermediate representation before being translated into machine code. In this light, assembly is written at least as often as popular high-level programming languages -- it's just written by another program.

Reasons you might write a program in assembly language

  • You don't trust a compiler to generate optimized or working machine code

q - Assembly Language Usage - Stack Overflow

assembly q

Following this question, I have a doubt. How do I know my sampling frequency and maximum frequency over a window of specific time length containing the data points? To elaborate my question:

I have a set of accelerometer readings in X, Y, Z axes obtained from an android based smart-phone. The data (in X, Y, Z) was recorded at different time stamps and there is no uniform time period for recording the data.

My data set looks like timestamp, X, Y, Z. First, I did a filtering (using a low-pass) on this data and now want to perform an FFT on a time window of 1 min, or maybe a window containing 250 samples (not very sure about the window length). I am using the FastFourierTransformer class of Apache Commons in Java (https://commons.apache.org/proper/commons-math/javadocs/api-2.2/org/apache/commons/math/transform/FastFourierTransformer.html).

I am getting the FFT magnitude, but I was wondering how I know the corresponding frequency of each FFT magnitude. I know the corresponding frequency would be ((n*Fs)/N), where n is the bin number or data point index, Fs is the sampling frequency, and N is the number of input data points over the window. Now my question is: how do I know Fs for a given set of data, say for example input data in an array such as [1 2 3 4 5 6 7], over a window of size Nt milliseconds, where Nt = (lastTimeStamp - firstTimeStamp)?
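A sketch of that bookkeeping in numpy (timestamps is a hypothetical array of the recording times in milliseconds):

import numpy as np

t = np.asarray(timestamps, dtype=float)
Nt = (t[-1] - t[0]) / 1000.0   # window length in seconds
N = len(t)
Fs = N / Nt                    # average sampling rate over the window
freqs = np.arange(N) * Fs / N  # frequency of bin n is n*Fs/N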

A related question immediately follows: does doing so agree with the Nyquist sampling criterion, which says Fs >= 2*maxF in the signal? I don't know the maximum frequency over the time window Nt.

java - Build sample data for apache commons Fast Fourier Transform alg...

java signal-processing fft apache-commons

Z-Transforms (ZT)

Analysis of discrete time LTI systems can be done using z-transforms. The z-transform is a powerful mathematical tool to convert difference equations into algebraic equations.

The bilateral (two sided) z-transform of a discrete time signal x(n) is given as

$Z.T[x(n)] = X(Z) = \Sigma_{n = -\infty}^{\infty} x(n)z^{-n} $

The unilateral (one sided) z-transform of a discrete time signal x(n) is given as

$Z.T[x(n)] = X(Z) = \Sigma_{n = 0}^{\infty} x(n)z^{-n} $

Z-transform may exist for some signals for which Discrete Time Fourier Transform (DTFT) does not exist.
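A standard textbook example of that last point: for $x(n) = a^n u(n)$ with $|a| > 1$, the DTFT does not exist (the signal is not absolutely summable), but the unilateral z-transform converges for $|z| > |a|$:

$Z.T[a^n u(n)] = \Sigma_{n = 0}^{\infty} a^n z^{-n} = \frac{1}{1 - az^{-1}}, \quad |z| > |a| $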