
Without knowing details about the mentioned Bluetooth Music Player, it seems to use a simple Bluetooth data connection; otherwise you would not need to install a client on the playing/sending device. To stream audio from the microphone to another device, you can record it on your sending device and send it to the receiving device. You will need to implement a protocol for that purpose. Or you can implement an alternative A2DP sink service. That is what the sink is: a device with a Bluetooth protocol stack that includes an implementation of the A2DP sink role.

Edit: For the case you detailed in your comments, the sending device should be left as-is, without installing any app. That implicitly means your solution must make use of the out-of-the-box Bluetooth functionality of that Android device. What you can use here is therefore limited to the profiles that Android typically supports, which are HSP, HFP and A2DP. Since you obviously want to stream music, A2DP would be your choice. On the device that is supposed to receive the audio stream and do the playback, you have to implement a service providing the A2DP sink, as a self-implemented Bluetooth service opening a BluetoothServerSocket on RFCOMM as described in the Android documentation. You will have to spend much effort implementing this, and I am not sure if you will need a license for it.
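For illustration only, here is a minimal Java sketch of such a self-implemented RFCOMM service on the receiving side. It is not an actual A2DP sink, and a stock sender would still need an app that writes to this socket; the service name, UUID and audio parameters are assumptions.

import java.io.InputStream;
import java.util.UUID;

import android.bluetooth.BluetoothAdapter;
import android.bluetooth.BluetoothServerSocket;
import android.bluetooth.BluetoothSocket;
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

// Hypothetical RFCOMM "sink": listen for one connection, read raw 16-bit PCM from the
// socket and play it with AudioTrack. UUID, service name and audio format are assumptions.
public class RfcommAudioSink extends Thread {
    private static final UUID SERVICE_UUID =
            UUID.fromString("00001101-0000-1000-8000-00805F9B34FB"); // SPP UUID (assumed)

    @Override
    public void run() {
        try {
            BluetoothServerSocket server = BluetoothAdapter.getDefaultAdapter()
                    .listenUsingRfcommWithServiceRecord("AudioSink", SERVICE_UUID);
            BluetoothSocket socket = server.accept();      // blocks until the sender connects
            InputStream in = socket.getInputStream();

            int bufSize = AudioTrack.getMinBufferSize(44100,
                    AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
            AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
                    AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
                    bufSize, AudioTrack.MODE_STREAM);
            track.play();

            byte[] buffer = new byte[bufSize];
            int read;
            while ((read = in.read(buffer)) > 0) {
                track.write(buffer, 0, read);              // assumes the sender pushes 16-bit PCM
            }
            track.stop();
            socket.close();
            server.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}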

Sorry for making you confused. Actually, what I want to implement is streaming audio from one Android device to another. But I came to know that it's not possible due to the lack of an A2DP sink implementation on Android devices. Can I implement the same with any profile other than A2DP? At the same time, I want to implement it without a client application. For now I am not giving much priority to sound quality.

You say you don't want a client application; do you mean on the receiving or the sending device?

Like in the case of an Android device pairing with a Bluetooth headset: a custom application is not needed on the phone to send the data to the headset. But in that case the Bluetooth headset has the A2DP sink implementation, and in my case it is difficult to integrate sink support into my Android device. So, is there any other Android-supported profile with which I can stream audio from one Android device to another? I think now you see what my exact problem is.

Bluetooth audio streaming between android devices - Stack Overflow

android bluetooth streaming audio-streaming a2dp

[self.mPlayerItem addObserver:self 
                   forKeyPath:kStatusKey 
                      options:NSKeyValueObservingOptionInitial | NSKeyValueObservingOptionNew
                      context:AVPlayerDemoPlaybackViewControllerStatusObservationContext];

to monitor the status key ("status"). Then I created the player

[self setPlayer:[AVPlayer playerWithPlayerItem:self.mPlayerItem]];

And in the observeValueForKeyPath

if (context == AVPlayerDemoPlaybackViewControllerStatusObservationContext)
{        
    AVPlayerStatus status = [[change objectForKey:NSKeyValueChangeNewKey] integerValue];
    switch (status)
    {
            /* Indicates that the status of the player is not yet known because 
             it has not tried to load new media resources for playback */
        case AVPlayerStatusUnknown:
        {
            [lblvalidation setText:@"Loading..."];

            NSLog(@"AVPlayerStatusUnknown");
        }
            break;

        case AVPlayerStatusReadyToPlay:
        {
            /* Once the AVPlayerItem becomes ready to play, i.e. 
             [playerItem status] == AVPlayerItemStatusReadyToPlay,
             its duration can be fetched from the item. */

            NSLog(@"AVPlayerStatusReadyToPlay");

            [self.player play];
            [lblvalidation setText:@"Playing..."];
        }
            break;

        case AVPlayerStatusFailed:
        {
            [lblvalidation setText:@"Error..."];
            NSLog(@"AVPlayerStatusFailed");
        }
            break;
    }
}

Will it work in the following scenario? The player is playing and the network becomes slow or goes down; will it return AVPlayerStatusUnknown?

iphone - How to get notification for audio streaming status from AVPla...

iphone audio streaming avplayer

Sooo... I just solved this only hours after I desperately put a bounty on it, but that's worth it.

I decided to start over. For the design with threads etc. I took some help from this awesome project; it helped me a lot. Now I use only one thread. It seems like the main point was the casting, but I am not too sure; it may also have been the multithreading. I don't know what kind of bytes the byte[] write() of AudioTrack expects, but certainly no float bytes. So I knew I needed to use the short[] write(). What I did was:

  • put the bytes in a byte[]
  • take 4 of them at a time and cast them to a float in a loop
  • cast each float to a short

Since I already did that before, I am not too sure what the problem was. But now it works. I hope this can help someone who goes through the same pain as me. Big thanks to all of you who participated and commented.

Edit: I just thought about the changes and figured that using CHANNEL_CONFIGURATION_STEREO instead of MONO earlier contributed a lot to the stuttering. So you might want to try that first if you encounter this problem. Still, for me it was only part of the solution; changing just that didn't help.

static final int frequency = 44100;
static final int channelConfiguration = AudioFormat.CHANNEL_CONFIGURATION_MONO;
static final int audioEncoding = AudioFormat.ENCODING_PCM_16BIT;
boolean isPlaying;
int playBufSize;
Socket socket;
AudioTrack audioTrack;

playBufSize = AudioTrack.getMinBufferSize(frequency, channelConfiguration, audioEncoding);
audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, frequency, channelConfiguration,
        audioEncoding, playBufSize, AudioTrack.MODE_STREAM);

new Thread() {
    byte[] buffer = new byte[4096];

    public void run() {
        try {
            socket = new Socket(ip, port);   // ip/port of the sending device
        } catch (Exception e) {
            e.printStackTrace();
        }
        audioTrack.play();
        isPlaying = true;
        while (isPlaying) {
            int readSize = 0;
            try {
                readSize = socket.getInputStream().read(buffer);
            } catch (Exception e) {
                e.printStackTrace();
            }
            if (readSize < 0) break;         // end of stream

            // The sender transmits 32-bit float PCM; convert each group of 4 bytes
            // (little-endian) to a float, then scale it to a 16-bit short so it can
            // be fed to AudioTrack's short[] write().
            short[] sbuffer = new short[readSize / 4];
            for (int i = 0; i + 3 < readSize; i += 4) {
                int asInt = (buffer[i] & 0xFF)
                        | ((buffer[i + 1] & 0xFF) << 8)
                        | ((buffer[i + 2] & 0xFF) << 16)
                        | ((buffer[i + 3] & 0xFF) << 24);
                float asFloat = Float.intBitsToFloat(asInt);
                sbuffer[i / 4] = (short) (asFloat * Short.MAX_VALUE);
            }
            audioTrack.write(sbuffer, 0, sbuffer.length);
        }
        audioTrack.stop();
        try {
            socket.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}.start();

java - Audio streaming via TCP socket on Android - Stack Overflow

java android sockets audio-streaming audiotrack

I don't know of any internet radio services playing back their streams with the Web Audio API currently, but I wouldn't be surprised to find one. I've been working on one myself using Audiocog's excellent Aurora.js library, which enables codecs in-browser that wouldn't normally be available, by decoding the audio with JavaScript. However, for compatibility reasons as you have pointed out, this would be considered a bit experimental today.

Most internet radio stations use progressive HTTP streaming (SHOUTcast/Icecast style) which can be played back within an <audio> element or Flash. This works well but can be hard to get right, especially if you use SHOUTcast servers as they are not quite 100% compatible with HTTP, hurting browser support in some versions of Firefox and a lot of mobile browsers. I ended up writing my own server called AudioPump Server to get better browser and mobile browser support with HTTP progressive.

Depending on your Flash code and ActionScript version available, you might also have to deal with memory leaks in creative ways, since by default Flash will keep all of your stream data in memory indefinitely as it was never built to stream over HTTP. Many use RTMP with Flash (with Wowza or similar on the server), which Flash was built to stream with to get around this problem.

iOS supports HLS, which is basically a collection of static files served by an HTTP server. The encoder writes a chunk of the stream to each file as the encoding occurs, and the client just downloads them and plays them back seamlessly. The benefit here is that the client can choose a bitrate to stream, raising quality up and down as network conditions change. This also means that you can completely switch networks (say from WiFi to 3G) and still maintain the stream, since chunks are downloaded independently and statelessly. Android "supports" HLS, but it is buggy. Safari is the only browser currently supporting HLS.

Compatibility detection is not something you need to solve yourself. There are many players, such as jPlayer and JW Player which wrangle HTML5 audio support detection, codec support detection, and provide a common API between playback for HTML5 audio and Flash. They also provide an optional UI if you want to get up and running quickly.

Finally, most stations do offer a link to allow you to play the stream in your own media player. This is done by linking to a playlist file (usually M3U or PLS) which is downloaded and often immediately opened (as configured by the user and their browser). The player software loads this playlist and then connects directly to the streaming server to begin playback. On Android, you simply link to the stream URL. It will detect the Content-Type response header, disconnect, and open its configured media player for playback. These days you have to hunt to find these direct links, but they are out there.

If you ever want to know what a station is using without digging around in their compiled and minified source code, simply use a tool like Fiddler or Wireshark and watch the traffic. You will find that it is very straightforward under the hood.

html5 - Does web based radio and audio streaming services use the Web ...

html5 audio audio-streaming webradio

private var AVPlayerDemoPlaybackViewControllerStatusObservationContext = 0
player.currentItem!.addObserver(self, forKeyPath: "status", options: NSKeyValueObservingOptions.New, context: &AVPlayerDemoPlaybackViewControllerStatusObservationContext)
override func observeValueForKeyPath(keyPath: String?, ofObject object: AnyObject?, change: [String : AnyObject]?, context: UnsafeMutablePointer<Void>) {

    if context == &AVPlayerDemoPlaybackViewControllerStatusObservationContext {
        if let change = change as? [String: Int]
        {
            let status = change[NSKeyValueChangeNewKey]!

            switch status {
            case AVPlayerStatus.Unknown.rawValue:
                print("The status of the player is not yet known because it has not tried to load new media resources for playback")

            case AVPlayerStatus.ReadyToPlay.rawValue:
                self.playButtonPressed(playButton)
                print("The player is Ready to Play")

            case AVPlayerStatus.Failed.rawValue:
                print("The player failed to load the video")

            default:
                print("Other status")
            }
        }
    } else {
        super.observeValueForKeyPath(keyPath, ofObject: object, change: change, context: context)
    }

}

iphone - How to get notification for audio streaming status from AVPla...

iphone audio streaming avplayer

We use Web Audio for streaming via Aurora.js using a protocol very similar to HTTP Live Streaming. We did this because we wanted the same streaming backend to serve iPhone, Android and the web.

It was all a very long and painful process that took over 6 months of effort, but now that it's all finished, it's all good.

Have a look at http://radioflote.com and feel free to shoot questions or clarifications regarding anything. Go ahead and disassemble the code if you want to. Not a problem.

We created apps for mobiles because we weren't sure if it was a good idea to tax the mobile processors with software decoding. But it works on newer Android phones; the iPhone didn't support HTML audio for a long while, but I heard it's working on the latest devices. I haven't confirmed this yet.

html5 - Does web based radio and audio streaming services use the Web ...

html5 audio audio-streaming webradio

To answer my own question: the patch provided previously in "cordova media plugin stopped working on Android 6" does answer my problem. My mistake was applying it in the wrong place. Changing the code directly in platforms/android/src/org/apache/cordova/media/AudioPlayer.java and rebuilding the app does work (I was patching the plugin code instead). Comment out line 354, so it becomes:

//this.seekToPlaying(this.seekOnPrepared);

Cordova Media plugin fails with mp3 internet audio streaming on Androi...

android cordova audio streaming android-6.0-marshmallow

I have done something very close to what you have been trying to do. I am using gstreamer on my server, listening on a UDP port. There is also a relay server written in Java which is basically nothing more than a loopback socket: one server port waits for mobile client connections and, upon receiving data, dumps it all into gstreamer's UDP port using DatagramPackets (the Java class for UDP packets). The only catch is to find the proper decoder for your gstreamer pipeline.
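Roughly, that Java relay could look like the sketch below. The ports, the TCP transport from the phone, and the packet size are all assumptions, and the gstreamer pipeline itself is not shown.

import java.io.InputStream;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

// Hypothetical relay: accept one mobile client over TCP, then forward every chunk it
// sends as a UDP datagram to the port gstreamer is listening on (udpsrc).
public class AudioRelay {
    public static void main(String[] args) throws Exception {
        int tcpPort = 6000;                               // where the phone connects (assumed)
        int gstPort = 5004;                               // gstreamer udpsrc port (assumed)
        InetAddress gstHost = InetAddress.getByName("127.0.0.1");

        ServerSocket server = new ServerSocket(tcpPort);
        DatagramSocket udp = new DatagramSocket();

        Socket client = server.accept();                  // wait for the mobile client
        InputStream in = client.getInputStream();

        byte[] buffer = new byte[1400];                   // keep datagrams below a typical MTU
        int read;
        while ((read = in.read(buffer)) > 0) {
            udp.send(new DatagramPacket(buffer, read, gstHost, gstPort));
        }

        client.close();
        server.close();
        udp.close();
    }
}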

Streaming audio from Android to desktop application - Stack Overflow

android audio streaming java

Everything below this line is out of date. Keeping it here for posterity.

EDIT 3: As of iOS 10, HLS supports fragmented MP4 files. The answer now is to create fragmented MP4 assets, with both a DASH and an HLS manifest. Pretend Flash, iOS 9 and below, and IE 10 and below don't exist.

EDIT 2: As people in the comments are pointing out, things change. Almost all browsers will support AVC/AAC codecs. iOS still requires HLS, but via adapters like hls.js you can play HLS in MSE. The new answer is HLS + hls.js if you need iOS, or just fragmented MP4 (i.e. DASH) if you don't.

There are many reasons why video and, specifically, live video is very difficult. (Please note that the original question specified that HTML5 video is a requirement, but the asker stated Flash is possible in the comments. So immediately, this question is misleading)

EDIT: since I wrote this answer Media Source Extensions have matured, and are now very close to becoming a viable option. They are supported on most major browsers. IOS continues to be a hold out.

Next, you need to understand that video on demand (VOD) and live video are very different. Yes, they are both video, but the problems are different, hence the formats are different. For example, if the clock in your computer runs 1% faster than it should, you will not notice on a VOD. With live video, you will be trying to play video before it happens. If you want to join a live video stream in progress, you need the data necessary to initialize the decoder, so it must be repeated in the stream or sent out of band. With VOD, you can read the beginning of the file, then seek to whatever point you wish.

Now let's dig in a bit.

Common Delivery methods for live video in browsers:

Common Delivery methods for VOD in browsers:

  • I'm not going to talk about MKV and OGG because I do not know them very well.
  • DASH (via MSE but no h.264)

MP4 cannot be used for live video (NOTE: DASH is a superset of MP4, so don't get confused with that). MP4 is broken into two pieces: moov and mdat. mdat contains the raw audio/video data, but it is not indexed, so without the moov it is useless. The moov contains an index of all data in the mdat, but due to its format it cannot be 'flattened' until the timestamps and size of EVERY frame are known. It may be possible to construct a moov that 'fibs' the frame sizes, but it is very wasteful bandwidth-wise.

So if you want to deliver everywhere, we need to find the least common denominator. You will see there is no LCD here without resorting to Flash. For example:

  • iOS only supports h.264 video, and it only supports HLS for live.
  • Flash does not work in iOS

The closest thing to an LCD is using HLS to get your iOS users, and flash for everyone else. My personal favorite is to encode HLS, then use flash to play HLS for everyone else. You can play HLS in flash via JW player 6, (or write your own HLS to FLV in AS3 like I did)

Soon, the most common way to do this will be HLS on iOS/Mac and DASH via MSE everywhere else (this is what Netflix will be doing soon). But we are still waiting for everyone to upgrade their browsers. You will also likely need a separate DASH/VP9 for Firefox (I know about OpenH264; it sucks. It can't do video in main or high profile, so it is currently useless).

Thanks szatmary for the detailed background and pro/cons about the various options. I have selected this answer as the accepted one as the outline of the concepts are more important than the specific fix which I found to answer the original question. Good luck with the bounty!

This is not a working solution to this question. There is a working solution to this problem below.

Why is this marked as the answer when it's not the answer??

Firefox now supports MSE and h.264 natively. Go to www.youtube.com/html5 with the latest FF browser to confirm. I Tested with FF 37. Safari 8+ on Mac also now supports MSE.

node.js - Best approach to real time http streaming to HTML5 video cli...

html5 node.js ffmpeg streaming

Can you write a client that just connects to your phone on that port and receives data?

Now, there are no MP4 Java decoders, so you'll need to use another format. Take a look at some sample apps using JavaLayer or JOgg. They both work with any InputStream, so as long as you can open a socket, you can play back your stream.
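As a rough illustration (not from the original answer), JavaLayer (the JLayer library) can decode MP3 straight from a socket's InputStream; the host and port below are placeholders.

import java.net.Socket;
import javazoom.jl.player.Player;

// Hypothetical client: connect to the phone and let JLayer decode and play the MP3 stream.
public class SocketMp3Client {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket("192.168.1.50", 8000);    // phone's address/port (placeholders)
        Player player = new Player(socket.getInputStream()); // decodes MP3 frames from the stream
        player.play();                                       // blocks until the stream ends
        socket.close();
    }
}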

Also, I'm not sure about Android, but don't you need to open a ServerSocket and wait for connections?

Looks like the phone is the client here, and the desktop PC is the server, so that's where a ServerSocket would be used.

Yes, as Nick says, the desktop pc is the server here. But thank you for the tips on decoding apps. I will have a look at those.

Streaming audio from Android to desktop application - Stack Overflow

android audio streaming java

Having implemented a music streaming app, I can share a little with you.

If you want to stream and use the Android MediaPlayer class, MP3 or OGG is your best bet for a format.

If your architecture is client-server, i.e. a real server on the Internet serving streams to Android devices, then just stream MP3 or OGG bytes over HTTP. Just point MediaPlayer to a URL on your server.
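A minimal sketch of that client-server case, assuming an Android client and a placeholder URL:

import java.io.IOException;

import android.media.AudioManager;
import android.media.MediaPlayer;

// Hypothetical helper: point MediaPlayer at an HTTP URL serving MP3/OGG bytes.
void playStream(String url) throws IOException {
    MediaPlayer player = new MediaPlayer();
    player.setAudioStreamType(AudioManager.STREAM_MUSIC);  // route through the music stream
    player.setDataSource(url);                             // e.g. "http://example.com/MyStream.mp3"
    player.setOnPreparedListener(new MediaPlayer.OnPreparedListener() {
        @Override
        public void onPrepared(MediaPlayer mp) {
            mp.start();                                    // start once enough is buffered
        }
    });
    player.prepareAsync();                                 // buffer in the background
}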

If your architecture is peer-to-peer with your own custom socket code, you can create a "proxy HTTP" server that listens on localhost on a dedicated thread. You point your MediaPlayer instance to your local in-process socket server (e.g. http://localhost:54321/MyStream.mp3). Then you have to implement code to parse the HTTP GET request from MediaPlayer, then proxy the stream bytes between your custom P2P socket protocol and the listeners connected to your local HTTP server. A lot of radio streaming apps do exactly this so as to parse the ICECAST metadata from the MP3 stream. Here's the code I use for my radio streaming app that does this.
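The linked code isn't reproduced here, but a bare-bones sketch of the idea might look like this. The port, the Content-Type and the source of the audio bytes are assumptions, and a real proxy needs proper HTTP parsing, headers and error handling.

import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Hypothetical local proxy: MediaPlayer connects to http://localhost:54321/MyStream.mp3,
// we read (and ignore) its GET request, answer with a minimal HTTP header, then copy
// MP3 bytes from our own source (P2P socket, file, ...) into the response.
public class LocalStreamProxy extends Thread {
    private final InputStream audioSource;   // wherever the MP3 bytes come from (assumed)

    public LocalStreamProxy(InputStream audioSource) {
        this.audioSource = audioSource;
    }

    @Override
    public void run() {
        try {
            ServerSocket server = new ServerSocket(54321);   // MediaPlayer connects to localhost:54321
            Socket client = server.accept();
            byte[] request = new byte[4096];
            client.getInputStream().read(request);           // consume (and ignore) the GET request

            OutputStream out = client.getOutputStream();
            out.write(("HTTP/1.1 200 OK\r\n"
                     + "Content-Type: audio/mpeg\r\n"
                     + "\r\n").getBytes("UTF-8"));

            byte[] buffer = new byte[4096];
            int read;
            while ((read = audioSource.read(buffer)) > 0) {
                out.write(buffer, 0, read);                  // proxy the stream bytes to MediaPlayer
            }

            client.close();
            server.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}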

For the "start midway through the file" scenario, you might find my MP3 Stream Reader class useful. It wraps an InputStream (file, socket stream, etc..) and syncs to the next valid frame from where ever you started from. Just call read_next_chunk to get the next block of audio and its format. MediaPlayer might do most of this heavy lifting for you, so this might not be needed.

Thanks for the response. If I'm understanding you correctly, the process is to run a socket thread at the local address, which accepts the GET request from the MediaPlayer, takes the client address from that, and forwards the stream data from the file to that address? The only thing I'm confused about is the proxy server; how is it built to send the data internally?

On the listening side, MediaPlayer connects to http://localhost:12345. You have a dedicated server thread that listens for connections on port 12345. When the mediaplayer thread connects to your in-process server, you stream audio to it. You "get" the audio from the other side with your own socket protocol.

sockets - Streaming audio from an Android device to another - Stack Ov...

android sockets audio streaming android-mediaplayer

I could be wrong here, but I believe there is only a MediaPlayer.OnInfoListener API available in Java to get information about the content stream being played. I am not sure how helpful that API actually is, though. You might also want to try stream scrapers (that is what I believe they are called) to get stream data and see if there are both audio and video channels, to make a determination.
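A minimal sketch of hooking up that listener, assuming an existing MediaPlayer instance named mediaPlayer; the constants are standard MediaPlayer info codes, though how reliably they distinguish audio-only from video streams is another question.

import android.media.MediaPlayer;

// Inspect MediaPlayer info events; MEDIA_INFO_VIDEO_RENDERING_START only fires when a
// video track actually starts rendering, which can hint that the stream carries video.
mediaPlayer.setOnInfoListener(new MediaPlayer.OnInfoListener() {
    @Override
    public boolean onInfo(MediaPlayer mp, int what, int extra) {
        switch (what) {
            case MediaPlayer.MEDIA_INFO_VIDEO_RENDERING_START:
                // a video frame was rendered, so the stream has a video channel
                break;
            case MediaPlayer.MEDIA_INFO_BUFFERING_START:
            case MediaPlayer.MEDIA_INFO_BUFFERING_END:
                // buffering state changes while streaming
                break;
        }
        return true;  // true = event handled
    }
});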

media player - Android Mediaplayer :: How to detect streaming content ...

android media-player

For live streaming video/audio, http://www.wowza.com/ gives you the best functionality. You have to set up your server in Wowza, and you can't test without that.

For iOS you can broadcast and receive with the demo below, which you can download from here.

I am aware of Wowza, but I need to set up the whole thing on my own server, and Wowza is not free; it's a paid service.

@JayGajjar you are right, it's paid.. :(

Can you guide me on how I can configure it on my server?

It's a long process to configure the server; please follow the forums and other user guides of Wowza. You can start from here: wowza.com/forums/

ffmpeg - Live Streaming App iOS - Stack Overflow

ios ffmpeg http-live-streaming live-streaming

Look to Audio Queue Services for capture and recording. You'd need to come up with a wire protocol to transmit the audio, but the tools to capture and playback or save on either side of the connection can be built using the queue services.

Which iPhone API should I be using for streaming audio? - Stack Overfl...

iphone audio core-audio voip

So here's what I think is happening and also how I think you can fix it.

You're pulling a predefined item out of the iPod (music) library on an iOS device. You are then using an asset reader to collect its buffers, and queue those buffers, where possible, in an AudioQueue.

The problem you are having, I think, is that you are setting the audio queue buffer's input format to Linear Pulse Code Modulation (LPCM - hope I got that right, I might be off on the acronym). The output settings you are passing to the asset reader output are nil, which means that you'll get an output that is most likely NOT LPCM, but is instead aiff or aac or mp3 or whatever the format is of the song as it exists in iOS's media library. You can, however, remedy this situation by passing in different output settings.

// Current code: passing nil gives you samples in whatever format the track is stored in
readerOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track outputSettings:nil];

// Instead, ask the reader to decode to LPCM. channelLayout must be a filled-in
// AudioChannelLayout (note the comment further down: some iOS versions reject
// AVNumberOfChannelsKey / AVChannelLayoutKey here, in which case drop those keys).
AudioChannelLayout channelLayout;
memset(&channelLayout, 0, sizeof(AudioChannelLayout));
channelLayout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;

NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                 [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
                                 [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
                                 [NSNumber numberWithInt:2], AVNumberOfChannelsKey,
                                 [NSData dataWithBytes:&channelLayout length:sizeof(AudioChannelLayout)], AVChannelLayoutKey,
                                 [NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
                                 [NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
                                 [NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
                                 [NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
                                 nil];

output = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track outputSettings:outputSettings];

It's my understanding (per the documentation from Apple) that passing nil as the output settings param gives you samples of the same file type as the original audio track. Even if you have a file that is LPCM, some other settings might be off, which might cause your problems. At the very least, this will normalize all the reader output, which should make things a bit easier to troubleshoot.

The reason why I provided nil as a parameter for AVURLAsset *songAsset = [AVURLAsset URLAssetWithURL:assetURL options:audioReadSettings]; was because, according to the documentation and trial and error, I...

AVAssetReaders do 2 things; read back an audio file as it exists on disk (i.e.: mp3, aac, aiff), or convert the audio into lpcm.

If you pass nil as the output settings, it will read the file back as it exists, and in this you are correct. I apologize for not mentioning that an asset reader will only allow nil or LPCM. I actually ran into that problem myself (it's in the docs somewhere, but requires a bit of diving), but didn't elect to mention it here as it wasn't on my mind at the time. Sooooo... sorry about that?

If you want to know the AudioStreamBasicDescription (ASBD) of the track you are reading before you read it, you can get it by doing this:

AVURLAsset* uasset = [[AVURLAsset URLAssetWithURL:<#assetURL#> options:nil]retain];
AVAssetTrack*track = [uasset.tracks objectAtIndex:0];
CMFormatDescriptionRef formDesc = (CMFormatDescriptionRef)[[track formatDescriptions] objectAtIndex:0];
const AudioStreamBasicDescription* asbdPointer = CMAudioFormatDescriptionGetStreamBasicDescription(formDesc);
//because this is a pointer and not a struct we need to move the data into a struct so we can use it
AudioStreamBasicDescription asbd = {0};
memcpy(&asbd, asbdPointer, sizeof(asbd));
    //asbd now contains a basic description for the track

You can then convert asbd to binary data in whatever format you see fit and transfer it over the network. You should then be able to start sending audio buffer data over the network and successfully play it back with your AudioQueue.

I actually had a system like this working not that long ago, but since I couldn't keep the connection alive when the iOS client device went to the background, I wasn't able to use it for my purpose. Still, if all that work lets me help someone else who can actually use the info, that seems like a win to me.

Wow! It worked! The funny part is that I thought I did this (at least while I was troubleshooting... your exact same code was already in mine, only commented out!). Anyway, just for further clarification: I looked at your AudioChannelLayout stuff and wondered where it was defined, so I found this code:...

But then I got this error: *** -[AVAssetReaderTrackOutput initWithTrack:outputSettings:] AVAssetReaderTrackOutput does not currently support AVNumberOfChannelsKey or AVChannelLayoutKey'.. so I simply commented out the whole channel layout info and it worked like a charm!

hey good stuff man! This made me explore some other options for my current architecture.. and I came up with this other question (stackoverflow.com/questions/12329251/) please take a look!

ios - why is audio coming up garbled when using AVAssetReader with aud...

ios audio streaming

The only way I've found to do this is to use an audio streaming lib (like youtube-audio-stream for Node) and buffer/pipe the audio from server-side.

var express = require('express');
var router = express.Router();

var youtubeStream = require('youtube-audio-stream');

router.get('/stream/:videoId', function (req, res) {
    try {
        youtubeStream(req.params.videoId).pipe(res);
    } catch (exception) {
        res.status(500).send(exception)
    }
});

Then you can create an AudioContext off of it. AudioContext is the magic keyword that you need for visualization with either WebGL or canvas (e.g. pixi.js).

function loadSound() {
  var request = new XMLHttpRequest();
  request.open("GET", "http://localhost:8000/stream/nl6OW07A5q4", true); 
  request.responseType = "arraybuffer"; 

  request.onload = function() {
      var Data = request.response;
      process(Data);
  };

  request.send();
}

var context = new (window.AudioContext || window.webkitAudioContext)(); // Web Audio context used by process()

function process(Data) {
  source = context.createBufferSource(); // Create Sound Source
  context.decodeAudioData(Data, function(buffer){
    source.buffer = buffer;
    source.connect(context.destination);
    source.start(context.currentTime);
  });
}

From there on out it should be easy to apply any of the multiple audio visualization libraries out there to the context.

javascript - Is there anyway to visualize youtube audio from an iframe...

javascript api audio iframe youtube

There is no effective difference between streaming and downloading. They're the same thing. Any difference is purely semantic.

If you wanted to, you could "download" an MP3 from any web server and start playing it while you were downloading it. It just requires that you buffer some of the data and start sending it to your decoding and playback routines right away.

Similarly, even so called "streaming" servers can be downloaded. You just have to save the bytes as they are being sent across the wire to a file.

"Streaming" applications are just apps that are not designed to save the files to disk.

There are a couple of caveats, though. First, if you are streaming "live" audio, such as radio or other types where you don't need 100% reliability, then they stream using UDP. This can still be saved if you want, but it's more packet oriented than stream oriented.

The second is when encryption is used, in which case you can still probably save the file, but it would be useless without the encryption algorithm and keys.

You can stream from IIS or use the "Cassini" server to write your own. It's not hard to do. Plop the files in a folder on the website. Just remember that spaces are either %20 or + in web URLs, like server/john%20Cougar%20Mellencamp.mp3 or server/john+Cougar+Mellencamp.mp3.

wpf - C#: Streaming an Audio file from a Server to a Client - Stack Ov...

c# wpf audio streaming

When the UIBackgroundModes key contains the audio value, the system's media frameworks automatically prevent the corresponding app from being suspended when it moves to the background. As long as it is playing audio or video content or recording audio content, the app continues to run in the background. However, if recording or playback stops, the system suspends the app.

You can use any of the system audio frameworks to work with background audio content, and the process for using those frameworks is unchanged.

This means that iOS should recognize that you're playing audio through Core Audio, and keep your app unsuspended, as long as you've correctly configured your app for playing audio in the background.

Because your app is not suspended while playing media files, callbacks operate normally while your app is in the background. In your callbacks, though, you should do only the work necessary to provide data for playback. For example, a streaming audio app would need to download the music stream data from its server and push the current audio samples out for playback. Apps should not perform any extraneous tasks that are unrelated to playback.

You should be able to operate normally as long as your app is still playing audio, and is allowed to do what it needs to in order to continue playing audio. This means that you should be able to continue to use MPC in the background to receive the audio data and play it.

Be sure to read the entire documentation on the subject, especially regarding Audio Sessions.

You are completely right, really thanks! Hmm, I need to wait 19 hours to give you the points :D!

iphone - Multipeer Connectivity audio streaming stop work on backgroun...

iphone objective-c ios7 audio-streaming multipeer-connectivity