iPhone OS offers a rich set of tools for working with sound in your application. These tools are arranged into frameworks according to the features they provide. The Core Audio framework, which is a peer of the other audio frameworks, provides data types used by all Core Audio services.
This section provides guidance on getting started with implementing a wide range of audio features.
Be sure to read the next section, “The Basics: Hardware Codecs, Audio Formats, and Audio Sessions,” for critical information on how audio works on an iPhone OS-based device. Also read “Best Practices for iPhone Audio”, which offers guidelines and lists the audio and file formats to use for best performance and best user experience.
When you’re ready to dig deeper, the iPhone Dev Center contains guides, reference books, sample code, and more. For tips on how to perform common audio tasks, see Audio & Video Coding How-To's. For in-depth explanations of audio development in iPhone OS, see Core Audio Overview, Audio Queue Services Programming Guide, and Audio Session Programming Guide.
The Basics: Hardware Codecs, Audio Formats, and Audio Sessions
To get oriented toward iPhone audio development, it’s very helpful to understand a few things about the hardware and software architecture of iPhone OS-based devices.
iPhone Audio Hardware Codecs
iPhone OS applications can use a wide range of audio data formats. Starting in iPhone OS 3.0, most of these formats can use software-based encoding and decoding. You can simultaneously play multiple sounds in all formats, although for performance reasons you should consider which format is best in a given scenario. Hardware decoding generally entails less of a performance impact than software decoding.
The following iPhone OS audio formats can employ hardware decoding for playback: AAC, ALAC (Apple Lossless), and MP3.
The device can play only a single instance of one of these formats at a time through hardware. For example, if you are playing a stereo MP3 sound, a second simultaneous MP3 sound will use software decoding. Similarly, you cannot simultaneously play an AAC and an ALAC sound using hardware. If the iPod application is playing an AAC sound in the background, your application plays AAC, ALAC, and MP3 audio using software decoding.
To play multiple sounds with best performance, or to efficiently play sounds while the iPod is playing in the background, use linear PCM (uncompressed) or IMA4 (compressed) audio.
To learn how to check which hardware and software codecs are available on a device, read the discussion for the kAudioFormatProperty_HardwareCodecCapabilities constant in Audio Format Services Reference.
Audio Playback and Recording Formats
Here are the audio playback formats supported in iPhone OS:
Here are the audio recording formats supported in iPhone OS:
The following list summarizes how iPhone OS supports audio formats for single or multiple playback:
The single hardware path for AAC, MP3, and ALAC playback has implications for “play along” style applications, such as a virtual piano. If the user is playing a sound in one of these three formats in the iPod application, then your application, to play along over that audio, will employ software decoding.
Audio Sessions
Core Audio’s audio session interface (described in Audio Session Services Reference) lets your application define its general audio behavior and work well within the larger audio context of the device it’s running on. The behavior you can influence includes such things as:
The larger audio context includes changes made by users, such as when they plug in headsets, and events such as Clock and Calendar alarms and incoming phone calls. By using the audio session, you can respond appropriately to such events.
AVAudioSession Class Reference and AVAudioSessionDelegate Protocol Reference describe a streamlined Objective-C interface for managing the audio session. To configure the audio session for interruptions, you employ the C-based Audio Session Services directly; its interface is described in Audio Session Services Reference. You can mix and match code from both interfaces in your application.
The audio session comes with some default behavior that you can use to get started in development. However, except for certain special cases, the default behavior is unsuitable for a shipping application that uses audio. By configuring and using the audio session, you can express your audio intentions and respond to OS-level audio decisions.
For example, when using the default audio session, audio in your application stops when the Auto-Lock period times out and the screen locks. If you want to ensure that playback continues with the screen locked, you include the following lines in your application’s initialization code:
[[AVAudioSession sharedInstance] setCategory: AVAudioSessionCategoryPlayback error: nil];
[[AVAudioSession sharedInstance] setActive: YES error: nil];
The AVAudioSessionCategoryPlayback category ensures that playback continues when the screen locks. Activating the audio session puts the specified category into effect.
How you handle the interruption caused by an incoming phone call or clock alarm depends on the audio technology you are using.
Handling audio interruptions
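If you play sound with the AVAudioPlayer class, for example, you can respond to interruptions in your delegate object. The following is a minimal sketch, assuming a playbackWasInterrupted flag (a hypothetical BOOL declared in the class interface):

- (void) audioPlayerBeginInterruption: (AVAudioPlayer *) player {
    // The system has already paused playback; record that fact so you can resume later.
    playbackWasInterrupted = YES;
}

- (void) audioPlayerEndInterruption: (AVAudioPlayer *) player {
    // Resume playback only if the sound was playing when the interruption arrived.
    if (playbackWasInterrupted) {
        [player play];
        playbackWasInterrupted = NO;
    }
}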
Playing Audio
This section introduces you to playing sounds in iPhone OS using iPod library access, System Sound Services, Audio Queue Services, the AV Foundation framework, and OpenAL.
Playing Media Items with iPod Library Access
Starting in iPhone OS 3.0, iPod library access lets your application play a user’s songs, audio books, and audio podcasts. The API design makes basic playback very simple while also supporting advanced searching and playback control.
Your application has two ways to retrieve items. The media item picker is an easy-to-use, pre-packaged view controller that behaves like the built-in iPod application’s music selection interface. For many applications, this is sufficient. If the picker doesn’t provide the specialized access control that you want, the media query interface will. It supports predicate-based specification of items from the iPod library.
Using iPod library access
Whichever way you retrieve the media items, you then play them using the music player provided by this API.
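As a sketch of the query-based approach, the following retrieves every song in the user’s library and plays the result (the query shown, songsQuery, is the simplest possible one; predicate-based queries refine it):

#import <MediaPlayer/MediaPlayer.h>

// Retrieve all songs from the iPod library and hand them to the music player.
MPMediaQuery *everySong = [MPMediaQuery songsQuery];
MPMusicPlayerController *musicPlayer = [MPMusicPlayerController applicationMusicPlayer];
[musicPlayer setQueueWithQuery: everySong];
[musicPlayer play];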
Playing Short Sounds or Invoking Vibration Using System Sound Services
To play user-interface sound effects (such as button clicks) or alert sounds, or to invoke vibration on devices that support it, use System Sound Services. You can find sample code in the SysSound sample in the iPhone Dev Center.
The AudioServicesPlaySystemSound function lets you very simply play short sound files. The simplicity carries with it a few restrictions. Your sound files must be:
In addition, when you use the AudioServicesPlaySystemSound function:
The similar AudioServicesPlayAlertSound function plays a short sound as an alert. If a user has configured their device to vibrate in Ring Settings, calling this function invokes vibration in addition to playing the sound file.
To play a sound with the AudioServicesPlaySystemSound or AudioServicesPlayAlertSound function, you first create a sound ID object, as shown in the following listing.
Creating a sound ID object and playing a system sound
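A minimal sketch of both steps, assuming a short sound file named tap.aif in the application bundle (the file and variable names are illustrative):

#import <AudioToolbox/AudioToolbox.h>

// Create a sound ID object for a short sound file in the application bundle.
NSString *soundPath = [[NSBundle mainBundle] pathForResource: @"tap" ofType: @"aif"];
NSURL *soundURL = [NSURL fileURLWithPath: soundPath];
SystemSoundID soundID;
AudioServicesCreateSystemSoundID ((CFURLRef) soundURL, &soundID);

// Play the sound identified by the sound ID object.
AudioServicesPlaySystemSound (soundID);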
In typical use, which includes playing a sound occasionally or repeatedly, retain the sound ID object until your application quits. If you know that you will use a sound only once—for example, in the case of a startup sound—you can destroy the sound ID object immediately after playing the sound, freeing memory.
Applications running on iPhone OS–based devices that support vibration can trigger that feature using System Sound Services. You specify the vibrate option with the kSystemSoundID_Vibrate identifier. To trigger it, use the AudioServicesPlaySystemSound function, as shown in the following listing.
Triggering vibration
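A minimal example:

AudioServicesPlaySystemSound (kSystemSoundID_Vibrate);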
If your application is running on an iPod touch, this code does nothing.
Playing Sounds Easily with the AVAudioPlayer Class
The AVAudioPlayer class provides a simple Objective-C interface for playing sounds. If your application does not require stereo positioning or precise synchronization, and if you are not playing audio captured from a network stream, Apple recommends that you use this class for playback.
Using an audio player you can:
The AVAudioPlayer class lets you play sound in any audio format available in iPhone OS, as described in “Audio Playback and Recording Formats”. For a complete description of this class’s interface, see AVAudioPlayer Class Reference.
To configure an audio player for playback, you assign a sound file to it, prepare it to play, and designate a delegate object. The code in the following listing would typically go into an initialization method of the controller class for your application.
Configuring an AVAudioPlayer object
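A sketch of such an initialization method, assuming a sound file named sound.caf in the application bundle and a player property declared in the class interface:

NSString *soundPath = [[NSBundle mainBundle] pathForResource: @"sound" ofType: @"caf"];
NSURL *soundURL = [NSURL fileURLWithPath: soundPath];

NSError *error = nil;
self.player = [[[AVAudioPlayer alloc] initWithContentsOfURL: soundURL
                                                      error: &error] autorelease];

[self.player prepareToPlay];     // preloads buffers and acquires the audio hardware
[self.player setDelegate: self]; // this controller implements the delegate methods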
You use a delegate object (which can be your controller object) to handle interruptions and to update the user interface when a sound has finished playing. The delegate methods for the AVAudioPlayer class are described in AVAudioPlayerDelegate Protocol Reference. The following listing shows a simple implementation of one delegate method. This code updates the title of a Play/Pause toggle button when a sound has finished playing.
Implementing an AVAudioPlayer delegate method
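A minimal implementation along these lines, assuming a button outlet (a hypothetical name) for the Play/Pause control:

- (void) audioPlayerDidFinishPlaying: (AVAudioPlayer *) player
                        successfully: (BOOL) completed {
    if (completed == YES) {
        // The sound reached its end; offer to play it again.
        [self.button setTitle: @"Play" forState: UIControlStateNormal];
    }
}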
To play, pause, or stop an AVAudioPlayer object, call one of its playback control methods. You can test whether or not playback is in progress by using the playing property. The following listing shows a basic play/pause toggle method that controls playback and updates the title of a UIButton object.
Controlling an AVAudioPlayer object
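A basic toggle method along these lines (the button and player names are illustrative):

- (IBAction) playOrPause: (id) sender {
    if (self.player.playing) {
        // Pause playback and let the button offer to resume it.
        [self.player pause];
        [self.button setTitle: @"Play" forState: UIControlStateNormal];
    } else {
        // Start (or resume) playback and let the button offer to pause it.
        [self.player play];
        [self.button setTitle: @"Pause" forState: UIControlStateNormal];
    }
}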
The AVAudioPlayer class uses the Objective-C declared properties feature for managing information about a sound (such as the playback point within the sound’s timeline) and for accessing playback options (such as volume and looping). For example, you set the playback volume for an audio player as shown here:
[self.player setVolume: 1.0]; // available range is 0.0 through 1.0
Playing Sounds with Control Using Audio Queue Services
Audio Queue Services adds playback capabilities beyond those available with the AVAudioPlayer class. Using Audio Queue Services for playback lets you:
Audio Queue Services lets you play sound in any audio format available in iPhone OS, as described in “Audio Playback and Recording Formats”. You also use this technology for recording, as explained in “Recording Audio”.
For detailed information on using this technology, see Audio Queue Services Programming Guide and Audio Queue Services Reference. For sample code, see the SpeakHere sample in the iPhone Dev Center. (For a Mac OS X implementation, see the Audio Queue Tools project available in the Core Audio SDK. When you install the Xcode tools in Mac OS X, the Audio Queue Tools project is available at /Developer/Examples/CoreAudio/SimpleSDK/AudioQueueTools.)
Creating an Audio Queue Object
To create an audio queue object for playback, you perform three steps: describe the format of the audio data you intend to play, define a callback function that fills audio queue buffers with that data, and instantiate the audio queue using the AudioQueueNewOutput function.
The following listing illustrates these steps using ANSI C. The SpeakHere sample project shows these same steps in the context of an Objective-C program.
Creating an audio queue object
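A compressed sketch of these steps in ANSI C; the callback body and the code that fills in the format description are assumptions left to your application:

#include <AudioToolbox/AudioToolbox.h>

// Step 2: a callback that refills audio queue buffers as the queue drains them.
static void audioQueueOutputCallback (void *inUserData,
                                      AudioQueueRef inAQ,
                                      AudioQueueBufferRef inBuffer) {
    // Fill inBuffer with audio data (for example, by reading from a file with
    // Audio File Services), then re-enqueue it with AudioQueueEnqueueBuffer.
}

// Step 1: a format description for the audio data, filled in elsewhere.
AudioStreamBasicDescription audioFormat;
AudioQueueRef               queue;

// Step 3: instantiate the playback audio queue.
AudioQueueNewOutput (
    &audioFormat,              // format of the audio data to play
    audioQueueOutputCallback,  // the buffer-filling callback above
    NULL,                      // user data passed to the callback
    CFRunLoopGetCurrent (),    // run loop on which to invoke the callback
    kCFRunLoopCommonModes,     // run loop modes in which the callback may fire
    0,                         // reserved; must be 0
    &queue                     // on output, the new audio queue object
);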
Controlling the Playback Level
Audio queue objects give you two ways to control playback level.
To set playback level directly, use the AudioQueueSetParameter function with the kAudioQueueParam_Volume parameter, as shown in the following listing. The level change takes effect immediately.
Setting the playback level directly
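For example, with queue being an AudioQueueRef you created earlier:

// Set the queue's playback gain; the available range is 0.0 through 1.0.
AudioQueueSetParameter (queue, kAudioQueueParam_Volume, 1.0);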
You can also set playback level for an audio queue buffer by using the AudioQueueEnqueueBufferWithParameters function. This lets you assign audio queue settings that are, in effect, carried by an audio queue buffer as you enqueue it. Such changes take effect when the buffer begins playing.
In both cases, level changes for an audio queue remain in effect until you change them again.
Indicating Playback Level
You can obtain the current playback level from an audio queue object by enabling metering (with the kAudioQueueProperty_EnableLevelMetering property) and then querying the kAudioQueueProperty_CurrentLevelMeter property, which returns an array of AudioQueueLevelMeterState structures, one per channel.
The AudioQueueLevelMeterState structure
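The structure holds the metering values for one audio channel:

typedef struct AudioQueueLevelMeterState {
    Float32 mAveragePower;  // the channel's average RMS power
    Float32 mPeakPower;     // the channel's peak power
} AudioQueueLevelMeterState;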
Playing Multiple Sounds Simultaneously
To play multiple sounds simultaneously, create one playback audio queue object for each sound. For each audio queue, schedule the first buffer of audio to start at the same time using the AudioQueueEnqueueBufferWithParameters function.
Audio format is critical when you play sounds simultaneously on an iPhone OS–based device. To play simultaneous sounds, you use the linear PCM (uncompressed) audio format or certain compressed audio formats, as described in “Audio Playback and Recording Formats”.
Playing Sounds with Positioning Using OpenAL
The open source OpenAL audio API, available in iPhone OS through the OpenAL framework, provides an interface optimized for positioning sounds in a stereo field during playback. Playing, positioning, and moving sounds is simple when you use OpenAL, and works the same way as it does on other platforms. OpenAL also lets you mix sounds. OpenAL uses Core Audio’s I/O unit for playback, resulting in the lowest latency.
For all of these reasons, OpenAL is your best choice for playing sound effects in game applications on iPhone OS–based devices. However, OpenAL is also a good choice for general iPhone OS application audio playback needs.
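A sketch of positioning and playing a single source, assuming you have already created an OpenAL device, context, and a source with a filled buffer attached:

#include <OpenAL/al.h>

ALuint source;  // created earlier with alGenSources and given a buffer

// Place the source one unit to the listener's right, then play it.
alSource3f (source, AL_POSITION, 1.0f, 0.0f, 0.0f);
alSourcePlay (source);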
OpenAL 1.1 support in iPhone OS is built on top of Core Audio. For more information, see OpenAL FAQ for iPhone OS. For OpenAL documentation, see the OpenAL website at http://openal.org. For sample code showing you how to play OpenAL audio, see oalTouch.
Recording Audio
Core Audio provides support in iPhone OS for recording audio using the AVAudioRecorder class and Audio Queue Services. These interfaces do the work of connecting to the audio hardware, managing memory, and employing codecs as needed. You can record audio in any of the formats listed in “Audio Playback and Recording Formats.” This section introduces you to recording sounds in iPhone OS using the AVAudioRecorder class and Audio Queue Services.
Recording with the AVAudioRecorder Class
The easiest way to record sound in iPhone OS is with the AVAudioRecorder class, described in AVAudioRecorder Class Reference. This class provides a highly streamlined Objective-C interface that makes it easy to provide sophisticated features like pausing and resuming recording and handling audio interruptions. At the same time, you retain complete control over the recording format.
To record, you provide a sound file URL, set up the audio session, and configure the recording object. Application launch is a good time to do some of this setup, as shown in the following listing. Variables such as soundFileURL and recording are declared in the class interface.
Setting up the audio session and the sound file URL
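A sketch of that launch-time setup; the file name and the choice of the record category are assumptions for illustration:

// Point soundFileURL (declared in the class interface) at a writable location.
NSString *documentsDir = [NSSearchPathForDirectoriesInDomains (
    NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex: 0];
NSString *filePath =
    [documentsDir stringByAppendingPathComponent: @"recording.caf"];
soundFileURL = [[NSURL fileURLWithPath: filePath] retain];

// Configure and activate an audio session suitable for recording.
[[AVAudioSession sharedInstance] setCategory: AVAudioSessionCategoryRecord
                                       error: nil];
[[AVAudioSession sharedInstance] setActive: YES error: nil];

recording = NO;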
You would also add the AVAudioSessionDelegate, AVAudioRecorderDelegate, and AVAudioPlayerDelegate (if also supporting playback) protocol names to the interface declaration. Then, you could implement a record method as shown in the following listing.
A record/stop method using the AVAudioRecorder class
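A minimal record/stop toggle along these lines; the soundRecorder property and the recorder settings shown are illustrative choices, not the only ones:

- (IBAction) recordOrStop: (id) sender {
    if (recording) {
        [soundRecorder stop];
        recording = NO;
        self.soundRecorder = nil;
    } else {
        // IMA4 in a CAF file keeps recordings small with little CPU cost.
        NSDictionary *settings = [NSDictionary dictionaryWithObjectsAndKeys:
            [NSNumber numberWithFloat: 44100.0],             AVSampleRateKey,
            [NSNumber numberWithInt: kAudioFormatAppleIMA4], AVFormatIDKey,
            [NSNumber numberWithInt: 1],                     AVNumberOfChannelsKey,
            nil];

        AVAudioRecorder *newRecorder = [[AVAudioRecorder alloc]
            initWithURL: soundFileURL settings: settings error: nil];
        self.soundRecorder = [newRecorder autorelease];

        soundRecorder.delegate = self;
        [soundRecorder record];
        recording = YES;
    }
}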
Recording with Audio Queue Services
To record audio with Audio Queue Services, your application configures the audio session, instantiates a recording audio queue object, and provides a callback function. The callback stores the audio data in memory for immediate use or writes it to a file for long-term storage.
Recording takes place at a system-defined level in iPhone OS. The system takes input from the audio source the user has chosen: the built-in microphone or, if connected, the headset microphone or other input source. Just as with playback, you can obtain the current recording audio level from an audio queue object by querying its kAudioQueueProperty_CurrentLevelMeter property, as described in “Indicating Playback Level”.
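Instantiating a recording audio queue parallels the playback case; a sketch in ANSI C, with the callback body and format setup left as assumptions:

// The callback receives each buffer as it is filled with recorded audio.
static void recordingCallback (void *inUserData,
                               AudioQueueRef inAQ,
                               AudioQueueBufferRef inBuffer,
                               const AudioTimeStamp *inStartTime,
                               UInt32 inNumPackets,
                               const AudioStreamPacketDescription *inPacketDesc) {
    // Store the data in memory or write it to a file, then re-enqueue the
    // buffer with AudioQueueEnqueueBuffer so recording can continue.
}

AudioStreamBasicDescription recordFormat;  // filled in elsewhere
AudioQueueRef               recordQueue;

AudioQueueNewInput (&recordFormat, recordingCallback, NULL,
                    NULL,                   // use an internal thread for the callback
                    kCFRunLoopCommonModes,  // run loop modes
                    0,                      // reserved; must be 0
                    &recordQueue);
AudioQueueStart (recordQueue, NULL);        // NULL means start immediately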
For detailed examples of how to use Audio Queue Services to record audio, see Recording Audio in Audio Queue Services Programming Guide. For sample code, see the SpeakHere sample in the iPhone Dev Center.
Parsing Streamed Audio
To play streamed audio content, such as from a network connection, use Audio File Stream Services in concert with Audio Queue Services. Audio File Stream Services parses audio packets and metadata from common audio file container formats in a network bitstream. You can also use it to parse packets and metadata from on-disk files.
In iPhone OS, you can parse the same audio file and bitstream formats that you can in Mac OS X, as follows:
Having retrieved audio packets, you can play back in any of the formats supported in iPhone OS, as listed in “Audio Playback and Recording Formats”.
For best performance, network streaming applications should use data from Wi-Fi connections only. iPhone OS lets you determine which networks are reachable and available through its System Configuration framework and its SCNetworkReachability.h interfaces. For sample code, see the Reachability sample in the iPhone Dev Center.
To connect to a network stream, you can use interfaces from Core Foundation in iPhone OS, such as the CFHTTPMessage interface, described in CFHTTPMessage Reference. You parse the network packets to recover audio packets using Audio File Stream Services. You then buffer the audio packets and send them to a playback audio queue object.
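The parsing side reduces to an open call plus repeated parse calls; a sketch with hypothetical callback names:

#include <AudioToolbox/AudioToolbox.h>

// Invoked when the parser discovers a property, such as the data format.
static void propertyCallback (void *inClientData,
                              AudioFileStreamID inStream,
                              AudioFileStreamPropertyID inPropertyID,
                              UInt32 *ioFlags) { /* note the format, etc. */ }

// Invoked with complete audio packets, ready to buffer for a playback queue.
static void packetsCallback (void *inClientData,
                             UInt32 inNumberBytes,
                             UInt32 inNumberPackets,
                             const void *inInputData,
                             AudioStreamPacketDescription *inPacketDescriptions) {
    // Copy the packets into audio queue buffers and enqueue them.
}

AudioFileStreamID parser;
AudioFileStreamOpen (NULL, propertyCallback, packetsCallback,
                     0,  // file type hint; 0 if the container type is unknown
                     &parser);

// As each chunk of network data arrives (dataBuffer and byteCount are yours):
AudioFileStreamParseBytes (parser, byteCount, dataBuffer, 0);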
Audio File Stream Services relies on interfaces from Audio File Services, such as the AudioFramePacketTranslation structure and the AudioFilePacketTableInfo structure. These are described in Audio File Services Reference.
For more information on using streams, refer to Audio File Stream Services Reference. For sample code, see the AudioFileStream sample project located in the <Xcode>/Examples/CoreAudio/Services/ directory, where <Xcode> is the path to your developer tools directory.
Audio Unit Support in iPhone OS
iPhone OS provides a set of audio plug-ins, known as audio units, that you can use in any application. The interfaces in the Audio Unit framework let you open, connect, and use these audio units. You can also define custom audio units and use them inside your application. Because you must statically link custom audio unit code into your application, audio units that you develop cannot be used by other applications in iPhone OS.
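As a sketch, you locate and open one of these system audio units (here the Remote I/O unit) through the framework’s component interface:

#include <AudioUnit/AudioUnit.h>

// Describe the audio unit to find: Apple's Remote I/O unit.
AudioComponentDescription description = {0};
description.componentType         = kAudioUnitType_Output;
description.componentSubType      = kAudioUnitSubType_RemoteIO;
description.componentManufacturer = kAudioUnitManufacturer_Apple;

// Find the matching component, then open and initialize an instance.
AudioComponent component = AudioComponentFindNext (NULL, &description);
AudioUnit      ioUnit;
AudioComponentInstanceNew (component, &ioUnit);
AudioUnitInitialize (ioUnit);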
The following table lists the audio units provided in iPhone OS.
System-supplied audio units
For more information on using system audio units, see System Audio Unit Access Guide.
Best Practices for iPhone Audio
Tips for Manipulating Audio
The following table lists some basic tips to remember when manipulating audio content in iPhone OS.
Audio tips
Preferred Audio Formats in iPhone OS
For uncompressed (highest quality) audio, use 16-bit, little endian, linear PCM audio data packaged in a CAF file. You can convert an audio file to this format in Mac OS X using the afconvert command-line tool.
/usr/bin/afconvert -f caff -d LEI16 {INPUT} {OUTPUT}
The afconvert tool lets you convert to a wide range of audio data formats and file types. See the afconvert man page, and enter afconvert -h at a shell prompt, for more information.
For compressed audio when playing one sound at a time, and when you don’t need to play audio simultaneously with the iPod application, use the AAC format packaged in a CAF or m4a file.
For less memory usage when you need to play multiple sounds simultaneously, use IMA4 (IMA/ADPCM) compression. This reduces file size but entails minimal CPU impact during decompression. As with linear PCM data, package IMA4 data in a CAF file.