Apple introduced the newest version of its flagship mobile operating system, iOS 7, in September 2013. The biggest change for users was a completely revamped UI, using brighter colors, flat designs, and a complete move away from the skeuomorphic interfaces that had existed since the introduction of the original iPhone in 2007.

For developers, iOS 7 includes hundreds of new and modified APIs, many tailored to take advantage of the hardware capabilities of the new iPhone 5s.

And if you are a C# developer, you don’t have to feel left out.

Xamarin provides a set of developer tools that allow C# developers to directly access native iOS APIs in order to write native apps, while still using their deep knowledge of C# and the .NET core framework.

I’ll examine five of these new APIs and how to use them in Xamarin:

  • Beacons
  • Text to Speech
  • Maps
  • Barcodes
  • Background Transfers

Beacons

Beacons are a new technology that relies on Bluetooth Low Energy transmitters to allow devices to communicate their presence to other devices within range. One device acts as a beacon, transmitting a signal that other devices can detect and use to estimate their distance from the beacon.

Imagine setting your iPhone up as a beacon on your nightstand, and having your teenager’s device register its presence when he comes in late, or using standalone beacons in a crowded convention center to help users find room locations.

Let’s look at an example of setting a beacon and a receiver using Xamarin.

First, create a new CLBeaconRegion, assign it a UUID, and tell it what kinds of notifications you want to receive.

guid = new NSUuid ("c5cf54e0-6dd8-45e9-91a3-a8cda2f41120");
bRegion = new CLBeaconRegion (guid, "beacon");
bRegion.NotifyEntryStateOnDisplay = true;
bRegion.NotifyOnEntry = true;
bRegion.NotifyOnExit = true;

Next, ask your region to return a peripheral data dictionary based on an input power (in dB) that you’ll use to create the beacon. You’ll also create a delegate and a CBPeripheralManager object and use the StartAdvertising() method to start the beacon.

var power = new NSNumber (-59);
NSMutableDictionary peripheralData = bRegion.GetPeripheralData (power);
pDelegate = new BTPDelegate ();
pManager = new CBPeripheralManager (pDelegate, DispatchQueue.DefaultGlobalQueue);
pManager.StartAdvertising (peripheralData);

Now that you’ve created and started the beacon, you need to create a client class that looks for the beacon.

The next snippet shows how to do this. You’ll create a CLLocationManager object and assign a handler for the RegionEntered event. If the identifier of the Region matches the one you used in the Beacon, you’ll use a LocalNotification to alert the user that you’ve entered the beacon’s region.

locManager = new CLLocationManager ();
locManager.RegionEntered += (object s, CLRegionEventArgs ea) => {
  if (ea.Region.Identifier == "beacon") {
    var notification = new UILocalNotification () {
      AlertBody = "Entering beacon region!"
    };
    UIApplication.SharedApplication.PresentLocalNotificationNow (notification);
  }
};

You can also tell the user how far (in relative terms) they are from the beacon, which can help guide them in the correct direction if they are within range but heading the wrong way.

Finally, assign a handler to the DidRangeBeacons event, and based on the value of the CLProximity enum, you can use a visual indicator (in this case, background color) to let the user know if he’s getting closer or farther away from the beacon.
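A minimal sketch of that ranging handler, assuming the CLLocationManager and beacon region from the earlier snippets and that View refers to the current view controller’s view, might look like this:

```csharp
// Sketch only: locManager, bRegion, and View are assumed from the
// surrounding samples. Beacons are returned ordered by proximity,
// so the first element is the closest one.
locManager.DidRangeBeacons += (object s, CLRegionBeaconsRangedEventArgs ea) => {
  if (ea.Beacons.Length == 0)
    return;
  switch (ea.Beacons [0].Proximity) {
  case CLProximity.Immediate:
    View.BackgroundColor = UIColor.Green;
    break;
  case CLProximity.Near:
    View.BackgroundColor = UIColor.Yellow;
    break;
  case CLProximity.Far:
    View.BackgroundColor = UIColor.Red;
    break;
  default: // CLProximity.Unknown
    View.BackgroundColor = UIColor.Gray;
    break;
  }
};
locManager.StartRangingBeacons (bRegion);
```

Ranging fires roughly once per second while any matching beacon is in range, so a simple visual cue like this updates smoothly as the user moves.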

Text to Speech

Like many developers of my generation, I first learned to use a computer and do rudimentary programming on the Commodore 64. One of the applications I toyed with was SAM (Software Automatic Mouth), the first commercial software-based speech synthesizer. Thirty years later, Apple has included APIs in iOS 7 that allow you to replicate the experience of making a computer talk to you.

Doing this takes only a couple of lines of code, as shown below. First, create an instance of the AVSpeechSynthesizer class, and then an AVSpeechUtterance initialized with a text string. You can specify different voices based on locale strings, and also adjust the rate and pitch of the voice to match your preference. Once you have everything set up to your liking, call the SpeakUtterance method and pass in the AVSpeechUtterance object.

var speechSynthesizer = new AVSpeechSynthesizer ();
var speechUtterance = new AVSpeechUtterance ("Xamarin Rocks!");
speechSynthesizer.SpeakUtterance (speechUtterance);
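The voice, rate, and pitch adjustments mentioned above can be sketched like this; the locale string and numeric values here are illustrative assumptions, not recommendations:

```csharp
// Sketch only: the locale, rate, and pitch values are arbitrary
// examples; tune them to taste.
var utterance = new AVSpeechUtterance ("Xamarin Rocks!") {
  Voice = AVSpeechSynthesisVoice.FromLanguage ("en-AU"),
  Rate = AVSpeechUtterance.MaximumSpeechRate / 4,
  PitchMultiplier = 1.2f
};
speechSynthesizer.SpeakUtterance (utterance);
```

Rate is bounded by the MinimumSpeechRate and MaximumSpeechRate constants on AVSpeechUtterance, so expressing it as a fraction of the maximum keeps the value valid across devices.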

Maps

The original version of iOS included mapping features using Google’s Maps technology. Starting with iOS 6, Apple replaced Google’s Maps API with their own implementation. In iOS 7, Apple has enhanced Maps with several new features, including 3D projections that also display individual buildings on a map.

Using these new 3D Maps only takes a few lines of code:

map = new MKMapView (this.View.Bounds);
var target = new CLLocationCoordinate2D (29, -95);
// Offset the eye coordinate slightly from the target so the camera
// views the scene at an angle rather than straight down.
var viewPoint = new CLLocationCoordinate2D (28.99, -95.01);
map.ShowsBuildings = true;
map.PitchEnabled = true;
var camera = MKMapCamera.CameraLookingAtCenterCoordinate (target, viewPoint, 500);
map.Camera = camera;

First, create a new MKMapView, and define two coordinates: one for the center of the map, and one for the viewpoint. Then tell the MKMapView that you want to include Buildings and also display the maps in 3D by setting the PitchEnabled property. Finally, create a new MKMapCamera, passing in the target and viewpoint coordinates, and a third argument that represents the elevation of the camera viewpoint.

Note that the buildings are only shown on a device and don’t display when running in the simulator.

Barcodes

Using the iOS camera to scan barcodes and QR codes has long been possible with the use of external libraries like RedLaser and ZXing. With iOS 7, Apple now includes barcode scanning capabilities in a native API. You can also easily generate QR Codes from within your app.

Barcode scanning with the iOS 7 API is not as straightforward as using some of the other new APIs, but it isn’t difficult.

To start scanning, create a new AVCaptureSession and define input and output options for it. Next, tell it what types of images you’re looking for using constants from the AVMetadataObject class. In this sample, you’ll look for QR codes and EAN13 barcodes, which are commonly used as UPC codes on consumer products. You will also assign a delegate to the metadata output.

session = new AVCaptureSession ();
var camera = AVCaptureDevice.DefaultDeviceWithMediaType (AVMediaType.Video);
var input = AVCaptureDeviceInput.FromDevice (camera);
session.AddInput (input);
output = new AVCaptureMetadataOutput ();
// metadataDelegate is the delegate instance shown in Listing 1.
output.SetDelegate (metadataDelegate, DispatchQueue.MainQueue);
session.AddOutput (output);
output.MetadataObjectTypes = new NSString[] {
  AVMetadataObject.TypeQRCode,
  AVMetadataObject.TypeEAN13Code
};

In the Delegate class (Listing 1) you’ll process each AVMetadataObject found by the scanner, and raise an event that the app can respond to by displaying the data to the user.
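The shape of that delegate can be sketched roughly as follows; Listing 1 has the full version, and the ScanResultReceived event name here is an assumption for illustration:

```csharp
// Rough sketch of the metadata delegate; everything except the
// AVFoundation override signature is an assumed name.
public class MetadataDelegate : AVCaptureMetadataOutputObjectsDelegate
{
  public event EventHandler<string> ScanResultReceived;

  public override void DidOutputMetadataObjects (
    AVCaptureMetadataOutput captureOutput,
    AVMetadataObject[] metadataObjects,
    AVCaptureConnection connection)
  {
    foreach (var metadata in metadataObjects) {
      var readable = metadata as AVMetadataMachineReadableCodeObject;
      if (readable != null && ScanResultReceived != null)
        ScanResultReceived (this, readable.StringValue);
    }
  }
}
```

Each scanned code arrives as an AVMetadataMachineReadableCodeObject, whose StringValue property holds the decoded text.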

QR Code generation is made possible by a new Core Image filter. To generate the image, create an instance of CIQRCodeGenerator and assign the text you want to encode in the Message property. The CorrectionLevel property tells the generator how much error correction information to include when encoding. A higher level creates a larger image, but reduces the chances that someone scanning the image will encounter an error. The default level is “M” with a 15% correction level. The other possible values are “H,” “Q,” and “L,” representing 30%, 25%, and 7%.

After you generate the code, you need to transform it from a CIImage into a UIImage so you can display it in a UIImageView.

var gen = new CIQRCodeGenerator () {
  Message = NSData.FromString (txtQR.Text),
  CorrectionLevel = "M"
};

var context = CIContext.FromOptions (null);
var cgimage = context.CreateCGImage (gen.OutputImage,
  gen.OutputImage.Extent);
qrImageView.Image = UIImage.FromImage (cgimage);

Background Transfers

iOS 7 introduces several new multitasking APIs that allow you to perform long-running data transfers in the background or periodically check a data source for new data and notify your app.

To use the Background Transfer API, you first need to register a completion handler in your AppDelegate:

public NSAction BackgroundSessionCompletionHandler {
  get; set;
}

public override void HandleEventsForBackgroundUrl (
  UIApplication application, string sessionIdentifier,
  NSAction completionHandler)
{
  BackgroundSessionCompletionHandler = completionHandler;
}

To initiate a transfer, create an NSUrlSession, passing in a Delegate class, and then call CreateDownloadTask, passing in a URL to the resource you want to download. Finally, call the newly created task’s Resume method.

using (var configuration = NSUrlSessionConfiguration.
  BackgroundSessionConfiguration (Identifier)) {
  session = NSUrlSession.FromConfiguration
    (configuration, new UrlSessionDelegate (this), null);
}

using (var nsUrl = NSUrl.FromString (url))
using (var request = NSUrlRequest.FromUrl (nsUrl)) {
  downloadTask = session.CreateDownloadTask (request);
  downloadTask.Resume ();
}

The Delegate class for the session contains several methods you can use for monitoring the progress of the download and reporting back to the user.
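A minimal sketch of such a delegate might look like this; the DownloadController type and its UpdateProgress method are assumptions standing in for whatever UI you report back to:

```csharp
// Sketch only: DownloadController and UpdateProgress are assumed
// names; the override signatures come from NSUrlSessionDownloadDelegate.
public class UrlSessionDelegate : NSUrlSessionDownloadDelegate
{
  readonly DownloadController controller;

  public UrlSessionDelegate (DownloadController controller)
  {
    this.controller = controller;
  }

  public override void DidWriteData (NSUrlSession session,
    NSUrlSessionDownloadTask downloadTask, long bytesWritten,
    long totalBytesWritten, long totalBytesExpectedToWrite)
  {
    // Report incremental progress back to the UI.
    controller.UpdateProgress (
      (float)totalBytesWritten / totalBytesExpectedToWrite);
  }

  public override void DidFinishDownloading (NSUrlSession session,
    NSUrlSessionDownloadTask downloadTask, NSUrl location)
  {
    // 'location' points at a temporary file; move it somewhere
    // permanent before this method returns.
    controller.UpdateProgress (1f);
  }
}
```

Because the transfer runs out of process, these callbacks can arrive even after the app has been suspended and relaunched, which is why the completion handler registered in the AppDelegate matters.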

Conclusion

I’ve only scratched the surface of the new and revised APIs available in iOS 7. Additional APIs exist, such as:

  • Routing information in Maps
  • AirDrop for file transfers between devices
  • Manipulation and rendering of text with a completely new TextKit
  • Multi-peer connectivity
  • Additional filters for images
  • A new SpriteKit

The complete source for a sample iPhone application containing all of the samples included in this article is available at https://github.com/jawbrey/Code-ios7.

Additional resources for exploring new iOS 7 APIs with Xamarin can be found online: