Great multi-touch experience: can pass dragged items between hands, and hold
onto them while interacting with other UI elements, as if manipulating
physical objects
Drag and drop objects
UIDragItem is the model object for the data you’re dragging
UIPasteConfiguration can be used to support both paste and drop with
the same code
UIDropInteraction for more customization: can provide a delegate that
accepts/rejects drops, requests formats
Animations, etc. can be customized with the delegates
Timeline for a drag and drop interaction
Drag starts: app needs to provide the items to be dragged
Default image is a snapshot of the dragged view
Drop proposal: .cancel, .copy (should be the default), or .move
.move only works within the same app
Get another callback when the user lifts their finger to complete or fail the drop
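The accept/reject, proposal, and completion callbacks above map onto three delegate methods. A minimal sketch (the class name and image-view setup are illustrative, not from the session):

```swift
import UIKit

// Minimal sketch of a view that accepts dropped images.
class ImageDropViewController: UIViewController, UIDropInteractionDelegate {
    let imageView = UIImageView()

    override func viewDidLoad() {
        super.viewDidLoad()
        view.addInteraction(UIDropInteraction(delegate: self))
    }

    // Accept or reject the session based on the formats it carries.
    func dropInteraction(_ interaction: UIDropInteraction,
                         canHandle session: UIDropSession) -> Bool {
        return session.canLoadObjects(ofClass: UIImage.self)
    }

    // The drop proposal; .move would only work within the same app.
    func dropInteraction(_ interaction: UIDropInteraction,
                         sessionDidUpdate session: UIDropSession) -> UIDropProposal {
        return UIDropProposal(operation: .copy)
    }

    // Called when the user lifts their finger to complete the drop.
    func dropInteraction(_ interaction: UIDropInteraction,
                         performDrop session: UIDropSession) {
        session.loadObjects(ofClass: UIImage.self) { [weak self] items in
            self?.imageView.image = items.first as? UIImage
        }
    }
}
```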
Bulletin board app for dragging around photos
From Monroe to NASA (Session 106)
Dr. Christine Darden
In high school, geometry was her first exposure to math. Ended up studying
math and teaching in college out of concern that nobody would hire a Black woman.
Ended up in graduate school as a research assistant for aerosol physics.
Earned a degree in applied math and recruited by NASA.
After five years of implementing math on the computer, she pushed management
to give her a promotion. Previously, men and women of the same background
were assigned different positions: men became researchers and published
papers, while the women worked as computers.
Researched ways to reduce sonic boom for supersonic aircraft.
Went into management at Langley and retired in 2007.
iPhone 7 Plus sold much better than 6s Plus, and Brad attributes this to
the dual camera
Dual camera (virtual device) seamlessly matches exposure and compensates
for parallax when zooming
Depth and disparity
Value of depth map at each location indicates distance to that object
Stereorectified: multiple cameras that have same focal length and
parallel optical axes
Disparity: the inverse of depth. For the same object, the change in position
between the two images decreases as the object gets farther away.
AVDepthData can be either a depth or a disparity map, because both are
per-pixel floating-point maps that convert directly into each other
Holes are represented as NaN. They happen when the matcher is unable to find
features, or at points that are not visible in both images.
Calibration errors (unknown baseline) can happen from OIS, gravity
pulling on the lens, or focusing. This leads to a constant offset error in
the depth data, so we only have relative depth (values cannot be compared
across captures)
Streaming depth data
AVCaptureDepthDataOutput can give raw depth data during preview, or
smooth it between frames
Maximum resolution 320x240 at 24 fps
AVCaptureDataOutputSynchronizer helps synchronize multiple outputs of
a capture session
Can opt into receiving camera intrinsics matrix with focal length and
optical center (used by ARKit)
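Wiring this up can be sketched as follows; the session configuration (inputs, formats, delegate queue) is omitted, so this is an outline rather than a working pipeline:

```swift
import AVFoundation

// Sketch: attach a depth output alongside the video output (session
// configuration and input setup omitted).
let session = AVCaptureSession()
let videoOutput = AVCaptureVideoDataOutput()
let depthOutput = AVCaptureDepthDataOutput()
depthOutput.isFilteringEnabled = true  // temporally smooth depth between frames

session.addOutput(videoOutput)
session.addOutput(depthOutput)

// Delivers matched video + depth pairs to a single delegate callback
// (set via synchronizer.setDelegate(_:queue:)).
let synchronizer = AVCaptureDataOutputSynchronizer(
    dataOutputs: [videoOutput, depthOutput])
```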
Capturing depth data
In iOS 11, all Portrait Mode photos contain depth data embedded
Lenses have distortion because they’re not pinholes — straight lines
in the scene don’t necessarily become straight lines in the image.
Different parts of the image may be warped differently (especially
because the two cameras have different focal lengths).
Depth maps are computed as rectilinear, then distortion is added so they
correspond directly to the RGB image. This is good for photo editing,
but you need to make it rectilinear for scene reconstruction.
AVCameraCalibrationData: intrinsics, extrinsics (camera’s pose), lens
distortion center (not always the same as optical axis), radial
distortion lookup table
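Accessing those calibration fields looks roughly like this (`depthData` stands in for an AVDepthData received from a capture callback):

```swift
import AVFoundation

// Sketch: reading calibration from captured depth data.
func inspect(_ depthData: AVDepthData) {
    guard let calibration = depthData.cameraCalibrationData else { return }
    let intrinsics = calibration.intrinsicMatrix       // focal length, optical center
    let extrinsics = calibration.extrinsicMatrix       // the camera's pose
    let center = calibration.lensDistortionCenter      // not always the optical axis
    let table = calibration.lensDistortionLookupTable  // radial distortion LUT
    _ = (intrinsics, extrinsics, center, table)
}
```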
Dual photo capture
Capturing separate images from both cameras in a single request
APFS is already the default on iOS, watchOS, and tvOS; now macOS too
Dry run conversions in iOS 10.0, 10.1, and 10.2 to test robustness of
the conversion process
Many devices gained free space because LwVM (the old volume manager) is no
longer needed
COW snapshots ensure consistent state when making iCloud backups
Unicode characters can be represented in many ways; for example, ñ can be a
single precomposed character or n plus a combining tilde
Native normalization for erase-restored iOS 11 devices
Runtime normalization (in the filesystem driver?) for 10.3.3 and 11
devices that are not erase-restored
Future update to convert all devices to native normalization. Maybe
because APFS compares filenames by hash, so they need to redo all the
hashes on disk.
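The two representations of ñ can be shown in a few lines of Swift (a standalone illustration, not from the session):

```swift
import Foundation

// Two byte-wise different representations of the same user-visible string.
let precomposed = "\u{00F1}"   // "ñ" as a single code point
let decomposed = "n\u{0303}"   // "n" + combining tilde

let equal = precomposed == decomposed       // true: Swift compares canonical equivalence
let scalars1 = precomposed.unicodeScalars.count  // 1
let scalars2 = decomposed.unicodeScalars.count   // 2
// A normalization-insensitive filesystem must treat both as the same filename.
```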
APFS on macOS
System volume will be automatically converted by High Sierra release
installer (it’s optional during the beta). Other volumes can be manually
converted. Boot volume must be converted by the installer to be
bootable, so don’t do it manually.
Multiple volumes on existing drives do not use space sharing — they
are converted independently. Suggest adding APFS volumes to an existing
container and manually copying the files over.
EFI driver embedded into each APFS volume, allowing boot support even
for encrypted drives in virtual machines
FileVault conversion preserves existing recovery key and passwords.
Snapshots are encrypted even if they were taken before enabling FileVault
All Fusion Drive metadata is pinned to the SSD
Defragmentation support for spinning hard drives only (never on SSDs)
APFS and Time Machine
Since Lion, Mobile Time Machine locally caches some backups so you don’t
need the external drive all the time
Was implemented with 2 daemons, including a filesystem overlay. Lots of
complexity: O(10,000) lines of code.
In High Sierra, it’s been reimplemented on top of APFS snapshots
Make a backup in O(1) time: tmutil snapshot
List the volumes with mount. The ones beginning with
com.apple.TimeMachine are the hourly APFS snapshots being taken by Time
Machine
Unmount all snapshots to put it into a cold state:
tmutil unmountLocalSnapshots /. Then try to enter Time Machine and it
will load very fast (they are mounted lazily).
Local restores are also O(1) because we just make a COW reference to
existing blocks on disk
APFS and Finder
Fast copying using COW clones
If a bunch of clones referencing the same blocks are copied to another
APFS container, the cloning relationship is preserved
Check for available video codec types (for example,
AVCapturePhotoOutput.availablePhotoCodecTypes). If HEVC is supported, it will be the first item
in the array, so it gets used by default.
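The check can be sketched like this (the output would first need to be added to a configured, running capture session):

```swift
import AVFoundation

let output = AVCapturePhotoOutput()
// On HEVC-capable devices, .hevc is listed first in the codec array.
let codecs = output.availablePhotoCodecTypes
let preferred: AVVideoCodecType = codecs.contains(.hevc) ? .hevc : .jpeg
let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: preferred])
```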
Get used to dealing with HEVC content
AVAssetWriter has new presets
Hierarchical frame encoding
I-frames contain full information. P-frames refer to a previous frame,
and B-frames can refer to both previous and future frames.
When dropping frames, a frame can only be dropped if no other frames depend on it
To be able to drop many frames, the P-frames would have to depend on frames
far in the past, which makes compression suffer
Hierarchical encoding in HEVC makes the referenced frames closer
Opt into compatible high frame rate content in Video Toolbox: set the
base layer frame rate (30 fps) and expected frame rate (the actual
content frame rate, such as 120 fps).
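Setting those two rates could look roughly like this; the session creation is omitted, and I'm assuming the Video Toolbox property keys below are the ones the talk refers to:

```swift
import VideoToolbox

// Sketch: opting into hierarchical encoding for 120 fps content.
// `session` would come from VTCompressionSessionCreate (setup omitted).
func configureHighFrameRate(_ session: VTCompressionSession) {
    // Base layer that any player can keep while dropping enhancement frames
    VTSessionSetProperty(session,
                         key: kVTCompressionPropertyKey_BaseLayerFrameRate,
                         value: 30 as CFNumber)
    // The actual frame rate of the content
    VTSessionSetProperty(session,
                         key: kVTCompressionPropertyKey_ExpectedFrameRate,
                         value: 120 as CFNumber)
}
```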
What is High Efficiency Image File Format (HEIF)?
Proposed 2013, ratified in summer 2015 (only 1.5 years later!)
Uses HEVC intra-encoding, so average 2x smaller images than JPEG
Supports auxiliary images, such as alpha or depth maps
Store image sequences, such as bursts or focus stacks
EXIF and XMP for metadata
Arbitrarily large files with efficiently loading tiles, allowing loading
gigapixel panos while using O(100 MB) of memory
Low level access to HEIF
CGImageSource supports HEIF images, and infers type from file
Tiling support: get metadata from CGImageSourceCopyPropertiesAtIndex,
then specify the area with CGImage.cropping(). CG will only decode the
requested tiles
CGImageDestinationCreate() will infer format from extension, though it
returns nil if the device does not have a hardware encoder
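The tiled-read path can be sketched as follows (the file path is hypothetical):

```swift
import ImageIO
import CoreGraphics

// Hypothetical path to a HEIF file.
let url = URL(fileURLWithPath: "photo.heic") as CFURL
guard let source = CGImageSourceCreateWithURL(url, nil) else {
    fatalError("could not open image")
}

// Dimensions come from the metadata, without decoding any pixels.
let props = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString: Any]
let width = props?[kCGImagePropertyPixelWidth] as? Int ?? 0
let height = props?[kCGImagePropertyPixelHeight] as? Int ?? 0

// Decode only one tile of a potentially huge image.
if let image = CGImageSourceCreateImageAtIndex(source, 0, nil) {
    let tile = image.cropping(to: CGRect(x: 0, y: 0, width: 512, height: 512))
    _ = tile
}
```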
High level access to HEIF (PhotoKit)
Required to render output as JPEG or H.264, regardless of input format
Don’t need anything for Live Photo editing, since editing code doesn’t
render stuff directly
Must use AVCapturePhotoOutput to get HEIF
It used to deliver a CMSampleBuffer, but unlike JPEG/JFIF, HEIF is a
container format that may hold many codecs
Introducing AVCapturePhoto, which encapsulates the photo, preview,
auxiliary data, settings, and more
Every captured image is now automatically put into a container (HEIF,
JFIF, TIFF, or DNG)
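Receiving an AVCapturePhoto looks roughly like this (the delegate class is illustrative):

```swift
import AVFoundation

// Sketch of the new capture flow delivering an AVCapturePhoto.
final class PhotoDelegate: NSObject, AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        guard error == nil else { return }
        // The AVCapturePhoto wraps the container (HEIF, JFIF, TIFF, or DNG);
        // fileDataRepresentation() flattens it into writable data.
        if let data = photo.fileDataRepresentation() {
            // Also available: photo.depthData, photo.metadata,
            // photo.resolvedSettings, and a preview image.
            _ = data
        }
    }
}
```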
Photos captured during movies will contend for the same HEVC encoder block.
Video is given priority, so photos will take longer and be less
compressed. Recommendation: JPEG for photos captured during video.
HEVC encode takes longer than JPEG, but it’s still fast enough for 10
fps burst. However, faster bursts should use JPEG.