Great multi-touch experience: can pass dragged items between hands, and hold
onto them while interacting with other UI elements, as if manipulating
physical objects
Drag and drop objects
UIDragItem is the model object for the data you’re dragging
UIPasteConfiguration can be used to support both paste and drop with
the same code
UIDropInteraction for more customization: can provide a delegate that
accepts/rejects drops, requests formats
Animations, etc. can be customized via the delegates
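A minimal sketch of a drop target wired up with UIDropInteraction; the
imageView property and the UIImage payload are assumptions for illustration:

    import UIKit

    class PhotoDropViewController: UIViewController, UIDropInteractionDelegate {
        @IBOutlet var imageView: UIImageView!   // hypothetical drop target

        override func viewDidLoad() {
            super.viewDidLoad()
            view.addInteraction(UIDropInteraction(delegate: self))
        }

        // Accept/reject: only handle sessions that contain images
        func dropInteraction(_ interaction: UIDropInteraction,
                             canHandle session: UIDropSession) -> Bool {
            return session.canLoadObjects(ofClass: UIImage.self)
        }

        // Drop proposal: .copy is the sensible default
        func dropInteraction(_ interaction: UIDropInteraction,
                             sessionDidUpdate session: UIDropSession) -> UIDropProposal {
            return UIDropProposal(operation: .copy)
        }

        // Called when the user lifts their finger over the target
        func dropInteraction(_ interaction: UIDropInteraction,
                             performDrop session: UIDropSession) {
            session.loadObjects(ofClass: UIImage.self) { items in
                self.imageView.image = items.first as? UIImage
            }
        }
    }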
Timeline for a drag and drop interaction
Drag starts: app needs to provide the items to be dragged
Default image is a snapshot of the dragged view
Drop proposal: .cancel, .copy (should be the default), .move, or .forbidden
.move only works within the same app
Get another callback when the user lifts finger to complete or fail the drop
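The drag side of this timeline, as a sketch; photoView and its image are
assumed for illustration:

    import UIKit

    class PhotoDragViewController: UIViewController, UIDragInteractionDelegate {
        @IBOutlet var photoView: UIImageView!   // hypothetical drag source

        override func viewDidLoad() {
            super.viewDidLoad()
            photoView.isUserInteractionEnabled = true
            photoView.addInteraction(UIDragInteraction(delegate: self))
        }

        // Drag starts: provide the items to be dragged. Returning []
        // refuses the drag; the default preview is a snapshot of the view.
        func dragInteraction(_ interaction: UIDragInteraction,
                             itemsForBeginning session: UIDragSession) -> [UIDragItem] {
            guard let image = photoView.image else { return [] }
            return [UIDragItem(itemProvider: NSItemProvider(object: image))]
        }

        // Callback when the user lifts their finger and the drop
        // completes or fails
        func dragInteraction(_ interaction: UIDragInteraction,
                             session: UIDragSession,
                             didEndWith operation: UIDropOperation) {
            if operation == .cancel { /* e.g. roll back any UI state */ }
        }
    }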
Bulletin board app for dragging around photos
From Monroe to NASA (Session 106)
Dr. Christine Darden
In high school, geometry was her first exposure to math. Ended up studying
math and teaching in college out of concern that nobody would hire a black
woman as a mathematician.
Ended up in graduate school as a research assistant for aerosol physics.
Earned a degree in applied math and recruited by NASA.
After five years of implementing math on the computer, she pushed management
to give her a promotion. Previously, men and women of the same background
were assigned different positions: men became researchers and published
papers, while the women worked as computers.
Researched ways to reduce sonic boom for supersonic aircraft.
Went into management at Langley and retired in 2007.
Capturing depth in iPhone photography
iPhone 7 Plus sold much better than the 6s Plus, and Brad attributes this
to the dual camera
Dual camera (virtual device) seamlessly matches exposure and compensates
for parallax when zooming
Depth and disparity
Value of depth map at each location indicates distance to that object
Stereo-rectified: multiple cameras that have the same focal length and
parallel optical axes
Disparity: the inverse of depth. For the same object, the change in
position between the two images goes down as the object gets farther away.
AVDepthData can be either a depth or a disparity map, because the two are
simply inverses of each other and easy to convert between
Holes are represented as NaN. They happen when no features can be matched,
or at points that are not visible in both images.
Calibration errors (unknown baseline) can happen from OIS, gravity
pulling on the lens, or focusing. This leads to a constant offset error in
the depth data, so we only have relative depth (cannot compare values
between captures).
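A sketch of reading one disparity value and deriving depth from it; the
function name and pixel addressing are mine, and hole handling follows the
NaN convention above:

    import AVFoundation

    // Returns depth in meters at (x, y), or nil for a hole pixel.
    func depth(atX x: Int, y: Int, in depthData: AVDepthData) -> Float? {
        // AVDepthData may hold depth or disparity; normalize the format
        let disparityData = depthData.converting(
            toDepthDataType: kCVPixelFormatType_DisparityFloat32)
        let map = disparityData.depthDataMap

        CVPixelBufferLockBaseAddress(map, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(map, .readOnly) }

        let rowBytes = CVPixelBufferGetBytesPerRow(map)
        let row = CVPixelBufferGetBaseAddress(map)!
            .advanced(by: y * rowBytes)
            .assumingMemoryBound(to: Float32.self)
        let disparity = row[x]

        // NaN marks a hole (no feature match between the two images)
        guard !disparity.isNaN, disparity > 0 else { return nil }
        return 1.0 / disparity   // disparity is the inverse of depth
    }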
Streaming depth data
AVCaptureDepthDataOutput can give raw depth data during preview, or
smooth it between frames
Maximum resolution 320x240 at 24 fps
AVCaptureDataOutputSynchronizer helps synchronize multiple outputs of
a capture session
Can opt into receiving camera intrinsics matrix with focal length and
optical center (used by ARKit)
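A sketch of that streaming pipeline; device/format selection, error
handling, and authorization are omitted, and the queue label is arbitrary:

    import AVFoundation

    class DepthStreamer: NSObject, AVCaptureDataOutputSynchronizerDelegate {
        let session = AVCaptureSession()
        let videoOutput = AVCaptureVideoDataOutput()
        let depthOutput = AVCaptureDepthDataOutput()
        var synchronizer: AVCaptureDataOutputSynchronizer!

        func configure() throws {
            guard let camera = AVCaptureDevice.default(
                .builtInDualCamera, for: .video, position: .back) else { return }
            session.beginConfiguration()
            session.addInput(try AVCaptureDeviceInput(device: camera))
            session.addOutput(videoOutput)
            session.addOutput(depthOutput)
            depthOutput.isFilteringEnabled = true   // smooth depth between frames

            // Opt into the camera intrinsics matrix where supported
            if let connection = videoOutput.connection(with: .video),
               connection.isCameraIntrinsicMatrixDeliverySupported {
                connection.isCameraIntrinsicMatrixDeliveryEnabled = true
            }

            // Delivers matched pairs of video frames and depth maps
            synchronizer = AVCaptureDataOutputSynchronizer(
                dataOutputs: [videoOutput, depthOutput])
            synchronizer.setDelegate(self, queue: DispatchQueue(label: "sync"))
            session.commitConfiguration()
            session.startRunning()
        }

        func dataOutputSynchronizer(
            _ synchronizer: AVCaptureDataOutputSynchronizer,
            didOutput collection: AVCaptureSynchronizedDataCollection) {
            if let depth = collection.synchronizedData(for: depthOutput)
                               as? AVCaptureSynchronizedDepthData,
               !depth.depthDataWasDropped {
                // depth.depthData is at most 320x240, at up to 24 fps
            }
        }
    }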
Capturing depth data
In iOS 11, all Portrait Mode photos contain depth data embedded
Lenses have distortion because they're not pinholes: straight lines
in the scene don't necessarily become straight lines in the image.
Features in different parts of the image may be warped differently
(especially because the two cameras have different focal lengths).
Depth maps are computed as rectilinear, then distortion is added so they
correspond directly to the RGB image. This is good for photo editing,
but you need to make it rectilinear for scene reconstruction.
AVCameraCalibrationData: intrinsics, extrinsics (camera’s pose), lens
distortion center (not always the same as optical axis), radial
distortion lookup table
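A sketch of requesting depth with a photo capture and reading the
calibration data off the result; assumes photoOutput is already attached
to a configured dual-camera session:

    import AVFoundation

    class DepthPhotoCapture: NSObject, AVCapturePhotoCaptureDelegate {
        let photoOutput = AVCapturePhotoOutput()   // assumed added to a session

        func capture() {
            // Must be enabled on the output before requesting it per photo
            photoOutput.isDepthDataDeliveryEnabled =
                photoOutput.isDepthDataDeliverySupported
            let settings = AVCapturePhotoSettings()
            settings.isDepthDataDeliveryEnabled = true
            photoOutput.capturePhoto(with: settings, delegate: self)
        }

        func photoOutput(_ output: AVCapturePhotoOutput,
                         didFinishProcessingPhoto photo: AVCapturePhoto,
                         error: Error?) {
            guard let depth = photo.depthData,
                  let calibration = depth.cameraCalibrationData else { return }
            _ = calibration.intrinsicMatrix           // focal length, optical center
            _ = calibration.extrinsicMatrix           // camera pose
            _ = calibration.lensDistortionCenter      // not always the optical axis
            _ = calibration.lensDistortionLookupTable // radial distortion LUT
        }
    }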
Dual photo capture
Capturing separate images from both cameras in a single request
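A sketch of opting in, inside DepthPhotoCapture above; the property names
are from the iOS 11 API:

    // Enable on the output (dual-camera devices only), then per request:
    photoOutput.isDualCameraDualPhotoDeliveryEnabled =
        photoOutput.isDualCameraDualPhotoDeliverySupported
    let settings = AVCapturePhotoSettings()
    settings.isDualCameraDualPhotoDeliveryEnabled = true
    photoOutput.capturePhoto(with: settings, delegate: self)
    // didFinishProcessingPhoto then fires once per camera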
APFS on iOS
APFS is already the default on iOS, watchOS, and tvOS; now macOS too
Dry run conversions in iOS 10.0, 10.1, and 10.2 to test robustness of
the conversion process
Many devices gained free space because LwVM (the old volume manager) is no
longer needed
COW snapshots ensure consistent state when making iCloud backups
Unicode characters can be represented in many ways, such as ñ = n +
combining ~ (see the snippet after this list)
Native normalization for erase-restored iOS 11 devices
Runtime normalization (in the filesystem driver?) for 10.3.3 and 11
devices that are not erase-restored
Future update to convert all devices to native normalization. Maybe
because APFS compares filenames by hash, so they need to redo all the
hashes on disk.
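A quick illustration of the two spellings of ñ; this is plain Swift, not
APFS-specific:

    // Precomposed U+00F1 vs. decomposed n + U+0303 (combining tilde)
    let precomposed = "\u{00F1}"
    let decomposed = "n\u{0303}"
    precomposed == decomposed                          // true: Swift compares canonically
    Array(precomposed.utf8) == Array(decomposed.utf8)  // false: different bytes on disk
    // A normalization-insensitive filesystem must treat both spellings
    // as the same filename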
APFS on macOS
System volume will be automatically converted by High Sierra release
installer (it’s optional during the beta). Other volumes can be manually
converted. Boot volume must be converted by the installer to be
bootable, so don’t do it manually.
Multiple volumes on existing drives do not use space sharing — they
are converted independently. Suggest adding APFS volumes to an existing
container and manually copying the files over.
EFI driver embedded into each APFS volume, allowing boot support even
for encrypted drives in virtual machines
FileVault conversion preserves existing recovery key and passwords.
Snapshots are encrypted even if they were taken before enabling FileVault
All Fusion Drive metadata is pinned to the SSD
Defragmentation support for spinning hard drives only (never on SSDs)
APFS and Time Machine
Since Lion, Mobile Time Machine locally caches some backups so you don’t
need the external drive all the time
Was implemented with 2 daemons, including a filesystem overlay. Lots of
complexity: O(10,000) lines of code.
In High Sierra, it’s been reimplemented on top of APFS snapshots
Make a backup in O(1) time: tmutil snapshot
List the volumes with mount. The ones beginning with
com.apple.TimeMachine are the hourly APFS snapshots being taken by Time
Machine
Unmount all snapshots to put it into a cold state:
tmutil unmountLocalSnapshots /. Then try to enter Time Machine and it
will load very fast (they are mounted lazily).
Local restores are also O(1) because we just make a COW reference to
existing blocks on disk
APFS and Finder
Fast copying using COW clones
If a bunch of clones referencing the same blocks are copied to another
APFS container, the cloning relationship is preserved
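A sketch of making such a clone from code via copyfile(3); the flag value
is from <copyfile.h> and is redefined locally in case the C macro isn't
imported into Swift, and the paths are hypothetical:

    import Darwin

    // COPYFILE_CLONE asks APFS for a COW clone: an O(1) "copy" that
    // shares blocks until either side is modified
    let cloneFlag = copyfile_flags_t(1 << 24)   // COPYFILE_CLONE

    if copyfile("/tmp/original.bin", "/tmp/clone.bin", nil, cloneFlag) != 0 {
        perror("copyfile")   // e.g. fails on non-APFS volumes
    }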