
Commit bfa2787

Add iOS views to guides
1 parent 9882955 commit bfa2787

5 files changed

Lines changed: 596 additions & 0 deletions

client/concepts/events-and-callbacks.mdx

Lines changed: 88 additions & 0 deletions
@@ -18,6 +18,11 @@ You are currently viewing the JavaScript version of this page. Use the dropdown
You are currently viewing the React Native version of this page. Use the dropdown to the right to customize this page for your client framework.
</Callout>
</View>
<View title="iOS" icon="apple">
<Callout icon="apple" color="#FFC107">
You are currently viewing the iOS version of this page. Use the dropdown to the right to customize this page for your client framework.
</Callout>
</View>

The Pipecat client emits events throughout the session lifecycle — when the bot connects, when the user speaks, when a transcript arrives, and more.

@@ -97,6 +102,36 @@ Always return a cleanup function from `useEffect` to remove the listener when th

</View>

<View title="iOS" icon="apple">

Conform your model to `PipecatClientDelegate` and assign it as the delegate after creating the client. All delegate methods are optional — implement only what you need:

```swift
// Create the client and assign your model as the delegate
// (for example, inside MyModel's setup code).
let client = PipecatClient(options: PipecatClientOptions(
    transport: SmallWebRTCTransport(),
    enableMic: true
))
client.delegate = self

extension MyModel: PipecatClientDelegate {
    func onBotReady(botReadyData: BotReadyData) {
        Task { @MainActor in
            print("Bot is ready")
        }
    }

    func onUserTranscript(data: Transcript) {
        Task { @MainActor in
            if data.final ?? false { setTranscript(data.text) }
        }
    }
}
```

Delegate callbacks arrive on a background thread — always use `Task { @MainActor in }` before updating `@Published` properties or any UI state.

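For instance, a minimal sketch of an observable model that follows this rule (the `MyModel` shape here is illustrative, not part of the SDK):

```swift
// Illustrative model; only PipecatClientDelegate and Transcript come from the SDK.
final class MyModel: ObservableObject, PipecatClientDelegate {
    @Published var transcript = ""

    func onUserTranscript(data: Transcript) {
        Task { @MainActor in   // hop to the main actor before touching @Published state
            if data.final ?? false { self.transcript = data.text }
        }
    }
}
```
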
</View>

---

## Event reference
@@ -203,6 +238,22 @@ useEffect(() => {

</View>

<View title="iOS" icon="apple">

```swift
func onUserTranscript(data: Transcript) {
    Task { @MainActor in
        if data.final ?? false {
            addMessage(data.text)       // committed
        } else {
            updatePartial(data.text)    // still in progress
        }
    }
}
```

</View>

`BotOutput` is the recommended way to display the bot's response text. It provides the best possible representation of what the bot is saying — supporting interruptions and unspoken responses. By default, Pipecat aggregates output by sentences and words (assuming your TTS supports streaming), but custom aggregation strategies are also supported, such as breaking out code snippets or other structured content:

<View title="React" icon="react">
@@ -248,6 +299,20 @@ useEffect(() => {

</View>

<View title="iOS" icon="apple">

The iOS SDK exposes `onBotTranscript` for the bot's LLM text output:

```swift
func onBotTranscript(data: BotLLMText) {
    Task { @MainActor in
        appendSentence(data.text)
    }
}
```

</View>

### Errors

| Event | Callback | When it fires |
@@ -306,6 +371,19 @@ useEffect(() => {

</View>

<View title="iOS" icon="apple">

```swift
func onError(message: RTVIMessageInbound) {
    Task { @MainActor in
        // message.data contains the error description string
        showError(message.data ?? "Unknown error")
    }
}
```

</View>

### Devices and tracks

| Event | Callback | When it fires |
@@ -375,3 +453,13 @@ For custom server\<-\>client messaging, see [Custom Messaging](/client/guides/cu
</CardGroup>

</View>

<View title="iOS" icon="apple">

<CardGroup cols={1}>
<Card title="iOS SDK Reference" icon="apple" href="/api-reference/client/ios/overview">
Full `PipecatClientDelegate` protocol and API reference
</Card>
</CardGroup>

</View>

client/concepts/media-management.mdx

Lines changed: 156 additions & 0 deletions
@@ -18,6 +18,11 @@ You are currently viewing the JavaScript version of this page. Use the dropdown
You are currently viewing the React Native version of this page. Use the dropdown to the right to customize this page for your client framework.
</Callout>
</View>
<View title="iOS" icon="apple">
<Callout icon="apple" color="#FFC107">
You are currently viewing the iOS version of this page. Use the dropdown to the right to customize this page for your client framework.
</Callout>
</View>

The Pipecat client handles media at two levels: **local devices** (the user's mic, camera, and speakers) and **media tracks** (the live audio/video streams flowing between client and bot). This page covers how to work with both.

@@ -79,6 +84,12 @@ const client = new PipecatClient({

</View>

<View title="iOS" icon="apple">

Audio output is handled automatically by the SDK — no additional setup required. The bot's audio plays through the device speaker as soon as the session connects.

</View>

---

## Microphone
@@ -152,6 +163,29 @@ function MicButton() {

</View>

<View title="iOS" icon="apple">

Call `enableMic(enable:completion:)` to mute and unmute. The completion handler fires on success or failure:

```swift
func toggleMic() {
    client.enableMic(enable: !isMicEnabled) { [weak self] result in
        switch result {
        case .success:
            Task { @MainActor in
                self?.isMicEnabled = self?.client.isMicEnabled ?? false
            }
        case .failure(let error):
            print("Mic toggle error: \(error)")
        }
    }
}
```
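As a usage sketch, a hypothetical SwiftUI control that drives this toggle (assuming `toggleMic()` and `isMicEnabled` live on an observable `model`; neither name comes from the SDK):

```swift
// Illustrative SwiftUI button bound to the model's mic state.
Button(action: { model.toggleMic() }) {
    Image(systemName: model.isMicEnabled ? "mic.fill" : "mic.slash.fill")
}
```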

Check current state with `client.isMicEnabled`.

</View>

### Switching microphones

<View title="React" icon="react">
@@ -206,6 +240,27 @@ client.updateMic(mics[1].deviceId);

</View>

<View title="iOS" icon="apple">

Enumerate available microphones with `getAllMics()` and switch with `updateMic(micId:completion:)`:

```swift
let mics = client.getAllMics()  // [MediaDeviceInfo]
client.updateMic(micId: mics[1].id, completion: nil)
```

Listen for device changes via the delegate:

```swift
func onAvailableMicsUpdated(mics: [MediaDeviceInfo]) {
    Task { @MainActor in
        self.availableMics = mics
    }
}
```

</View>

---

## Camera
@@ -281,6 +336,39 @@ Switch cameras with [`getAllCams()`](/api-reference/client/js/client-methods#get

</View>

<View title="iOS" icon="apple">

Enable the camera via `PipecatClientOptions` and toggle it with `enableCam(enable:completion:)`:

```swift
let options = PipecatClientOptions(
    transport: SmallWebRTCTransport(),
    enableMic: true,
    enableCam: true
)

client.enableCam(enable: true) { result in /* ... */ }
let isOn = client.isCamEnabled
```

Receive track events to get the `MediaStreamTrack` for rendering:

```swift
func onTrackStarted(track: MediaStreamTrack, participant: Participant?) {
    Task { @MainActor in
        if track.kind == .video {
            if participant?.local ?? true {
                self.localCamTrackId = track.id
            } else {
                self.botCamTrackId = track.id
            }
        }
    }
}
```

</View>

---

## Speakers
@@ -312,6 +400,16 @@ Audio routing (speaker vs. earpiece) is managed by the platform and `DailyMediaM

</View>

<View title="iOS" icon="apple">

Audio routing (speaker vs. earpiece) is managed by `AVAudioSession`. Use `AVAudioSession.sharedInstance()` to configure the output route if needed:

```swift
try AVAudioSession.sharedInstance().overrideOutputAudioPort(.speaker)
```
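If your app owns the session configuration rather than leaving it to the transport, a fuller setup might look like the following sketch; the category, mode, and options shown are illustrative assumptions, not SDK requirements:

```swift
import AVFoundation

// Illustrative: voice-chat mode, default to the speaker, allow Bluetooth routes.
let session = AVAudioSession.sharedInstance()
try session.setCategory(.playAndRecord, mode: .voiceChat, options: [.defaultToSpeaker, .allowBluetooth])
try session.setActive(true)
```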

</View>

---

## Device initialization before connecting
@@ -391,6 +489,21 @@ useEffect(() => {

</View>

<View title="iOS" icon="apple">

Receive `onTrackStarted` in your delegate to access tracks as they become available:

```swift
func onTrackStarted(track: MediaStreamTrack, participant: Participant?) {
    Task { @MainActor in
        // track.kind is .audio or .video
        // participant?.local is true for the local participant
    }
}
```

</View>

---

## Audio visualization
@@ -460,6 +573,39 @@ function AudioViz() {

</View>

<View title="iOS" icon="apple">

Receive audio level callbacks via the delegate and drive your own SwiftUI visualization:

```swift
@Published var localAudioLevel: Float = 0
@Published var remoteAudioLevel: Float = 0

func onLocalAudioLevel(level: Float) {
    Task { @MainActor in
        self.localAudioLevel = level
    }
}

func onRemoteAudioLevel(level: Float, participant: Participant) {
    Task { @MainActor in
        self.remoteAudioLevel = level
    }
}
```

```swift
// In your SwiftUI view
GeometryReader { geo in
    Rectangle()
        .fill(Color.blue)
        .frame(width: geo.size.width * CGFloat(model.localAudioLevel))
}
.frame(height: 8)
```

</View>

---

## API reference
@@ -499,3 +645,13 @@ function AudioViz() {
</CardGroup>

</View>

<View title="iOS" icon="apple">

<CardGroup cols={1}>
<Card title="iOS SDK Reference" icon="apple" href="/api-reference/client/ios/overview">
Full API reference including `PipecatClientDelegate` and device management
</Card>
</CardGroup>

</View>
