client/concepts/events-and-callbacks.mdx (88 additions, 0 deletions)
@@ -18,6 +18,11 @@ You are currently viewing the JavaScript version of this page. Use the dropdown
You are currently viewing the React Native version of this page. Use the dropdown to the right to customize this page for your client framework.
</Callout>
</View>
+<View title="iOS" icon="apple">
+<Callout icon="apple" color="#FFC107">
+You are currently viewing the iOS version of this page. Use the dropdown to the right to customize this page for your client framework.
+</Callout>
+</View>

The Pipecat client emits events throughout the session lifecycle — when the bot connects, when the user speaks, when a transcript arrives, and more.
@@ -97,6 +102,36 @@ Always return a cleanup function from `useEffect` to remove the listener when th
</View>

+<View title="iOS" icon="apple">
+
+Conform your model to `PipecatClientDelegate` and assign it as the delegate after creating the client. All delegate methods are optional — implement only what you need:
+
+```swift
+let client = PipecatClient(options: PipecatClientOptions(
+    transport: SmallWebRTCTransport(),
+    enableMic: true
+))
+client.delegate = self
+
+extension MyModel: PipecatClientDelegate {
+    func onBotReady(botReadyData: BotReadyData) {
+        Task { @MainActor in
+            print("Bot is ready")
+        }
+    }
+
+    func onUserTranscript(data: Transcript) {
+        Task { @MainActor in
+            if data.final ?? false { setTranscript(data.text) }
+        }
+    }
+}
+```
+
+Delegate callbacks arrive on a background thread — always use `Task { @MainActor in }` before updating `@Published` properties or any UI state.
+
+</View>
+
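The "all delegate methods are optional" behavior deserves a note for readers new to Swift delegates. A minimal, SDK-independent sketch of the usual pattern (the names `ExampleClientDelegate`, `onBotReady`, and `onUserTranscript` are illustrative, not Pipecat's actual declarations): a protocol extension supplies no-op defaults, so a conformer overrides only the callbacks it cares about.

```swift
// Illustrative sketch, not the SDK's actual declarations: optional
// delegate methods are typically achieved with a protocol extension
// that provides no-op default implementations.
protocol ExampleClientDelegate: AnyObject {
    func onBotReady()
    func onUserTranscript(_ text: String)
}

extension ExampleClientDelegate {
    func onBotReady() {}                      // default no-op
    func onUserTranscript(_ text: String) {}  // default no-op
}

final class MyModel: ExampleClientDelegate {
    var lastTranscript = ""
    // Only this method is implemented; onBotReady falls back to the no-op.
    func onUserTranscript(_ text: String) { lastTranscript = text }
}
```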

---

## Event reference
@@ -203,6 +238,22 @@ useEffect(() => {
</View>

+<View title="iOS" icon="apple">
+
+```swift
+func onUserTranscript(data: Transcript) {
+    Task { @MainActor in
+        if data.final ?? false {
+            addMessage(data.text) // committed
+        } else {
+            updatePartial(data.text) // still in progress
+        }
+    }
+}
+```
+
+</View>
+
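The partial/final bookkeeping this callback implies can be sketched without the SDK. `TranscriptStore` below is a hypothetical helper, not part of the Pipecat API: partials keep overwriting a single in-progress entry, and a final result commits it.

```swift
// Hypothetical helper, not part of the Pipecat API: partial transcripts
// overwrite one in-progress entry; a final transcript commits it.
struct TranscriptStore {
    private(set) var committed: [String] = []
    private(set) var partial: String?

    mutating func apply(text: String, isFinal: Bool) {
        if isFinal {
            committed.append(text)  // committed: safe to render permanently
            partial = nil
        } else {
            partial = text          // still in progress: keep replacing
        }
    }
}
```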
`BotOutput` is the recommended way to display the bot's response text. It provides the best possible representation of what the bot is saying — supporting interruptions and unspoken responses. By default, Pipecat aggregates output by sentences and words (assuming your TTS supports streaming), but custom aggregation strategies are supported too, like breaking out code snippets or other structured content:

<View title="React" icon="react">
@@ -248,6 +299,20 @@ useEffect(() => {
</View>

+<View title="iOS" icon="apple">
+
+The iOS SDK exposes `onBotTranscript` for the bot's LLM text output:
+
+```swift
+func onBotTranscript(data: BotLLMText) {
+    Task { @MainActor in
+        appendSentence(data.text)
+    }
+}
+```
+
+</View>
+
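The sentence-level aggregation described above can be approximated with a small, SDK-independent buffer. This is illustrative only — Pipecat performs the real aggregation server-side; the sketch just shows the idea of emitting a sentence when terminal punctuation arrives.

```swift
import Foundation

// Illustrative only: buffer streamed text chunks and emit a complete
// sentence whenever terminal punctuation (. ! ?) arrives.
struct SentenceAggregator {
    private var buffer = ""

    mutating func feed(_ chunk: String) -> [String] {
        buffer += chunk
        var sentences: [String] = []
        while let end = buffer.firstIndex(where: { ".!?".contains($0) }) {
            let cut = buffer.index(after: end)
            sentences.append(String(buffer[..<cut]).trimmingCharacters(in: .whitespaces))
            buffer = String(buffer[cut...])
        }
        return sentences
    }
}
```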

### Errors

| Event | Callback | When it fires |
@@ -306,6 +371,19 @@ useEffect(() => {
</View>

+<View title="iOS" icon="apple">
+
+```swift
+func onError(message: RTVIMessageInbound) {
+    Task { @MainActor in
+        // message.data contains the error description string
+        showError(message.data ?? "Unknown error")
+    }
+}
+```
+
+</View>
+

### Devices and tracks

| Event | Callback | When it fires |
@@ -375,3 +453,13 @@ For custom server\<-\>client messaging, see [Custom Messaging](/client/guides/cu
client/concepts/media-management.mdx (156 additions, 0 deletions)
@@ -18,6 +18,11 @@ You are currently viewing the JavaScript version of this page. Use the dropdown
You are currently viewing the React Native version of this page. Use the dropdown to the right to customize this page for your client framework.
</Callout>
</View>
+<View title="iOS" icon="apple">
+<Callout icon="apple" color="#FFC107">
+You are currently viewing the iOS version of this page. Use the dropdown to the right to customize this page for your client framework.
+</Callout>
+</View>

The Pipecat client handles media at two levels: **local devices** (the user's mic, camera, and speakers) and **media tracks** (the live audio/video streams flowing between client and bot). This page covers how to work with both.
@@ -79,6 +84,12 @@ const client = new PipecatClient({
</View>

+<View title="iOS" icon="apple">
+
+Audio output is handled automatically by the SDK — no additional setup required. The bot's audio plays through the device speaker as soon as the session connects.
+
+</View>
+

---

## Microphone
@@ -152,6 +163,29 @@ function MicButton() {
</View>

+<View title="iOS" icon="apple">
+
+Call `enableMic(enable:completion:)` to mute and unmute. The completion handler fires on success or failure:
+
+```swift
+func toggleMic() {
+    client.enableMic(enable: !isMicEnabled) { [weak self] result in
+        // Completion may arrive off the main thread; hop before touching UI state.
+        Task { @MainActor in
+            if case .success = result {
+                self?.isMicEnabled.toggle()
+            }
+        }
+    }
+}
+```
+
+</View>
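The completion-driven pattern above can be exercised without the SDK. In this sketch, `MicController` and its synchronous `enableMic` stand-in are hypothetical (the real `client.enableMic(enable:completion:)` completes asynchronously); it shows why the local flag is committed only once the handler reports success, so UI state cannot drift from the actual mic state.

```swift
// Hypothetical, SDK-free stand-in for client.enableMic(enable:completion:).
final class MicController {
    private(set) var isMicEnabled = false

    // Simulated SDK call; the real one completes asynchronously.
    private func enableMic(_ enable: Bool, completion: (Result<Bool, Error>) -> Void) {
        completion(.success(enable))
    }

    func toggleMic() {
        enableMic(!isMicEnabled) { result in
            if case .success(let enabled) = result {
                self.isMicEnabled = enabled  // commit only on success
            }
        }
    }
}
```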