`BotOutput` is the recommended way to display the bot's response text. It provides the best possible representation of what the bot is saying, supporting interruptions and unspoken responses. By default, Pipecat aggregates output by sentences and words (assuming your TTS supports streaming), but custom aggregation strategies are also supported, such as breaking out code snippets or other structured content:
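To make the default sentence-level aggregation concrete, here is a minimal sketch of the idea: buffer streamed text chunks and emit only complete sentences. This is illustrative only, not Pipecat's internal implementation, and `flushSentences` is a hypothetical helper name:

```tsx
// Sketch of sentence-level aggregation: collect streamed text and split it
// into complete sentences, keeping any trailing incomplete sentence as the
// remainder. Illustrative only; not Pipecat's actual implementation.
function flushSentences(buffer: string): { complete: string[]; rest: string } {
  const complete: string[] = [];
  // One sentence: any run of non-terminators, then terminators, then whitespace.
  const re = /[^.!?]*[.!?]+\s*/g;
  let consumed = 0;
  let match: RegExpExecArray | null;
  while ((match = re.exec(buffer)) !== null) {
    complete.push(match[0].trim());
    consumed = re.lastIndex;
  }
  // Anything after the last terminator is still incomplete: keep it buffered.
  return { complete, rest: buffer.slice(consumed) };
}
```

A custom strategy would apply the same pattern with different boundaries, for example treating a fenced code block as a single unit instead of splitting it on periods.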
You are currently viewing the JavaScript version of this page. Use the dropdown to the right to customize this page for your client framework.
</Callout>
</View>
<View title="React Native" icon="mobile">
<Callout icon="mobile" color="#FFC107">
You are currently viewing the React Native version of this page. Use the dropdown to the right to customize this page for your client framework.
</Callout>
</View>
The Pipecat client handles media at two levels: **local devices** (the user's mic, camera, and speakers) and **media tracks** (the live audio/video streams flowing between client and bot). This page covers how to work with both.
Audio output is handled automatically by the platform via `DailyMediaManager` — no additional setup required. The bot's audio plays through the device speaker as soon as the session connects.
Enumerate available devices with [`getAllMics()`](/api-reference/client/js/client-methods#getallmics), then switch by `deviceId` using [`updateMic()`](/api-reference/client/js/client-methods#updatemic):
```tsx
const mics = await client.getAllMics();
client.updateMic(mics[1].deviceId);
```
</View>
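In practice you often want a specific device by label rather than by position in the list. A small sketch of that selection logic, assuming devices expose `deviceId` and `label` fields as `MediaDeviceInfo` objects do (the `pickDevice` helper is illustrative, not part of the SDK):

```tsx
// Pick the first device whose label matches a pattern, falling back to the
// first available device. Works with MediaDeviceInfo-shaped objects like
// those returned by getAllMics(). Illustrative helper, not part of Pipecat.
interface DeviceInfo {
  deviceId: string;
  label: string;
}

function pickDevice(devices: DeviceInfo[], pattern: RegExp): DeviceInfo | undefined {
  return devices.find((d) => pattern.test(d.label)) ?? devices[0];
}

// Hypothetical usage with the client methods shown above:
//   const mic = pickDevice(await client.getAllMics(), /headset/i);
//   if (mic) client.updateMic(mic.deviceId);
```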
---
## Camera
</View>
<View title="React Native" icon="mobile">
Toggle the camera with [`enableCam()`](/api-reference/client/js/client-methods#enablecam-enable-boolean):
```tsx
client.enableCam(true);
const isOn = client.isCamEnabled;
```
Switch cameras with [`getAllCams()`](/api-reference/client/js/client-methods#getallcams) / [`updateCam()`](/api-reference/client/js/client-methods#updatecam). To render video, use the `DailyMediaManager`'s video rendering capabilities per the React Native Daily SDK docs.
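A common mobile pattern is flipping between the front and back cameras by cycling through the camera list. A sketch built on the same `getAllCams()` / `updateCam()` pair (the `nextCamera` helper is illustrative, not part of the SDK):

```tsx
// Given the full camera list and the current camera's deviceId, return the
// next camera in the list, wrapping around at the end. If the current id is
// not found, the first camera is returned. Illustrative helper only.
interface CamInfo {
  deviceId: string;
  label: string;
}

function nextCamera(cams: CamInfo[], currentId: string): CamInfo | undefined {
  if (cams.length === 0) return undefined;
  const idx = cams.findIndex((c) => c.deviceId === currentId);
  return cams[(idx + 1) % cams.length];
}

// Hypothetical usage: flip to the other camera.
//   const cams = await client.getAllCams();
//   const next = nextCamera(cams, currentCamId);
//   if (next) client.updateCam(next.deviceId);
```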
Audio routing (speaker vs. earpiece) is managed by the platform and `DailyMediaManager`. Use the React Native audio session APIs (e.g., `react-native-audio-session`) to control routing if needed.
</View>