The Android Client SDK 3.8.0 introduces a new capability that lets you access the local participant's processed audio before it is sent to a conference. This blog post demonstrates how to integrate the Android SDK with Hive.ai to build a moderation capability in a Dolby.io conference. The application keeps the last 10 seconds of captured audio in a buffer and sends it on demand by the participant; it then displays the returned transcription and moderation scores.
The completed demo project is available on GitHub.
The Android sample application is built using Kotlin with the Jetpack Compose UI toolkit. Additionally, we use the following external dependencies:
- Hilt for dependency injection
- Retrofit and OkHttp libraries to make REST API calls
- Moshi for parsing JSON to Kotlin classes
Requirements
Make sure that you have:
- A client access token copied from your Dolby.io account. If you do not have an account, follow these steps:
  - Sign up for a free Dolby.io account.
  - Go to the Communications & Media dashboard.
  - Create a new app using the '+ Create App' button and give your application a name, for example, hive.ai-sample.
  - Select the API Keys link located next to your application.
  - Copy your client access token.

  For more details, see the Accessing Dolby.io Platform document.

- An API key copied from Hive.ai, which you can obtain by following these steps:
  - Sign up for a Hive.ai account.
  - Request a Hive moderation demo.
  - Hive.ai support will contact you and grant you access to the demo.
  - Create a new project and copy your API key.

  For more details, see the Hive moderation documentation.


Integrate the Android SDK with Hive.ai
To create a sample audio moderator application, we will capture audio samples and then send the buffered samples to Hive.ai.
Capture audio samples
First, open the Android sample application and look at the ObserveLocalAudioSamplesUseCase class, which implements a LocalInputAudioCallback instance and registers it using registerLocalInputAudioCallback on VoxeetSDK.audio().getLocal(). Using this API from Java is straightforward, but in Kotlin the most idiomatic approach is to convert the callback into a Kotlin Flow, as in the following example:
class LocalAudioSamplesCollector {
    // We use a backing property of type MutableSharedFlow in a class to send items to the flow
    private val _localAudioSamples = MutableSharedFlow<LocalInputAudioSamples>(replay = 1)
    private val localAudioSamples = _localAudioSamples.asSharedFlow()
    private lateinit var job: Job

    init {
        VoxeetSDK.audio().local.registerLocalInputAudioCallback {
            // We use GlobalScope because we want to operate on the whole application lifetime,
            // but you can limit the scope to whatever suits your needs.
            GlobalScope.launch {
                // Each time we receive a callback, we emit a value on the flow,
                // which is delivered to all collectors.
                _localAudioSamples.emit(it)
            }
        }
    }

    fun collectSamples() {
        job = CoroutineScope(Dispatchers.IO).launch {
            // Listen for audio samples data
            localAudioSamples.collect { actualSamples ->
                // ...
            }
        }
    }

    fun stop() {
        if (::job.isInitialized) {
            job.cancel()
        }
    }
}
For more information, see the Accessing the Local Audio Stream guide.
After joining a conference, the application will start receiving audio samples and save them to a circular buffer. To simplify the application, we store only the last 10 seconds of audio.
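The circular buffer itself can be a simple structure. The following is a minimal sketch, not the sample application's actual implementation; the 44.1 kHz sample rate and 16-bit mono ShortArray frames are assumptions, so match them to the format that LocalInputAudioSamples actually delivers:

```kotlin
import java.util.ArrayDeque

// A minimal circular buffer that keeps only the most recent 10 seconds
// of 16-bit mono PCM audio. The 44.1 kHz default is an assumption; use
// whatever sample rate the SDK actually delivers.
class AudioRingBuffer(
    sampleRate: Int = 44_100,
    seconds: Int = 10
) {
    private val capacity = sampleRate * seconds
    private val frames = ArrayDeque<ShortArray>()
    private var storedSamples = 0

    @Synchronized
    fun append(frame: ShortArray) {
        frames.addLast(frame)
        storedSamples += frame.size
        // Drop the oldest frames once we exceed the 10-second capacity.
        while (storedSamples > capacity && frames.size > 1) {
            storedSamples -= frames.removeFirst().size
        }
    }

    // Flatten the buffered frames into one array, oldest samples first.
    @Synchronized
    fun snapshot(): ShortArray {
        val out = ShortArray(storedSamples)
        var offset = 0
        for (frame in frames) {
            frame.copyInto(out, offset)
            offset += frame.size
        }
        return out
    }
}
```

Each frame emitted by the flow would be passed to append, and snapshot would be called when the participant requests moderation.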
Send the buffered audio samples to Hive.ai
Now, we need to use a synchronous endpoint to submit a request and wait for the result in JSON format. The application sends the multimedia file created from the buffered audio samples, in our case a WAV file, to Hive as a parameter, then parses the received result and displays the values returned by the service on screen. The following example presents an audio moderation request implementation:
// Hive.ai API definition
interface AudioModerationApi {
    @POST("api/v2/task/sync")
    @Headers("accept: application/json")
    suspend fun moderate(
        @Header("authorization") authorization: String,
        @Body body: RequestBody
    ): ModerationResponse
}

// ...

class AudioModeration constructor(
    private val audioModerationApi: AudioModerationApi
) {
    suspend fun getModeration(waveFile: File): ModerationResponse {
        // See https://docs.thehive.ai/reference/submit-a-task-synchronously
        // for the Hive documentation
        val body = waveFile.asRequestBody("audio/*".toMediaTypeOrNull())
        val requestBody: RequestBody = MultipartBody.Builder()
            .setType(MultipartBody.FORM)
            .addFormDataPart("media", waveFile.name, body)
            .build()
        return audioModerationApi.moderate("token " + Configuration.HIVE_API_KEY, requestBody)
    }
}
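Before calling getModeration, the buffered PCM samples must be written out as a WAV file. A WAV file is simply a 44-byte RIFF/WAVE header followed by the raw PCM data. The helper below is a sketch under the assumption of 16-bit mono PCM; writeWaveFile is a hypothetical name, not part of the SDK, and the sample rate must match the captured audio:

```kotlin
import java.io.File
import java.nio.ByteBuffer
import java.nio.ByteOrder

// Writes 16-bit mono PCM samples as a WAV file by prepending the 44-byte
// RIFF/WAVE header. The 44.1 kHz default is an assumption; match the rate
// of the samples delivered by the SDK.
fun writeWaveFile(samples: ShortArray, file: File, sampleRate: Int = 44_100) {
    val channels = 1
    val bitsPerSample = 16
    val byteRate = sampleRate * channels * bitsPerSample / 8
    val dataSize = samples.size * 2
    val buffer = ByteBuffer.allocate(44 + dataSize).order(ByteOrder.LITTLE_ENDIAN)
    buffer.put("RIFF".toByteArray())
    buffer.putInt(36 + dataSize)                              // remaining chunk size
    buffer.put("WAVE".toByteArray())
    buffer.put("fmt ".toByteArray())
    buffer.putInt(16)                                         // PCM fmt chunk size
    buffer.putShort(1)                                        // audio format: PCM
    buffer.putShort(channels.toShort())
    buffer.putInt(sampleRate)
    buffer.putInt(byteRate)
    buffer.putShort((channels * bitsPerSample / 8).toShort()) // block align
    buffer.putShort(bitsPerSample.toShort())
    buffer.put("data".toByteArray())
    buffer.putInt(dataSize)
    for (s in samples) buffer.putShort(s)
    file.writeBytes(buffer.array())
}
```

The resulting file can then be handed directly to getModeration, which wraps it in a multipart request for Hive.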
Run the application
To run the sample application, follow these steps:
- Clone the sample code from GitHub.
- Open the project in Android Studio.
- Update the file Configuration.kt with your Dolby.io client access token and Hive API key.
- Build the project and run it on your device.
Examples of the running application:


Examples of transcription and detected bullying and violent sentences:


To learn more, see our Android SDK Documentation as well as this blog on Android Voice Calls.
Stay tuned for part 2, where we will discuss how to do this on iOS!