Recently I worked on a project that used live-recorded and uploaded audio as a core feature. While the Expo SDK made working with React Native far more streamlined, the secondary task of uploading that audio to a cloud database proved more of a challenge.
There is a wealth of information online about working with image, video, and text files, but in my journey to complete this feature I found examples of audio operations far less common.
I tried all sorts of workarounds, and in many of those attempts I managed to upload images and videos successfully, but my attempts to make audio snippets appear in my cloud media database consistently failed.
Additionally, because we were using the Expo SDK to build a native Android app, mobile file system access as well as platform file types (Android records .m4a, whereas iOS records .caf) had to be taken into account. This added an extra layer of complexity to integrating the upload feature seamlessly without incorporating HTML, as seen in Cloudinary's Quickstart guide.
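As an aside, that platform split can be captured in a small helper. This is a hypothetical sketch of my own (the function name and the extension-to-MIME mapping are not part of Expo or Cloudinary) that picks an audio MIME type from a file URI's extension:

```javascript
// Hypothetical helper: derive an audio MIME type from a file URI's extension.
// The .m4a/.caf entries mirror the Android/iOS split described above.
function mimeTypeForAudio(uri) {
  const ext = uri.split('.').pop().toLowerCase();
  const mimeTypes = {
    m4a: 'audio/m4a',   // Android recordings
    caf: 'audio/x-caf', // iOS recordings
    mp3: 'audio/mpeg',
    wav: 'audio/wav'
  };
  // Fall back to a generic binary type for anything unrecognized
  return mimeTypes[ext] || 'application/octet-stream';
}
```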
The following Cloudinary POST request works perfectly in the browser; however, running this code in a mobile app results in an error because of how the app accesses a mobile device's file system: `event.target.files` comes back undefined, which is not the behavior we expect.
const CLOUDINARY_URL = 'https://api.cloudinary.com/v1_1/CLOUD_NAME/upload';
const defaultHeaders = {
  'Content-Type': 'application/x-www-form-urlencoded'
};

// In the browser, grab the first selected file from the input's change event
const file = event.target.files[0];

const formData = new FormData();
formData.append('file', file);
formData.append('upload_preset', CLOUDINARY_UPLOAD_PRESET);

axios({
  url: CLOUDINARY_URL,
  method: 'POST',
  headers: defaultHeaders,
  data: formData
})
  .then(res => console.log(res))
  .catch(err => console.log(err));
Finally, I hit a wall. With my deadline looming and the pressure rising, I threw caution to the wind and created my first Stack Overflow question. I'd initially avoided doing so (even though no other SO posts answered my question) because of how long I thought it would take to get a response.
this is my first Stack Overflow post so please go easy on me!
I'm building an audio recording app using EXPO as the SDK with React Native. One of the main features of the app is to be able to record live audio as well as uploading audio from…
But for once, I got lucky and within an hour I'd received some advice on how to alleviate the situation. I couldn't [redacted] believe it.
My SOS was answered in the most helpful manner, and in turn I wanted to write this to help anyone else who might be struggling in a similar situation. A few key things need to happen to successfully upload audio:
Permission-enabled access to the device's file system
Matched MIME types
A Base64-encoded local URI path
Let's jump into some code.
Create a New Asset
If you're just uploading audio from a device, you can skip to the next code snippet; if you're uploading live audio, the walkthrough starts here.
On a side note, there may be other, more elegant ways to do this, but this worked for me. This is implemented after creating a recording component with the Audio API in Expo. Once a recording is created, its file path is immediately available from the promise its function call returns. The getURI()
method is a way to use the recording's information immediately with other APIs and services.
First, we will utilize the MediaLibrary API to work with the newly generated recording:
Create a new recording (asset) that can be stored on the device
Create an Album for the assets to be stored in
Save the asset to the album in the device's Media Library
// Create a new asset from the recording's local URI
async createAudioAsset() {
  let newAsset = await MediaLibrary.createAssetAsync(this.recording.getURI());

  // Create an album on the device in which the recordings should be stored,
  // passing in the new asset to store
  return MediaLibrary.createAlbumAsync('Recordings', newAsset)
    .then(() => {
      console.log('Album created!');
      // Return the asset so the caller's .then() receives it
      return newAsset;
    })
    .catch(err => console.log('Album creation error', err));
}

async saveToPhoneLibrary() {
  // Call the function that creates a new (if not already existing) album,
  // then save the created asset to the phone's media library
  this.createAudioAsset()
    .then(asset => MediaLibrary.saveToLibraryAsync(asset))
    .catch(err => console.log('media library save asset err', err));
}
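A note on the chaining above: `createAudioAsset` has to return the asset, or the `.then(asset => …)` in `saveToPhoneLibrary` receives `undefined`. Here is a standalone sketch of that pattern with a stubbed MediaLibrary (the stub is mine, purely for illustration; it is not Expo's real implementation):

```javascript
// Stub standing in for Expo's MediaLibrary, purely to illustrate the chaining.
const MediaLibrary = {
  createAssetAsync: async (uri) => ({ uri }),
  createAlbumAsync: async (name, asset) => ({ name, assets: [asset] }),
  saveToLibraryAsync: async (asset) => `saved ${asset.uri}`
};

async function createAudioAsset(uri) {
  const newAsset = await MediaLibrary.createAssetAsync(uri);
  await MediaLibrary.createAlbumAsync('Recordings', newAsset);
  return newAsset; // returning the asset is what makes the next .then() work
}

async function saveToPhoneLibrary(uri) {
  // Because createAudioAsset returns the asset, .then receives it here
  return createAudioAsset(uri)
    .then(asset => MediaLibrary.saveToLibraryAsync(asset));
}
```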
Next, we'll access the file using the FileSystem and DocumentPicker APIs. This is also the place to start if you plan on uploading only previously saved audio.
async uploadRecFromPhone() {
  // Access the phone's files, making sure all file types are available to upload
  DocumentPicker.getDocumentAsync({
    type: '*/*',
    copyToCacheDirectory: true
  })
    .then(succ => {
      console.log(`Recording Information -- path: ${succ.uri},
        type: ${succ.type},
        size: ${succ.size}`);
    })
    .catch(err => console.log('error uploading from phone', err));
}
To upload the audio file, first convert the audio's URI into a Base64 string. Here's a working Base64 object to handle the encoding:
var Base64 = {
  // private property: the base64 alphabet
  _keyStr: "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=",

  // public method for encoding
  encode: function (input) {
    var output = "";
    var chr1, chr2, chr3, enc1, enc2, enc3, enc4;
    var i = 0;
    input = Base64._utf8_encode(input);
    while (i < input.length) {
      chr1 = input.charCodeAt(i++);
      chr2 = input.charCodeAt(i++);
      chr3 = input.charCodeAt(i++);
      enc1 = chr1 >> 2;
      enc2 = ((chr1 & 3) << 4) | (chr2 >> 4);
      enc3 = ((chr2 & 15) << 2) | (chr3 >> 6);
      enc4 = chr3 & 63;
      if (isNaN(chr2)) {
        enc3 = enc4 = 64;
      } else if (isNaN(chr3)) {
        enc4 = 64;
      }
      output = output +
        this._keyStr.charAt(enc1) + this._keyStr.charAt(enc2) +
        this._keyStr.charAt(enc3) + this._keyStr.charAt(enc4);
    }
    return output;
  },

  // public method for decoding
  decode: function (input) {
    var output = "";
    var chr1, chr2, chr3;
    var enc1, enc2, enc3, enc4;
    var i = 0;
    input = input.replace(/[^A-Za-z0-9\+\/\=]/g, "");
    while (i < input.length) {
      enc1 = this._keyStr.indexOf(input.charAt(i++));
      enc2 = this._keyStr.indexOf(input.charAt(i++));
      enc3 = this._keyStr.indexOf(input.charAt(i++));
      enc4 = this._keyStr.indexOf(input.charAt(i++));
      chr1 = (enc1 << 2) | (enc2 >> 4);
      chr2 = ((enc2 & 15) << 4) | (enc3 >> 2);
      chr3 = ((enc3 & 3) << 6) | enc4;
      output = output + String.fromCharCode(chr1);
      if (enc3 != 64) {
        output = output + String.fromCharCode(chr2);
      }
      if (enc4 != 64) {
        output = output + String.fromCharCode(chr3);
      }
    }
    output = Base64._utf8_decode(output);
    return output;
  },

  // private method for UTF-8 encoding
  _utf8_encode: function (string) {
    string = string.replace(/\r\n/g, "\n");
    var utftext = "";
    for (var n = 0; n < string.length; n++) {
      var c = string.charCodeAt(n);
      if (c < 128) {
        utftext += String.fromCharCode(c);
      } else if ((c > 127) && (c < 2048)) {
        utftext += String.fromCharCode((c >> 6) | 192);
        utftext += String.fromCharCode((c & 63) | 128);
      } else {
        utftext += String.fromCharCode((c >> 12) | 224);
        utftext += String.fromCharCode(((c >> 6) & 63) | 128);
        utftext += String.fromCharCode((c & 63) | 128);
      }
    }
    return utftext;
  },

  // private method for UTF-8 decoding
  _utf8_decode: function (utftext) {
    var string = "";
    var i = 0;
    var c = 0, c2 = 0, c3 = 0; // declared locally to avoid implicit globals
    while (i < utftext.length) {
      c = utftext.charCodeAt(i);
      if (c < 128) {
        string += String.fromCharCode(c);
        i++;
      } else if ((c > 191) && (c < 224)) {
        c2 = utftext.charCodeAt(i + 1);
        string += String.fromCharCode(((c & 31) << 6) | (c2 & 63));
        i += 2;
      } else {
        c2 = utftext.charCodeAt(i + 1);
        c3 = utftext.charCodeAt(i + 2);
        string += String.fromCharCode(((c & 15) << 12) | ((c2 & 63) << 6) | (c3 & 63));
        i += 3;
      }
    }
    return string;
  }
}
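Before wiring this into the upload, it's worth a quick sanity check. For plain ASCII input such as a file URI, the object's output should match any standard Base64 encoder; the snippet below uses Node's built-in Buffer as the reference (this check is my own addition, not part of the original walkthrough):

```javascript
// Sanity check (Node): for ASCII input such as a file URI, Base64.encode
// above should agree with Node's standard base64 encoding of the same string.
const uri = 'file:///recordings/clip.m4a';
const expected = Buffer.from(uri, 'utf8').toString('base64');
console.log(expected); // the string Base64.encode(uri) should match
```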
Now that we have a Base64 object, we can use it to encode the URI path to send across the internet.
// Call the `encode` method on the local URI that DocumentPicker returns
const cloudUri = Base64.encode(succ.uri);

// This prefix lets Cloudinary know what MIME type is being sent
let base64Aud = `data:audio/mpeg;base64,${cloudUri}`;

// Attach the recording to a FormData object
let fd = new FormData();
fd.append("file", base64Aud);
fd.append("upload_preset", process.env.UPLOAD_PRESET);
// Cloudinary handles audio under its "video" resource type
fd.append("resource_type", "video");

// Note the backticks: a template literal is required here so that
// ${process.env.CLOUD_NAME} actually interpolates
fetch(`https://api.cloudinary.com/v1_1/${process.env.CLOUD_NAME}/upload`, {
  method: 'POST',
  body: fd,
})
  .then(async (response) => {
    let recordingURL = await response.json();
    console.log('Cloudinary Info:', recordingURL);
    return recordingURL;
  })
  .catch(err => console.log('cloudinary err', err));
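The assembly above boils down to prefixing a Base64 string with a data-URI header. Here is a minimal standalone sketch of that step, with Node's Buffer standing in for the hand-rolled Base64 object (both emit standard base64; the `toAudioDataUri` name is my own, purely for illustration):

```javascript
// Minimal sketch: build the `data:` URI string the upload above sends.
// Buffer stands in for the hand-rolled Base64 object from the article.
function toAudioDataUri(input) {
  const encoded = Buffer.from(input, 'utf8').toString('base64');
  // The prefix tells the receiving service the payload's MIME type
  return `data:audio/mpeg;base64,${encoded}`;
}

console.log(toAudioDataUri('file:///recordings/clip.m4a'));
```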
If you already have a Base64 object or string and just want to test that your audio transmissions are working, the kind soul who responded to my initial request for help had previously created a working JSFiddle for checking that your Base64 generates the results you expect. It was incredibly helpful in debugging, so I'm happy to share it here as well.
Just replace the Base64 string with the one you're generating and put in your Cloudinary API information. This tool was really easy to use and is a great way to test your code quickly!
Top comments (9)
Hi, thank you for this. I'm encountering this error in the saveToPhoneLibrary function: media library save asset err [Error: This file type is not supported yet]
I'm using expo-av to record and call this function as soon as the recording ends on the phone. How did you go about resolving this?
Hey William, are you developing for Android or iOS? You may have trouble with the iOS audio filetype because the extensions are different and iOS file/simulator isn't available yet, a very unfortunate nuance.
Thank you for replying. I'm writing a React Native app, currently using iOS. I used expo-av to record the audio, and the file was recorded as .caf, which cannot be stored in the album (at least I don't think so). I then tried to record it as an MP4 file with MPEG4AAC encoding. The error for saveToPhoneLibrary() then became "[Error: Asset couldn't be saved to photo library]".
Did you encounter the same problem? Sounds like you were developing on Android so there's no issue for you.
That's exactly the case, William; my team and I discovered this issue when testing our app on both an Android and an iPhone. We were able to save and retrieve our Android audio files at will for our audio recording app, but not on an iPhone, and that's when we discovered .caf is an incompatible file type. As far as I know, Expo hasn't had an update that supports iOS file types, which is a bummer.
Hi. Do you know if they support it now?
I'm sorry, I'm not sure; I haven't looked into the Expo docs recently.
Edit:
My curiosity got the better of me, and y'all's questions have inspired me to bring this project back to life, so I will continue to keep this updated. As of 07/24/2020, Expo's audio recording feature is still incompatible with the iOS Simulator, so essentially this still only works for Android audio files. Check out the Audio Recording Docs for more insight.
Thanks for detailing your experience. I appreciate your efforts to help others.
Let me describe my case: I am new to React Native but built an app with the Expo SDK, and we want to use an audio recording feature and upload to S3 storage for accessing the audio again.
If audio recording doesn't work in the iOS Simulator, is there any other way I can test it with iOS devices? Like on Android, where we connect through a USB cable and test, or install an APK and test?
Or does it mean we can't use Expo for this feature at all on iOS devices?
I have been testing our app on Android only so far; if iOS doesn't support this at all, then we have to change the feature of the app itself.
Please suggest.
Hi,
I am implementing the same functionality in my project and facing some trouble while doing so. Could you please share the code, from recording through uploading?
Hi Ahmad, I'm sorry for the late reply, you can check out the code here: github.com/threeofcups/aloud/blob/...