While Flutter supports six platforms — Windows, Linux, macOS, Android, iOS, and web — the official Flutter camera plugin covers only three: Android, iOS, and web. Development of the official plugin has been slow, with no clear roadmap for desktop support. As a result, building a cross-platform barcode scanner that spans all six platforms remains unfeasible with the official plugin alone. To solve this problem, we will design and implement a desktop Flutter camera plugin from scratch, reusing our C++ litecam project.
Desktop Flutter Multi-Barcode Scanner Demo
Scaffolding a Flutter Plugin Project for Windows, Linux, and macOS
Run the following command to create a new Flutter plugin project named `flutter_lite_camera`. This project will include platform-specific folders and initial code for Windows, Linux, and macOS:
```bash
flutter create --org com.example --template=plugin --platforms=linux,windows,macos flutter_lite_camera
```
The plugin language for both Windows and Linux is C++, while macOS uses Swift. The C++ source code from the litecam project can therefore be reused for Windows and Linux; for macOS, we need to port the Objective-C logic to Swift.
Defining the API for Desktop Camera Functionality
When using litecam, the camera feed is typically displayed in a system window. However, since Flutter has its own UI system, we need to render captured frames within a Flutter widget. To accomplish this, we define four essential functions:
```dart
class FlutterLiteCamera {
  Future<List<String>> getDeviceList() {
    return FlutterLiteCameraPlatform.instance.getDeviceList();
  }

  Future<bool> open(int index) {
    return FlutterLiteCameraPlatform.instance.open(index);
  }

  Future<Map<String, dynamic>> captureFrame() {
    return FlutterLiteCameraPlatform.instance.captureFrame();
  }

  Future<void> release() {
    return FlutterLiteCameraPlatform.instance.release();
  }
}
```
Explanation:

- `getDeviceList()`: Retrieves a list of available camera devices.
- `open(int index)`: Opens the camera device with the specified index.
- `captureFrame()`: Captures a frame from the camera feed. The returned map contains the frame's width, height, and pixel data in RGB888 format.
- `release()`: Releases the camera device, freeing up resources.
Implementing the Native Interface for Desktop Platforms
The size of the captured frame is restricted to 640x480 with an RGB888 pixel format across all platforms. This restriction reduces the amount of data transferred between the native and Flutter layers, as well as the processing time required to render the frame in a Flutter widget.
Windows
- Copy the `Camera.h` and `CameraWindows.cpp` files from the litecam project to the `windows` folder.
- Update the `CMakeLists.txt` file to include the `CameraWindows.cpp` file and link the required libraries:
```cmake
...
add_library(${PLUGIN_NAME} SHARED
  "include/flutter_lite_camera/flutter_lite_camera_plugin_c_api.h"
  "flutter_lite_camera_plugin_c_api.cpp"
  "CameraWindows.cpp"
  ${PLUGIN_SOURCES}
)
...
target_link_libraries(${PLUGIN_NAME} PRIVATE flutter flutter_wrapper_plugin)
target_link_libraries(${PLUGIN_NAME} PRIVATE ole32 uuid mfplat mf mfreadwrite mfuuid)
...
```
- Implement the method channel logic in `flutter_lite_camera_plugin.cpp`:
```cpp
#include "flutter_lite_camera_plugin.h"

#include <windows.h>
#include <VersionHelpers.h>

#include <flutter/method_channel.h>
#include <flutter/plugin_registrar_windows.h>
#include <flutter/standard_method_codec.h>

#include <memory>
#include <sstream>
#include <codecvt>

namespace flutter_lite_camera {

...

FlutterLiteCameraPlugin::FlutterLiteCameraPlugin() {
  camera = new Camera();
}

FlutterLiteCameraPlugin::~FlutterLiteCameraPlugin() {
  delete camera;
}

void FlutterLiteCameraPlugin::HandleMethodCall(
    const flutter::MethodCall<flutter::EncodableValue> &method_call,
    std::unique_ptr<flutter::MethodResult<flutter::EncodableValue>> result) {
  if (method_call.method_name().compare("getDeviceList") == 0) {
    std::vector<CaptureDeviceInfo> devices = ListCaptureDevices();
    flutter::EncodableList deviceList;

    for (size_t i = 0; i < devices.size(); i++) {
      CaptureDeviceInfo &device = devices[i];
      std::wstring wstr(device.friendlyName);
      // Convert the UTF-16 friendly name to UTF-8 for the method channel.
      int size_needed = WideCharToMultiByte(CP_UTF8, 0, wstr.c_str(), (int)wstr.size(),
                                            NULL, 0, NULL, NULL);
      std::string utf8Str(size_needed, 0);
      WideCharToMultiByte(CP_UTF8, 0, wstr.c_str(), (int)wstr.size(),
                          &utf8Str[0], size_needed, NULL, NULL);
      deviceList.push_back(flutter::EncodableValue(utf8Str));
    }

    result->Success(flutter::EncodableValue(deviceList));
  } else if (method_call.method_name().compare("open") == 0) {
    const auto *arguments = std::get_if<flutter::EncodableList>(method_call.arguments());
    if (arguments && !arguments->empty()) {
      int index = std::get<int>((*arguments)[0]);
      bool success = camera->Open(index);
      result->Success(flutter::EncodableValue(success));
    } else {
      result->Error("InvalidArguments", "Expected camera index");
    }
  } else if (method_call.method_name().compare("captureFrame") == 0) {
    FrameData frame = camera->CaptureFrame();

    if (frame.rgbData) {
      flutter::EncodableMap frameMap;
      frameMap[flutter::EncodableValue("width")] = flutter::EncodableValue(frame.width);
      frameMap[flutter::EncodableValue("height")] = flutter::EncodableValue(frame.height);
      frameMap[flutter::EncodableValue("data")] = flutter::EncodableValue(
          std::vector<uint8_t>(frame.rgbData, frame.rgbData + frame.size));

      ReleaseFrame(frame);
      result->Success(flutter::EncodableValue(frameMap));
    } else {
      result->Error("CaptureFailed", "Failed to capture frame");
    }
  } else if (method_call.method_name().compare("release") == 0) {
    camera->Release();
    result->Success();
  } else {
    result->NotImplemented();
  }
}

}  // namespace flutter_lite_camera
```
Linux
- Copy the `Camera.h` and `CameraLinux.cpp` files from the litecam project to the `linux` folder.
- Update the `CMakeLists.txt` file to include the `CameraLinux.cpp` file and link the required libraries:
```cmake
...
add_library(${PLUGIN_NAME} SHARED
  "CameraLinux.cpp"
  ${PLUGIN_SOURCES}
)
...
target_link_libraries(${PLUGIN_NAME} PRIVATE flutter)
target_link_libraries(${PLUGIN_NAME} PRIVATE PkgConfig::GTK)
...
```
- Implement the method channel logic in `flutter_lite_camera_plugin.cc`:
```cpp
#include "include/Camera.h"
#include "include/flutter_lite_camera/flutter_lite_camera_plugin.h"

#include <flutter_linux/flutter_linux.h>
#include <gtk/gtk.h>
#include <sys/utsname.h>
#include <cstring>

#include "flutter_lite_camera_plugin_private.h"

#define FLUTTER_LITE_CAMERA_PLUGIN(obj)                                     \
  (G_TYPE_CHECK_INSTANCE_CAST((obj), flutter_lite_camera_plugin_get_type(), \
                              FlutterLiteCameraPlugin))

struct _FlutterLiteCameraPlugin {
  GObject parent_instance;
  Camera *camera;
};

G_DEFINE_TYPE(FlutterLiteCameraPlugin, flutter_lite_camera_plugin, g_object_get_type())

static void flutter_lite_camera_plugin_handle_method_call(
    FlutterLiteCameraPlugin *self, FlMethodCall *method_call) {
  g_autoptr(FlMethodResponse) response = nullptr;
  const gchar *method = fl_method_call_get_name(method_call);

  if (strcmp(method, "getDeviceList") == 0) {
    std::vector<CaptureDeviceInfo> devices = ListCaptureDevices();
    FlValue *deviceList = fl_value_new_list();

    for (const auto &device : devices) {
      FlValue *deviceName = fl_value_new_string(device.friendlyName);
      fl_value_append_take(deviceList, deviceName);
    }

    response = FL_METHOD_RESPONSE(fl_method_success_response_new(deviceList));
  } else if (strcmp(method, "open") == 0) {
    FlValue *args = fl_method_call_get_args(method_call);
    FlValue *index = fl_value_get_list_value(args, 0);

    if (index) {
      int index_int = fl_value_get_int(index);
      bool success = self->camera->Open(index_int);
      response = FL_METHOD_RESPONSE(
          fl_method_success_response_new(fl_value_new_bool(success)));
    } else {
      response = FL_METHOD_RESPONSE(fl_method_error_response_new(
          "INVALID_ARGUMENTS", "Expected camera index", nullptr));
    }
  } else if (strcmp(method, "captureFrame") == 0) {
    FrameData frame = self->camera->CaptureFrame();

    if (frame.rgbData == nullptr) {
      response = FL_METHOD_RESPONSE(fl_method_error_response_new(
          "CAPTURE_FAILED", "No frame data available", nullptr));
    } else {
      FlValue *frameData = fl_value_new_map();
      fl_value_set_take(frameData, fl_value_new_string("width"),
                        fl_value_new_int(frame.width));
      fl_value_set_take(frameData, fl_value_new_string("height"),
                        fl_value_new_int(frame.height));

      FlValue *rgbData = fl_value_new_uint8_list(frame.rgbData, frame.size);
      fl_value_set_take(frameData, fl_value_new_string("data"), rgbData);

      ReleaseFrame(frame);
      response = FL_METHOD_RESPONSE(fl_method_success_response_new(frameData));
    }
  } else if (strcmp(method, "release") == 0) {
    self->camera->Release();
    response = FL_METHOD_RESPONSE(fl_method_success_response_new(nullptr));
  } else {
    response = FL_METHOD_RESPONSE(fl_method_not_implemented_response_new());
  }

  if (response == nullptr) {
    response = FL_METHOD_RESPONSE(fl_method_error_response_new(
        "INTERNAL_ERROR", "Unexpected error", nullptr));
  }

  fl_method_call_respond(method_call, response, nullptr);
}

static void flutter_lite_camera_plugin_dispose(GObject *object) {
  FlutterLiteCameraPlugin *self = FLUTTER_LITE_CAMERA_PLUGIN(object);
  delete self->camera;
  G_OBJECT_CLASS(flutter_lite_camera_plugin_parent_class)->dispose(object);
}

static void flutter_lite_camera_plugin_init(FlutterLiteCameraPlugin *self) {
  self->camera = new Camera();
}

...
```
macOS
- Create a `CameraManager.swift` file to implement the camera functionality:
```swift
import AVFoundation
import Accelerate
import FlutterMacOS
import Foundation

class CameraManager: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
  private var captureSession: AVCaptureSession?
  private var videoOutput: AVCaptureVideoDataOutput?
  private var captureDevice: AVCaptureDevice?
  private var frameWidth: Int = 640
  private var frameHeight: Int = 480
  private let frameQueue = DispatchQueue(
    label: "com.flutter_lite_camera.frameQueue", attributes: .concurrent)

  private var _currentFrame: FrameData?
  var currentFrame: FrameData? {
    get {
      return frameQueue.sync { _currentFrame }
    }
    set {
      frameQueue.async(flags: .barrier) {
        self._currentFrame = newValue
      }
    }
  }

  struct FrameData {
    var width: Int
    var height: Int
    var rgbData: Data
  }

  override init() {
    super.init()
  }

  func listDevices() -> [String] {
    let devices = AVCaptureDevice.devices().filter { $0.hasMediaType(.video) }
    return devices.map { $0.localizedName }
  }

  func open(cameraIndex: Int) -> Bool {
    guard cameraIndex < AVCaptureDevice.devices(for: .video).count else {
      print("Camera index out of range.")
      return false
    }

    let devices = AVCaptureDevice.devices(for: .video)
    self.captureDevice = devices[cameraIndex]

    do {
      let input = try AVCaptureDeviceInput(device: self.captureDevice!)
      self.captureSession = AVCaptureSession()
      self.captureSession?.beginConfiguration()

      if self.captureSession?.canAddInput(input) == true {
        self.captureSession?.addInput(input)
      } else {
        print("Cannot add input to session.")
        return false
      }

      // Prefer a 1280x720 capture format; frames are scaled down to 640x480
      // in captureOutput below.
      if let format = self.captureDevice?.formats.first(where: {
        let dimensions = CMVideoFormatDescriptionGetDimensions($0.formatDescription)
        return dimensions.width == 1280 && dimensions.height == 720
      }) {
        try self.captureDevice?.lockForConfiguration()
        self.captureDevice?.activeFormat = format
        self.captureDevice?.unlockForConfiguration()
        print("Resolution set to 1280x720")
      } else {
        print("1280x720 resolution not supported")
      }

      self.videoOutput = AVCaptureVideoDataOutput()
      self.videoOutput?.videoSettings = [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
      ]
      self.videoOutput?.alwaysDiscardsLateVideoFrames = true

      if self.captureSession?.canAddOutput(self.videoOutput!) == true {
        self.captureSession?.addOutput(self.videoOutput!)
        self.videoOutput?.setSampleBufferDelegate(
          self, queue: DispatchQueue.global(qos: .userInteractive))
      } else {
        print("Cannot add video output to session.")
        return false
      }

      self.captureSession?.commitConfiguration()
      self.captureSession?.startRunning()
      return true
    } catch {
      print("Error initializing camera: \(error.localizedDescription)")
      return false
    }
  }

  func captureFrame() -> FrameData? {
    guard let frame = currentFrame else {
      return nil
    }
    return frame
  }

  func release() {
    self.captureSession?.stopRunning()
    self.captureSession = nil
    self.videoOutput = nil
    self.captureDevice = nil
  }

  func captureOutput(
    _ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer,
    from connection: AVCaptureConnection
  ) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
      print("Failed to get pixel buffer.")
      return
    }

    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)

    let sourceWidth = CVPixelBufferGetWidth(pixelBuffer)
    let sourceHeight = CVPixelBufferGetHeight(pixelBuffer)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)
    let baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer)

    guard let baseAddress = baseAddress else {
      print("Failed to get base address.")
      CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly)
      return
    }

    // Scale the source frame to fit inside 640x480 while preserving aspect ratio.
    let targetWidth = 640
    let targetHeight = 480
    let aspectRatio = CGFloat(sourceWidth) / CGFloat(sourceHeight)
    let scaledWidth = Int(min(CGFloat(targetWidth), CGFloat(targetHeight) * aspectRatio))
    let scaledHeight = Int(CGFloat(scaledWidth) / aspectRatio)

    var sourceBuffer = vImage_Buffer(
      data: baseAddress,
      height: vImagePixelCount(sourceHeight),
      width: vImagePixelCount(sourceWidth),
      rowBytes: bytesPerRow
    )

    let targetBytesPerRowRGBA = scaledWidth * 4  // RGBA
    var resizedRGBAData = Data(count: scaledHeight * targetBytesPerRowRGBA)

    resizedRGBAData.withUnsafeMutableBytes { resizedPointer in
      var destinationBuffer = vImage_Buffer(
        data: resizedPointer.baseAddress!,
        height: vImagePixelCount(scaledHeight),
        width: vImagePixelCount(scaledWidth),
        rowBytes: targetBytesPerRowRGBA
      )

      vImageScale_ARGB8888(
        &sourceBuffer, &destinationBuffer, nil, vImage_Flags(kvImageNoFlags))
    }

    // Drop the alpha channel and letterbox the scaled frame into a 640x480 RGB buffer.
    let targetBytesPerRowRGB = targetWidth * 3
    var paddedRGBData = Data(count: targetHeight * targetBytesPerRowRGB)

    paddedRGBData.withUnsafeMutableBytes { paddedPointer in
      resizedRGBAData.withUnsafeBytes { rgbaPointer in
        let rgbaBytes = rgbaPointer.baseAddress!.assumingMemoryBound(to: UInt8.self)
        let paddedBytes = paddedPointer.baseAddress!.assumingMemoryBound(to: UInt8.self)

        let paddingX = (targetWidth - scaledWidth) / 2
        let paddingY = (targetHeight - scaledHeight) / 2

        for y in 0..<scaledHeight {
          for x in 0..<scaledWidth {
            let srcIndex = (y * targetBytesPerRowRGBA) + (x * 4)
            let dstIndex = ((y + paddingY) * targetBytesPerRowRGB) + ((x + paddingX) * 3)

            paddedBytes[dstIndex] = rgbaBytes[srcIndex]          // R
            paddedBytes[dstIndex + 1] = rgbaBytes[srcIndex + 1]  // G
            paddedBytes[dstIndex + 2] = rgbaBytes[srcIndex + 2]  // B
          }
        }
      }
    }

    CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly)

    let paddedFrame = FrameData(
      width: targetWidth, height: targetHeight, rgbData: paddedRGBData)
    self.currentFrame = paddedFrame
    frameWidth = targetWidth
    frameHeight = targetHeight
  }
}
```
- Edit `flutter_lite_camera.podspec` to link the required frameworks:
```ruby
Pod::Spec.new do |s|
  ...
  s.frameworks = ['AVFoundation', 'CoreMedia', 'CoreVideo']
  ...
end
```
- Implement the method channel logic in `FlutterLiteCameraPlugin.swift`:
```swift
import Cocoa
import FlutterMacOS

public class FlutterLiteCameraPlugin: NSObject, FlutterPlugin {
  private let cameraManager = CameraManager()

  public static func register(with registrar: FlutterPluginRegistrar) {
    let channel = FlutterMethodChannel(
      name: "flutter_lite_camera", binaryMessenger: registrar.messenger)
    let instance = FlutterLiteCameraPlugin()
    registrar.addMethodCallDelegate(instance, channel: channel)
  }

  public func handle(_ call: FlutterMethodCall, result: @escaping FlutterResult) {
    switch call.method {
    case "getDeviceList":
      result(cameraManager.listDevices())
    case "open":
      if let args = call.arguments as? [Int], let index = args.first {
        result(cameraManager.open(cameraIndex: index))
      } else {
        result(FlutterError(code: "INVALID_ARGUMENT", message: "Index required", details: nil))
      }
    case "captureFrame":
      if let frame = cameraManager.captureFrame() {
        result([
          "width": frame.width,
          "height": frame.height,
          "data": frame.rgbData,
        ])
      } else {
        result(FlutterError(code: "CAPTURE_FAILED", message: "No frame available", details: nil))
      }
    case "release":
      cameraManager.release()
      result(nil)
    default:
      result(FlutterMethodNotImplemented)
    }
  }
}
```
Building a Flutter Application to Display Camera Feed
To display the camera feed in a Flutter application:
- Use a recursive `Future.delayed` loop to repeatedly capture frames from the camera and update the UI. A 30 ms delay caps the preview at roughly 33 frames per second.
```dart
Future<void> _startCamera() async {
  try {
    List<String> devices = await _flutterLiteCameraPlugin.getDeviceList();
    if (devices.isNotEmpty) {
      bool opened = await _flutterLiteCameraPlugin.open(0);
      if (opened) {
        setState(() {
          _isCameraOpened = true;
          _shouldCapture = true;
        });

        _isCapturing = true;
        _captureFrames();
      } else {
        // Failed to open the camera; leave the UI in its idle state.
      }
    }
  } catch (e) {
    // Ignore startup errors; the UI simply stays idle.
  }
}

Future<void> _captureFrames() async {
  if (!_isCameraOpened || !_shouldCapture) return;

  try {
    Map<String, dynamic> frame = await _flutterLiteCameraPlugin.captureFrame();
    if (frame.containsKey('data')) {
      Uint8List rgbBuffer = frame['data'];
      await _convertBufferToImage(rgbBuffer, frame['width'], frame['height']);
    }
  } catch (e) {
    // Skip this frame and try again on the next tick.
  }

  if (_shouldCapture) {
    Future.delayed(const Duration(milliseconds: 30), _captureFrames);
  }
}
```
- Convert the raw pixel data into a `ui.Image` object for rendering in the Flutter widget tree.
```dart
ui.Image? _latestFrame;

Future<void> _convertBufferToImage(
    Uint8List rgbBuffer, int width, int height) async {
  final pixels = Uint8List(width * height * 4);

  for (int i = 0; i < width * height; i++) {
    int r = rgbBuffer[i * 3];
    int g = rgbBuffer[i * 3 + 1];
    int b = rgbBuffer[i * 3 + 2];

    // Channels are written B, G, R here; swap to r, g, b if colors
    // appear inverted on your platform.
    pixels[i * 4] = b;
    pixels[i * 4 + 1] = g;
    pixels[i * 4 + 2] = r;
    pixels[i * 4 + 3] = 255; // opaque alpha
  }

  final completer = Completer<ui.Image>();
  ui.decodeImageFromPixels(
    pixels,
    width,
    height,
    ui.PixelFormat.rgba8888,
    completer.complete,
  );

  final image = await completer.future;

  setState(() {
    _latestFrame = image;
  });
}
```
- Use a `CustomPaint` widget to render the `ui.Image` object on the screen.
```dart
class FramePainter extends CustomPainter {
  final ui.Image image;

  FramePainter(this.image);

  @override
  void paint(Canvas canvas, Size size) {
    final paint = Paint();
    canvas.drawImage(image, Offset.zero, paint);
  }

  @override
  bool shouldRepaint(covariant CustomPainter oldDelegate) => true;
}

return Center(
  child: CustomPaint(
    painter: FramePainter(_latestFrame!),
    child: SizedBox(
      width: 640,
      height: 480,
    ),
  ),
);
```
Integrating Multi-Barcode Scanning into the Flutter Application
Now that the desktop Flutter camera application is complete, we can integrate an image processing SDK like Dynamsoft Barcode Reader to enable multi-barcode scanning.
- Add the `flutter_barcode_sdk` package to your project by running the following command:
```bash
flutter pub add flutter_barcode_sdk
```
- Visit the Dynamsoft website to obtain a 30-day free trial license key, then initialize the barcode reader as follows:
```dart
import 'package:flutter_barcode_sdk/flutter_barcode_sdk.dart';

FlutterBarcodeSdk? _barcodeReader;

Future<void> initBarcodeSDK() async {
  _barcodeReader = FlutterBarcodeSdk();
  await _barcodeReader!.setLicense(licenseKey);
  await _barcodeReader!.init();
}
```
- Decode 1D/2D barcodes from the captured frame using the `flutter_barcode_sdk` library.
```dart
Future<void> _captureFrames() async {
  if (!_isCameraOpened || !_shouldCapture) return;

  try {
    Map<String, dynamic> frame = await _flutterLiteCameraPlugin.captureFrame();
    if (frame.containsKey('data')) {
      Uint8List rgbBuffer = frame['data'];
      _decodeFrame(rgbBuffer, frame['width'], frame['height']);
      await _convertBufferToImage(rgbBuffer, frame['width'], frame['height']);
    }
  } catch (e) {
    // Skip this frame and try again on the next tick.
  }

  if (_shouldCapture) {
    Future.delayed(const Duration(milliseconds: 30), _captureFrames);
  }
}

Future<void> _decodeFrame(Uint8List rgb, int width, int height) async {
  if (isDecoding || _barcodeReader == null) return;
  isDecoding = true;

  results = await _barcodeReader!.decodeImageBuffer(
    rgb,
    width,
    height,
    width * 3, // stride: 3 bytes per pixel for RGB888
    ImagePixelFormat.IPF_RGB_888.index,
  );

  isDecoding = false;
}
```