In this post, we’ll build a Next.js app that lets users upload images of cars, runs your custom model to detect license plates, and blurs the plates before returning the modified image. This guide focuses on integrating a custom inference model into a Next.js backend, so Amazon Rekognition is not used for detection.
Prerequisites
Before we get started, you’ll need the following:
- A trained model hosted on an inference service (e.g., SageMaker or a custom API).
- Basic knowledge of Next.js and API routes.
- An S3 bucket or similar for file storage (if needed).
Step 1: Install Required Packages
To process images and handle uploads, we need the sharp and multer packages, plus next-connect for route middleware and axios for calling the inference endpoint. Install them in your Next.js project:
npm install sharp multer next-connect axios
(Note that this tutorial uses the next-connect v0 API; v1 replaced nextConnect() with a createRouter() API.)
Step 2: Set Up API Route for Image Upload and Processing
Next.js provides powerful API routes that let you run server-side code. We will create an API endpoint that:
- Accepts image uploads.
- Sends the image to your custom model for inference to detect the license plate.
- Uses sharp to blur the detected license plate.
- Returns the processed image back to the user.
Here’s the code for the API route:
// pages/api/upload.js
import multer from 'multer';
import nextConnect from 'next-connect';
import sharp from 'sharp';
import axios from 'axios';
// Configure multer to store images in memory
const upload = multer({
storage: multer.memoryStorage(),
limits: { fileSize: 5 * 1024 * 1024 }, // Limit file size to 5MB
});
// Set up API route with next-connect and multer
const apiRoute = nextConnect({
onError(error, req, res) {
res.status(500).json({ error: `Error: ${error.message}` });
},
onNoMatch(req, res) {
res.status(405).json({ error: `Method ${req.method} Not Allowed` });
},
});
apiRoute.use(upload.single('carImage'));
apiRoute.post(async (req, res) => {
try {
if (!req.file) {
return res.status(400).json({ error: 'No image file provided' });
}
const imageBuffer = req.file.buffer;
// Call your custom model for inference
const response = await axios.post('YOUR_MODEL_ENDPOINT_URL', imageBuffer, {
headers: { 'Content-Type': 'application/octet-stream' },
});
// Extract bounding box coordinates for the license plate
const boundingBox = response.data.boundingBox; // Adjust based on model response
// Apply blur to the license plate using sharp
const blurredImageBuffer = await blurLicensePlate(imageBuffer, boundingBox);
// Send back the processed image
res.setHeader('Content-Type', 'image/jpeg');
res.send(blurredImageBuffer);
} catch (error) {
res.status(500).json({ error: `Error: ${error.message}` });
}
});
export default apiRoute;
export const config = {
api: {
bodyParser: false, // Disable Next.js's default body parser so multer can parse the multipart form data
},
};
// Function to blur the license plate using sharp
async function blurLicensePlate(imageBuffer, boundingBox) {
// sharp's extract() requires integer pixel values
const left = Math.round(boundingBox.left);
const top = Math.round(boundingBox.top);
const width = Math.round(boundingBox.width);
const height = Math.round(boundingBox.height);
// Crop the license plate region and blur it.
// Use a fresh sharp instance here: sharp pipelines are mutable, so
// reusing one instance for both extract() and composite() would
// apply the crop to the final image as well.
const blurredRegion = await sharp(imageBuffer)
.extract({ left, top, width, height })
.blur(20) // Adjust the blur intensity
.toBuffer();
// Composite the blurred region back onto the original image
return sharp(imageBuffer)
.composite([{ input: blurredRegion, blend: 'over', top, left }])
.jpeg() // Match the image/jpeg Content-Type set in the route
.toBuffer();
}
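The boundingBox shape used above ({ left, top, width, height } in pixels) is an assumption; adjust the parsing to whatever schema your endpoint actually returns. If your model returns normalized (0-1) coordinates instead, a small helper like the hypothetical toPixelBox below can convert them to pixels before calling blurLicensePlate:
// Hypothetical response: { "boundingBox": { "left": 0.42, "top": 0.61, "width": 0.18, "height": 0.05 } }
async function toPixelBox(imageBuffer, normalizedBox) {
// Read the image dimensions so we can scale the normalized coordinates
const { width: imgWidth, height: imgHeight } = await sharp(imageBuffer).metadata();
return {
left: Math.round(normalizedBox.left * imgWidth),
top: Math.round(normalizedBox.top * imgHeight),
width: Math.round(normalizedBox.width * imgWidth),
height: Math.round(normalizedBox.height * imgHeight),
};
}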
Step 3: Frontend Upload Form
Now that our backend can handle image uploads and processing, we need a simple form on the frontend to let users upload their images.
// components/UploadForm.js
import { useState } from 'react';
const UploadForm = () => {
const [resultImage, setResultImage] = useState(null);
const handleImageUpload = async (event) => {
const file = event.target.files[0];
if (!file) return;
const formData = new FormData();
formData.append('carImage', file);
try {
const response = await fetch('/api/upload', {
method: 'POST',
body: formData,
});
if (response.ok) {
const blob = await response.blob();
setResultImage(URL.createObjectURL(blob));
} else {
console.error('Image processing failed');
}
} catch (error) {
console.error('Error uploading image:', error);
}
};
return (
<div>
<input type="file" accept="image/*" onChange={handleImageUpload} />
{resultImage && <img src={resultImage} alt="Processed Image" />}
</div>
);
};
export default UploadForm;
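To render the form, import the component into a page. A minimal pages/index.js might look like this (assuming the component lives at components/UploadForm.js as above):
// pages/index.js
import UploadForm from '../components/UploadForm';

export default function Home() {
return (
<main>
<h1>Blur a License Plate</h1>
<UploadForm />
</main>
);
}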
Step 4: Deploy and Test
You can now test the application locally or deploy it to Vercel. Once it's running, users can upload car images; the backend sends each image to your model, blurs the detected license plate, and returns the modified image.
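To smoke-test the endpoint without the frontend, you can post a sample image directly with curl (assuming the dev server is on port 3000 and you have a local file named car.jpg):
curl -F "carImage=@car.jpg" http://localhost:3000/api/upload --output blurred.jpg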
Securing the Inference Endpoint
If your inference endpoint requires authentication (such as an API key or AWS credentials), make sure the request to the endpoint includes the necessary credentials. Here’s a basic example with API key authentication:
// Call the custom model endpoint with an API key
const response = await axios.post('YOUR_MODEL_ENDPOINT_URL', imageBuffer, {
headers: {
'Content-Type': 'application/octet-stream',
'Authorization': `Bearer ${process.env.MODEL_API_KEY}`,
},
});
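MODEL_API_KEY is read from the environment; during local development you can define it in a .env.local file, which Next.js loads automatically and which should be excluded from version control:
# .env.local
MODEL_API_KEY=your-api-key-here
For an AWS-hosted endpoint such as SageMaker, a bearer token won't work; you would instead sign requests with AWS credentials, for example by calling the endpoint through the AWS SDK's SageMaker runtime client rather than axios.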
Conclusion
In this tutorial, we’ve built a Next.js app that allows users to upload car images, uses a custom-trained model to detect license plates, and blurs the detected area. You can now extend this to different use cases by adjusting the model or image processing logic as needed.