computersees·main
dindigul · tamil nadu · in
 ██████╗ ██████╗ ███╗   ███╗██████╗ ██╗   ██╗████████╗███████╗██████╗ ███████╗███████╗███████╗███████╗
██╔════╝██╔═══██╗████╗ ████║██╔══██╗██║   ██║╚══██╔══╝██╔════╝██╔══██╗██╔════╝██╔════╝██╔════╝██╔════╝
██║     ██║   ██║██╔████╔██║██████╔╝██║   ██║   ██║   █████╗  ██████╔╝███████╗█████╗  █████╗  ███████╗
██║     ██║   ██║██║╚██╔╝██║██╔═══╝ ██║   ██║   ██║   ██╔══╝  ██╔══██╗╚════██║██╔══╝  ██╔══╝  ╚════██║
╚██████╗╚██████╔╝██║ ╚═╝ ██║██║     ╚██████╔╝   ██║   ███████╗██║  ██║███████║███████╗███████╗███████║
 ╚═════╝ ╚═════╝ ╚═╝     ╚═╝╚═╝      ╚═════╝    ╚═╝   ╚══════╝╚═╝  ╚═╝╚══════╝╚══════╝╚══════╝╚══════╝

8 yrs focused on CV · AR · ML
40+ projects shipped
3 in-house flagships
7 engineers · one team

We see & breathe pixels & tensors — production computer vision, end to end. Any camera, any hardware, any model, any frame rate, any language. Great at PoCs, good at production.

01

Services

// services/*.py

Every service below is in production with a real client: what it is, what you get, and the latency budget we target.

01  Object detection & tracking
    Real-time detection, classification, multi-camera ReID. People counting, vehicle tracking, defect inspection, weapon detection.
    → < 30 ms / frame

02  Mobile CV · AR · XR · Unity
    A pillar we're proud of: computer vision on mobile. iOS + Android on-device inference (CoreML / TFLite), Meta Quest in-headset recognition, 3D overlays, stereo depth, model-based 6DoF tracking.
    → on-device

03  Edge & embedded vision
    Jetson (Nano / NX / Xavier), RPi, CSI cameras, TensorRT, DeepStream, GStreamer. 24/7 deployments with OTA and offline sync.
    → field-hardened

04  Pipeline profiling & latency
    Every stage timed in ms — decode, preproc, pixel transforms, inference, post, serve. Keep the GPU saturated.
    → 2–2.5× typical speed-up

05  Model conversion & quantization
    PyTorch / TF → ONNX → TensorRT / CoreML / TFLite. FP16 & INT8. Reprojection-validated parity checks.
    → 3–7 day turnaround

06  Classical CV & geometry
    OpenCV PnP, ArUco, intrinsic + extrinsic calibration, multi-camera rigs, 6DoF model-based tracking (DLR-RM M3T, ViSP).
    → sub-pixel

07  OCR, segmentation & matting
    In-frame OCR on moving footage, PaddleSeg, PP-Matting, trimap + alpha for compositing. Dashcam speed-sign extraction.
    → frame-selected

08  Cameras, lenses & calibration
    USB3, MIPI CSI, RTSP, industrial, TrueDepth. Lens selection, FOV / distortion trade-offs, multi-camera rig calibration.
    → end-to-end
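Service 04's per-stage budget comes down to one habit: wrap every pipeline stage in a timer and keep a running millisecond tally. A minimal sketch of that idea; `StageTimer`, the stage names, and the sleeps standing in for real work are our illustration, not the studio's actual profiler:

```python
# Minimal per-stage latency tally. The sleeps are stand-ins for real
# stage work (decode, inference, ...); in a real pipeline you would wrap
# the actual calls instead.
import time
from collections import defaultdict
from contextlib import contextmanager

class StageTimer:
    """Accumulates wall-clock milliseconds per named pipeline stage."""

    def __init__(self):
        self.ms = defaultdict(float)

    @contextmanager
    def stage(self, name):
        t0 = time.perf_counter()
        try:
            yield
        finally:
            self.ms[name] += (time.perf_counter() - t0) * 1000.0

    def report(self):
        """Per-stage milliseconds plus a total, rounded for display."""
        total = sum(self.ms.values())
        return {**{k: round(v, 2) for k, v in self.ms.items()},
                "total": round(total, 2)}

timer = StageTimer()
with timer.stage("decode"):
    time.sleep(0.002)    # stand-in for frame decode
with timer.stage("inference"):
    time.sleep(0.005)    # stand-in for the model forward pass
print(timer.report())
```

Accumulating (rather than overwriting) per stage means the same report works across a whole batch of frames; divide by frame count for the per-frame budget.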
02

Stack · package.json

// everything we've shipped on

Not a vendor logo parade — the stack we have production code on, from daily drivers to situational tools.

{
  "detection_tracking": [YOLOv8, YOLOv11, MMDetection, MMPose, YOLOv4, YOLOv5, YOLOX, EfficientDet, Detectron2, SCRFD, ReID],
  "pose_seg_matting": [MediaPipe, BlazePose, PaddleSeg, PP-Matting, Apple Vision, HRNet, OpenPose, ST-GCN],
  "frameworks_runtime": [PyTorch, TensorFlow, OpenCV, Keras, Gradio, Roboflow, Hugging Face],
  "classical_cv_geometry": [OpenCV PnP, ViSP, DLR-RM M3T, intrinsic + extrinsic calibration, ArUco, chessboard stereo, histogram methods, geometric transforms],
  "optimisation": [TensorRT, ONNX, CoreML, TFLite, FP16, INT8, nsys, nvidia-smi],
  "edge_compute": [Jetson Nano · NX · Xavier, CUDA, DeepStream, Raspberry Pi, Odroid, ARM, GStreamer, FFmpeg, libargus, V4L2],
  "cameras_sensors": [USB · USB3, MIPI CSI · Arducam, RTSP · IP · CCTV, mobile phone, industrial, dashcam, stereo, TrueDepth, UVC],
  "ar_xr_unity": [Unity, Meta Quest, Three.js, ARKit, ARCore, React],
  "languages": [Python, C++, Swift, JavaScript · TypeScript, Kotlin · Java, C#, Objective-C, Bash, CUDA C],
  "mobile_web": [iOS · CoreML, Android · TFLite, macOS, Firebase, Flask, Node.js, HTML5 Canvas],
  "cloud_ops": [AWS · Lambda · SageMaker · Rekognition · S3, Azure, GCP, Docker, Kubernetes, systemd · OTA],
  "connectivity_field": [4G / 5G cellular, WiFi, SIM HATs, GPS · IMU · GNSS, OpenVPN, OTA, offline sync],
  "hardware_design": [3D CAD, enclosure design, thermal · airflow, power · battery, BOM, 3D-print prototyping, DFM handoff],
  "annotation_data": [Supervise.ly, LabelImg, LabelMe, Roboflow, MongoDB, MySQL, custom capture pipelines]
}
// every chip = production shipped, not just tried
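The FP16 / INT8 chips in `"optimisation"` reduce to a simple loop: quantize, dequantize, and check parity against the float original — the same shape of check service 05 calls a parity report. A minimal symmetric INT8 round-trip in NumPy; the max-abs / 127 scale is one common convention, not necessarily what the studio's converters use:

```python
# Symmetric per-tensor INT8 quantization with a parity check.
# Scale convention: max-abs / 127 (one common choice; TensorRT, TFLite
# and friends each have their own calibration schemes).
import numpy as np

def quantize_int8(x):
    """Map float32 values onto int8 with a single per-tensor scale."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 values from int8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
weights = rng.standard_normal(1000).astype(np.float32)

q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)

# Parity report: round-to-nearest bounds the error by half a quant step.
max_err = np.abs(weights - recovered).max()
print(f"scale={scale:.6f}  max_abs_error={max_err:.6f}")
```

The half-step error bound is what makes the check useful: if a converted model's outputs drift by more than quantization alone can explain, something else in the conversion broke.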
03

Flagship products

// /products

Three products, shipped and maintained. Each anchors a pillar of capability — AR/XR, edge hardware, multi-camera CCTV.

04

How we work

// engineering, not consulting

No slide decks about AI. We pick the camera, write the pipeline, ship the box — and keep it running 24/7.

/ E2E

End-to-end

Data → annotation → training → conversion → edge → dashboard. One team, one codebase.

/ EDGE

Edge ↔ cloud

Embedded SBCs, on-prem, or cloud GPUs. We deploy wherever your system must run.

/ PROD

Production-grade

24/7, vibration, lighting variance, OTA updates. Not demos — systems that run.

/ FULL

Full stack vision

Optics · embedded · ML · APIs · dashboards · XR — all under one roof.

05

Engagement tiers

// engage/*.yaml

Pick the scale. We'll size the team. Milestone-based delivery, commit-to-task traceability, hourly / fixed / retainer options.

TIER 01

PoC

~$1.5k
3–7 days · on your data
  • Test on your samples first
  • Free when scope is clear
  • Parity report + latency numbers
TIER 02

MVP

$5–15k
2–4 weeks · milestone-based
  • Functional prototype, core features
  • Edge device + dashboard
  • Integration with your system
TIER 03

Production

$15–60k+
3–5 months · support included
  • Camera setup, inference, storage
  • API, dashboard, OTA, fleet ops
  • Ongoing support retainer
The three-day offer

Start with a 3-day test on your data.

Book a 30-min call to walk through your scope — we'll run a sample and show you what the pipeline does before we quote.

06

Contact

// contact/hello.md
Phone     +91 99445 31155
Studio    74, Thadicombu Road, Dindigul 624001, Tamil Nadu · India
Entity    Thoht Delta R&D Centre (OPC) Pvt. Ltd.
Partners  NVIDIA Inception · AWS Activate
end of file · © 2026 computersees · helping computers see
studio@dindigul ~/studio $ cv fleet --live --deploys
┌─ busnet/route-17a         24 cams   31.2 fps   99.97% up   last sync 14s
├─ storeintel/mg-road       18 cams   28.4 fps   99.91% up   last sync 8s
├─ visionguide/valve-line    6 rigs   72.0 fps   99.99% up   last sync 2s
├─ defect/pcb-inspection     2 cams   42.1 fps   99.88% up   last sync 1s
└─ +8 more — 12 live total
frames today 14,823,917 · inferences/sec (fleet) 412.7 · gpu-hours/mo 8,640
studio@dindigul ~/studio $ git log --oneline -n 3 ops/deploys
a7c2f91 · 2h ago · busnet v3.2.1 → route-17a (TRT fp16, +8% fps)
d04e17b · yday · storeintel reid-head retrain → mg-road (mAP 0.87 → 0.91)
1e9f6aa · 3d ago · visionguide/valve: 6DoF M3T tracker (sub-pixel, Quest 3)
studio@dindigul ~/studio $ cv engage --tier poc --data sample.mp4
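A readout like `cv fleet --live` is just a roll-up over per-deploy records (`cv` is the studio's internal CLI, not a public tool). A toy version of that aggregation, with a hypothetical `Deploy` record, illustrative numbers for four of the deploys above, and a simplified cams × fps model of inference rate:

```python
# Toy fleet roll-up. Deploy is a hypothetical record type; the inference
# rate here naively assumes every stream is inferred at its deploy's fps.
from dataclasses import dataclass

@dataclass
class Deploy:
    name: str
    cams: int
    fps: float
    uptime: float  # fraction, e.g. 0.9997

fleet = [
    Deploy("busnet/route-17a",       24, 31.2, 0.9997),
    Deploy("storeintel/mg-road",     18, 28.4, 0.9991),
    Deploy("visionguide/valve-line",  6, 72.0, 0.9999),
    Deploy("defect/pcb-inspection",   2, 42.1, 0.9988),
]

# Fleet-wide stream count, inference rate, and the worst uptime
# (the number an on-call engineer actually cares about).
streams = sum(d.cams for d in fleet)
inf_per_sec = sum(d.cams * d.fps for d in fleet)
worst_uptime = min(d.uptime for d in fleet)
print(f"streams={streams}  inf/s={inf_per_sec:.1f}  worst_up={worst_uptime:.2%}")
```

In practice deploys skip frames and batch inferences, so the real fleet rate is well below cams × fps — which is why the live readout reports measured inferences/sec rather than deriving it.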