# Build System Summary

## Overview

This document provides a comprehensive overview of the newly created build system for the Pixel to Voxel Projector. The build system has been designed to support multiple platforms, build methods, and configurations with a focus on ease of use and flexibility.

---

## What's New

### 1. Enhanced Setup Script (`/setup.py`)

**Key Features:**
- ✓ Automatic CUDA detection and configuration
- ✓ GPU compute capability detection
- ✓ Protocol buffer compilation
- ✓ Multiple C++ and CUDA extension modules
- ✓ Comprehensive dependency management
- ✓ Support for development and production builds

**Extensions Built:**
- `process_image_cpp` - Image processing
- `motion_extractor_cpp` - Motion extraction
- `sparse_voxel_grid` - Voxel grid management
- `stream_manager` - Protocol streaming
- `drone_detector` - Drone detection
- `thermal_mono_fusion` - Thermal camera fusion
- `orientation_manager` - Camera orientation
- `voxel_cuda` - CUDA voxel processing
- `voxel_optimizer_cuda` - CUDA optimization
- `small_object_detector_cuda` - CUDA object detection

**Usage:**
```bash
# Development install
pip install -e .

# With all features
pip install -e ".[full,dev,cuda]"
```

### 2. CMake Build System (`/CMakeLists.txt`)

**Key Features:**
- ✓ C++17 standard support
- ✓ CUDA 11.x/12.x support
- ✓ OpenMP parallel processing
- ✓ pybind11 integration
- ✓ Automatic GPU architecture detection
- ✓ Configurable optimization flags
- ✓ Support for production and debug builds

**Build Options:**
- `BUILD_CUDA` - Enable CUDA extensions
- `BUILD_TESTS` - Build test suite
- `BUILD_BENCHMARKS` - Build benchmarks
- `BUILD_PYTHON_BINDINGS` - Build Python modules
- `USE_OPENMP` - Enable OpenMP
- `ENABLE_FAST_MATH` - Enable fast math optimizations

**Usage:**
```bash
mkdir build && cd build
cmake .. -GNinja -DCMAKE_BUILD_TYPE=Release
ninja
```

### 3. Comprehensive Requirements (`/requirements.txt`)

**Categories:**
- Core scientific computing (numpy, scipy)
- Computer vision (OpenCV, Pillow)
- Video processing (FFmpeg)
- GPU acceleration (CuPy, PyCUDA)
- Protocol buffers and gRPC
- Networking (ZeroMQ, WebSockets)
- Compression (LZ4, Zstandard, Snappy)
- 3D visualization (Open3D, VTK, PyOpenGL)
- System monitoring (psutil, pynvml)
- Testing and development tools

**Total packages:** 100+ with version pinning for stability

### 4. Docker Support (`/docker/`)

**Files Created:**
- `Dockerfile` - CUDA-enabled container
- `docker-compose.yml` - Multi-service orchestration
- `entrypoint.sh` - Container initialization
- `.dockerignore` - Build optimization

**Services:**
- `pixeltovoxel` - Main application
- `jupyter` - Interactive development
- `benchmark` - Performance testing

**Features:**
- ✓ NVIDIA GPU passthrough
- ✓ CUDA 12.2 base image
- ✓ All system dependencies pre-installed
- ✓ X11 GUI support
- ✓ Shared memory configuration
- ✓ Multi-camera support

**Usage:**
```bash
# Build image
docker build -t pixeltovoxel:latest -f docker/Dockerfile .

# Run with GPU
docker run --gpus all -it --rm -v $(pwd):/app pixeltovoxel:latest

# Start all services
docker-compose -f docker/docker-compose.yml up -d
```
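A quick way to confirm that GPU passthrough is actually working inside the container is a small Python smoke test. The sketch below relies only on `pynvml` (already listed in `requirements.txt`); the file name and the exact checks are illustrative and are not taken from `entrypoint.sh`.

```python
"""gpu_smoke_test.py - illustrative GPU passthrough check (not part of entrypoint.sh)."""
import sys


def check_gpus() -> int:
    """Return the number of visible NVIDIA GPUs, or 0 if none are usable."""
    try:
        import pynvml
        pynvml.nvmlInit()
        count = pynvml.nvmlDeviceGetCount()
        for i in range(count):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            name = pynvml.nvmlDeviceGetName(handle)
            if isinstance(name, bytes):  # older pynvml versions return bytes
                name = name.decode()
            mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
            print(f"GPU {i}: {name}, {mem.total / 1024**3:.1f} GiB")
        pynvml.nvmlShutdown()
        return count
    except Exception as exc:  # driver missing, --gpus flag omitted, etc.
        print(f"No usable GPU detected: {exc}")
        return 0


if __name__ == "__main__":
    # Exit non-zero when no GPU is visible so entrypoint or CI scripts can react.
    sys.exit(0 if check_gpus() > 0 else 1)
```

Run with `--gpus all` the script should list each visible GPU; without the flag it should exit with a non-zero status.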
### 5. Build Documentation

**Files Created:**
- `BUILD.md` - Comprehensive build instructions
- `DEPENDENCIES.md` - Complete dependency documentation
- `Makefile` - Convenient build commands
- `BUILD_SYSTEM_SUMMARY.md` - This file

---

## Build Methods Comparison

| Method | Speed | Ease of Use | Flexibility | Platform |
|--------|-------|-------------|-------------|----------|
| **pip install** | Fast | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ | All |
| **CMake** | Medium | ⭐⭐⭐ | ⭐⭐⭐⭐⭐ | All |
| **Docker** | Slow (first build) | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ | Linux |
| **Makefile** | Fast | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ | Unix |

---

## Quick Start Guide

### For Developers

```bash
# 1. Clone repository
git clone <repository-url>
cd Pixeltovoxelprojector

# 2. Install system dependencies (Ubuntu)
sudo apt-get install -y build-essential cmake ninja-build \
    libopencv-dev ffmpeg libzmq3-dev

# 3. Install in development mode
make dev
# OR
pip install -e ".[dev]"

# 4. Run tests
make test
```

### For Production

```bash
# Option 1: Direct install
pip install .

# Option 2: Docker deployment
docker-compose -f docker/docker-compose.yml up -d
```

### For GPU Users

```bash
# 1. Ensure CUDA is installed
export CUDA_HOME=/usr/local/cuda-12.0

# 2. Install with GPU support
pip install -e ".[full,cuda]"

# 3. Verify GPU access
python -c "import cupy; print(cupy.cuda.is_available())"
```

---

## Architecture

### Build System Flow

```
User Input (make/pip/cmake/docker)
        ↓
Build System Detection
├── Detect CUDA
├── Detect GPU Capabilities
├── Check System Libraries
└── Configure Python Environment
        ↓
Compilation Phase
├── C++ Extensions (OpenMP)
├── CUDA Extensions (nvcc)
├── Protocol Buffers (protoc)
└── Python Bindings (pybind11)
        ↓
Installation Phase
├── Python Packages
├── Compiled Extensions
└── Configuration Files
        ↓
Verification
├── Import Tests
├── GPU Availability
└── System Tests
```

### Extension Module Architecture

```
Python Layer
     ↓
pybind11 Bindings
     ↓
C++ Core (OpenMP)
     ↓
CUDA Kernels (if available)
     ↓
GPU Hardware
```

---

## Key Features

### 1. Automatic Configuration

- **GPU Detection**: Automatically detects available GPUs and their capabilities
- **CUDA Version**: Detects CUDA 11.x or 12.x and configures accordingly
- **Compute Capabilities**: Optimizes for specific GPU architectures
- **System Libraries**: Checks for required system dependencies

A minimal sketch of this detection logic appears below, after the cross-platform notes.

### 2. Multiple Build Paths

- **Python setuptools**: Standard Python packaging
- **CMake**: Professional C++ build system
- **Docker**: Containerized deployment
- **Makefile**: Convenient shortcuts

### 3. Optimization Options

- **Compiler Flags**: `-O3`, `-march=native`, `-ffast-math`
- **CUDA Flags**: `--use_fast_math`, architecture-specific optimization
- **OpenMP**: Parallel processing on CPU
- **Build Types**: Debug, Release, RelWithDebInfo

### 4. Cross-Platform Support

- **Linux**: Primary platform (fully supported)
- **Windows**: WSL2 support
- **macOS**: CPU-only support
- **Docker**: Universal container support
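To make the automatic-configuration step more concrete, here is a minimal sketch of how CUDA and per-GPU compute capabilities can be detected at build time. The helper names are illustrative; the project's actual `setup.py`/CMake logic may query the CUDA runtime directly instead of calling `nvidia-smi`.

```python
"""Illustrative CUDA detection helpers; not the project's exact setup.py logic."""
import os
import shutil
import subprocess
from typing import List, Optional


def find_cuda_home() -> Optional[str]:
    """Locate the CUDA toolkit via CUDA_HOME/CUDA_PATH, falling back to nvcc on PATH."""
    for var in ("CUDA_HOME", "CUDA_PATH"):
        path = os.environ.get(var)
        if path and os.path.isdir(path):
            return path
    nvcc = shutil.which("nvcc")
    # nvcc lives in <cuda_home>/bin/nvcc, so strip two path components.
    return os.path.dirname(os.path.dirname(nvcc)) if nvcc else None


def detect_compute_capabilities() -> List[str]:
    """Return compute capabilities such as ['86', '89'] for the visible GPUs.

    Uses `nvidia-smi --query-gpu=compute_cap`, which needs a reasonably recent
    driver; returns an empty list when no GPU or driver is present.
    """
    try:
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=compute_cap", "--format=csv,noheader"],
            text=True,
        )
    except (OSError, subprocess.CalledProcessError):
        return []
    return sorted({line.strip().replace(".", "") for line in out.splitlines() if line.strip()})


if __name__ == "__main__":
    print("CUDA_HOME:", find_cuda_home())
    # e.g. ['89'] maps to: -gencode arch=compute_89,code=sm_89
    print("Compute capabilities:", detect_compute_capabilities())
```

The detected capabilities would then feed the architecture-specific `-gencode arch=compute_XY,code=sm_XY` flags shown under Performance Optimizations below.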
### 5. Development Tools

- **Testing**: pytest with coverage and benchmarking
- **Code Quality**: black, flake8, mypy, pylint
- **Documentation**: Sphinx with RTD theme
- **Debugging**: Debug builds with symbols

---

## Performance Optimizations

### Compiler Optimizations

```cpp
// C++ flags
-O3              // Maximum optimization
-march=native    // CPU-specific instructions
-ffast-math      // Fast floating-point math
-fopenmp         // Parallel processing
```

### CUDA Optimizations

```cuda
// CUDA flags
--use_fast_math                        // Fast math operations
-O3                                    // Maximum optimization
-gencode arch=compute_89,code=sm_89    // RTX 4090
-maxrregcount=128                      // Register optimization
```

### Build Performance

- **Ninja**: Parallel builds (faster than make)
- **ccache**: Compilation caching (if available)
- **Parallel Jobs**: `MAX_JOBS=8` environment variable
- **Incremental Builds**: Only rebuild changed files

---

## Testing Infrastructure

### Test Categories

1. **Unit Tests**: Individual component testing
2. **Integration Tests**: Multi-component testing
3. **Benchmark Tests**: Performance measurement
4. **GPU Tests**: CUDA functionality testing
5. **Installation Tests**: Verify a successful build (see the sketch after the Makefile quick reference below)

### Running Tests

```bash
# All tests
make test

# Fast tests only
make test-fast

# With coverage
make test-coverage

# Installation verification
make test-installation

# Benchmarks
make benchmark
```

---

## Dependency Management

### System Dependencies

- Automatically detected during configuration
- Clear error messages for missing dependencies
- Platform-specific installation instructions

### Python Dependencies

- Version pinning for stability
- Optional dependency groups (dev, cuda, full)
- Compatibility checking

### GPU Dependencies

- CUDA version detection
- CuPy automatic installation
- Driver compatibility checking

---

## Troubleshooting Quick Reference

| Issue | Solution |
|-------|----------|
| CUDA not found | `export CUDA_HOME=/usr/local/cuda` |
| GPU not detected | Install nvidia-container-toolkit |
| Compilation fails | Update gcc: `sudo apt install g++-10` |
| Import errors | `export PYTHONPATH=/path/to/project` |
| Memory errors | Reduce parallel jobs: `MAX_JOBS=4` |
| Protocol buffer errors | Install protoc: `sudo apt install protobuf-compiler` |

---

## Makefile Quick Reference

```bash
make help          # Show all available commands
make install       # Install package
make install-dev   # Development install
make install-full  # Full install with all deps
make build         # Build extensions
make test          # Run tests
make benchmark     # Run benchmarks
make docker        # Build Docker image
make docker-run    # Run Docker container
make format        # Format code
make lint          # Check code quality
make clean         # Clean build artifacts
make info          # Show system information
```
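For reference, the kind of check behind `make test-installation` can be approximated with a short pytest module. This is a minimal sketch: the extension names come from the list at the top of this document, and the real suite under `tests/` is more thorough.

```python
"""Illustrative installation checks; the real suite under tests/ is more thorough."""
import importlib

import pytest

# A few of the compiled extensions listed earlier in this document.
CORE_EXTENSIONS = ["process_image_cpp", "motion_extractor_cpp", "sparse_voxel_grid"]


@pytest.mark.parametrize("name", CORE_EXTENSIONS)
def test_extension_imports(name):
    # Every compiled extension should be importable after a successful build.
    assert importlib.import_module(name) is not None


def test_gpu_is_usable_when_present():
    # GPU support is optional: skip on machines without CuPy or a CUDA device.
    cupy = pytest.importorskip("cupy")
    if not cupy.cuda.is_available():
        pytest.skip("no CUDA device visible")
    assert cupy.cuda.runtime.getDeviceCount() >= 1
```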
---

## Docker Quick Reference

```bash
# Build
docker build -t pixeltovoxel:latest -f docker/Dockerfile .

# Run basic
docker run --gpus all -it --rm pixeltovoxel:latest

# Run with volume
docker run --gpus all -it --rm -v $(pwd):/app pixeltovoxel:latest

# Run Jupyter
docker run --gpus all -p 8888:8888 pixeltovoxel:latest \
    jupyter lab --ip=0.0.0.0 --allow-root

# Docker Compose
docker-compose -f docker/docker-compose.yml up -d
docker-compose -f docker/docker-compose.yml logs -f
docker-compose -f docker/docker-compose.yml down
```

---

## File Structure

```
Pixeltovoxelprojector/
├── setup.py                  # Enhanced Python build script
├── CMakeLists.txt            # CMake build configuration
├── requirements.txt          # Python dependencies
├── Makefile                  # Convenient build commands
├── BUILD.md                  # Build instructions
├── DEPENDENCIES.md           # Dependency documentation
├── BUILD_SYSTEM_SUMMARY.md   # This file
├── .dockerignore             # Docker build optimization
├── docker/
│   ├── Dockerfile            # CUDA-enabled container
│   ├── docker-compose.yml    # Service orchestration
│   └── entrypoint.sh         # Container initialization
├── src/                      # Python source code
├── cuda/                     # CUDA source code
├── tests/                    # Test suite
└── examples/                 # Example scripts
```

---

## Next Steps

### For Users

1. Choose your build method (pip recommended for most users)
2. Follow the Quick Start Guide above
3. Run verification tests: `make test-installation`
4. Try the examples in the `examples/` directory

### For Developers

1. Install in development mode: `make dev`
2. Set up pre-commit hooks (optional)
3. Read `BUILD.md` for detailed instructions
4. Run tests before committing: `make test`

### For Deployment

1. Use Docker for production: `make docker`
2. Configure environment variables
3. Set up monitoring and logging
4. Scale with docker-compose

---

## Support and Resources

### Documentation

- **BUILD.md**: Detailed build instructions
- **DEPENDENCIES.md**: Complete dependency list
- **README.md**: Project overview
- **API Documentation**: In the `docs/` directory

### Common Commands

```bash
# Get help
make help

# Check system info
make info

# Check dependencies
make check-deps

# Clean and rebuild
make clean-all && make install-full

# Run full test suite
make all
```

### Troubleshooting

1. Check the `BUILD.md` troubleshooting section
2. Verify CUDA installation: `nvidia-smi`
3. Check the Python environment: `which python`
4. Test imports: `python -c "import voxel_cuda"`

---

## Version Information

- **Build System Version**: 1.0.0
- **Python Support**: 3.8, 3.9, 3.10, 3.11, 3.12
- **CUDA Support**: 11.x, 12.x
- **Platform**: Linux (primary), Windows WSL2, macOS (limited)
- **Last Updated**: 2025-01-13

---

## Conclusion

The new build system provides:

- ✓ **Flexibility**: Multiple build methods for different use cases
- ✓ **Automation**: Automatic detection and configuration
- ✓ **Performance**: Optimized compilation flags and GPU support
- ✓ **Reliability**: Comprehensive testing and error handling
- ✓ **Documentation**: Extensive guides and examples
- ✓ **Portability**: Docker support for consistent environments

The system is production-ready and supports both development and deployment workflows.

For questions or issues, please refer to the documentation or create an issue in the project repository.

---

**Build system created by**: Voxel Processing Team  
**Date**: 2025-01-13  
**Status**: Production Ready ✓