

12/31/2025

AWS SAM Troubleshooting - Fixing pip/runtime and AWS CLI Issues


πŸ”§ AWS SAM Troubleshooting - Fixing pip/runtime and AWS CLI Issues

If you're deploying AWS Lambda functions with SAM (Serverless Application Model), you may have encountered frustrating build errors. This guide explains the two most common issues and how to fix them permanently.

Note: This guide uses generic examples (<YOUR_STACK_NAME>) and is safe to share publicly.

🚨 The Two Common Problems

Problem A — sam build fails with pip/runtime error

You may see this error:

Error: PythonPipBuilder:ResolveDependencies - Failed to find a Python runtime containing pip on the PATH.

What this means: SAM is trying to build for a specific Lambda runtime (like python3.11), but your shell has:

  • python from one location (e.g., conda environment)
  • pip from another location (e.g., ~/.local/bin or /usr/bin)

SAM requires a matching pair - the pip must belong to the same Python interpreter that matches your Lambda runtime version.
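
If you'd rather check this from Python itself, here is a minimal sketch (standard library only) that prints the interpreter you're running and the first pip found on your PATH; if the two live in different prefixes, your PATH is mixed:

import shutil
import sys

# The interpreter this script runs under
print("python:", sys.executable)
# The pip executable your shell would pick up first
print("pip:   ", shutil.which("pip"))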

Problem B — AWS CLI crashes with botocore conflicts

You may see errors like:

KeyError: 'opsworkscm'
ModuleNotFoundError: No module named 'dateutil'

What this means: Your system AWS CLI (/usr/bin/aws) is accidentally importing incompatible botocore or boto3 packages from ~/.local/lib/python..., causing version conflicts.
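
A quick way to confirm this is to ask Python where it would import botocore and friends from; anything under ~/.local/lib/... is the red flag. A minimal sketch (it assumes the packages are importable at all):

import importlib

# Paths under ~/.local/lib/... indicate user-site contamination
for name in ("botocore", "boto3", "dateutil"):
    try:
        mod = importlib.import_module(name)
        print(name, "->", getattr(mod, "__file__", "built-in"))
    except ImportError as exc:
        print(name, "-> not importable:", exc)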

πŸ” Quick Diagnosis (5 Commands)

Run these from your SAM project directory to diagnose the issue:

# 1. What Python runtime does your template.yaml require?
grep "Runtime: python" template.yaml

# 2. What python are you using?
which python
python -V

# 3. What pip are you using?
which pip
pip -V

# 4. Does pip belong to this python?
python -m pip -V

🚩 Red flag: If pip -V and python -m pip -V show different paths or Python versions, your PATH is contaminated.

✅ The Fix: Dedicated Environment + Clean PATH

The solution is to create an isolated environment that matches your Lambda runtime and force clean PATH ordering.

Step 1: Create an environment matching your Lambda runtime

If your template.yaml specifies Runtime: python3.11, create a Python 3.11 environment:

# Using conda (recommended)
conda create -n aws-sam-py311 python=3.11 pip -y
conda activate aws-sam-py311

# Or using venv
python3.11 -m venv ~/.virtualenvs/aws-sam-py311
source ~/.virtualenvs/aws-sam-py311/bin/activate

Step 2: Install SAM CLI and AWS CLI inside the environment

# Upgrade pip first
python -m pip install --upgrade pip

# Install SAM CLI
python -m pip install aws-sam-cli

# Optional: Install AWS CLI v2 (avoids system aws/botocore conflicts)
# Using conda-forge:
conda install -c conda-forge awscli -y

# Or using pip:
python -m pip install awscli

Step 3: Disable user-site imports and fix PATH ordering

This is the critical step that prevents ~/.local contamination:

# Disable user site-packages (~/.local)
export PYTHONNOUSERSITE=1

# Force clean PATH (conda/venv bin first, then system)
export PATH="$CONDA_PREFIX/bin:/usr/bin:/bin"

# Or for venv:
# export PATH="$VIRTUAL_ENV/bin:/usr/bin:/bin"

# Clear shell hash table
hash -r

Step 4: Verify the fix

# All should point to your environment
which python
which pip
which sam
which aws

# Verify versions match
python -V      # Should be 3.11.x
pip -V         # Should show python 3.11
sam --version  # Should work without errors
aws --version  # Should work without errors

Step 5: Build and deploy

sam build --cached --parallel
sam deploy --no-confirm-changeset --stack-name <YOUR_STACK_NAME> --region <YOUR_AWS_REGION>

🐳 Alternative: Container Build (Docker)

If you have Docker installed, you can avoid all Python toolchain issues by building in a container:

sam build --use-container
sam deploy --no-confirm-changeset --stack-name <YOUR_STACK_NAME> --region <YOUR_AWS_REGION>

Pros:

  • ✅ No need to match local Python version
  • ✅ Builds in environment identical to Lambda
  • ✅ Most reproducible approach

Cons:

  • ❌ Slower than native builds
  • ❌ Requires Docker installed and running

⚡ Quick Fix for Broken AWS CLI (Emergency)

If you need to use system AWS CLI right now and it's broken:

# Force it to ignore user-site packages
PYTHONNOUSERSITE=1 /usr/bin/aws --version
PYTHONNOUSERSITE=1 /usr/bin/aws sts get-caller-identity
PYTHONNOUSERSITE=1 /usr/bin/aws s3 ls

But the proper fix is: Install AWS CLI inside your dedicated environment (see Step 2 above).

πŸ€” Why is Lambda on python3.11 when my machine uses python3.12?

This is a common source of confusion. They are different things:

  • Lambda Runtime: the Python version AWS runs in production; defined in template.yaml (Runtime: python3.11)
  • Your Local Python: the Python version you use for development, training, and scripts; defined by your system default or conda environment

Key point: When SAM builds your Lambda functions, it must build dependencies compatible with the Lambda runtime, even if your system default is Python 3.12.

Example from template.yaml:

AnprDeviceLicenseValidateFunction:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: anpr_device_license_validate/
    Handler: app.lambda_handler
    Runtime: python3.11          # ← Lambda uses 3.11
    Architectures:
      - x86_64

So you have three options:

  1. Match local environment to Lambda (recommended) - Create python3.11 env for SAM work
  2. Use container build - Let Docker handle it with sam build --use-container
  3. Upgrade Lambda runtime - Change template.yaml to python3.12 (requires testing)

πŸ“‹ Complete Example: Deploy Script

Here's a complete bash script that implements all the fixes:

#!/usr/bin/env bash
set -euo pipefail

# Activate conda environment (matches Lambda runtime)
source ~/anaconda3/etc/profile.d/conda.sh
conda activate aws-sam-py311

# Critical: Clean PATH and disable user-site
export PYTHONNOUSERSITE=1
export PATH="$CONDA_PREFIX/bin:/usr/bin:/bin"
hash -r

echo "Environment ready:"
echo "  Python: $(python -V)"
echo "  SAM: $(sam --version | head -1)"
echo "  AWS: $(aws --version)"

# Build and deploy
sam build --cached --parallel
sam deploy --no-confirm-changeset

🎯 Troubleshooting Checklist

  • sam build fails → check which pip vs python -m pip -V → create a dedicated env, fix PATH
  • aws command crashes → check echo $PYTHONNOUSERSITE → set PYTHONNOUSERSITE=1
  • Wrong Python version → compare python -V with the Lambda runtime → create an env matching Lambda
  • Multiple pip versions → check which -a pip → fix PATH ordering
  • Conda conflicts → check conda env list → create a separate env for SAM

πŸ”’ Security Best Practices

⚠️ When sharing code publicly:

  • Never publish template.yaml with secrets (API keys, tokens, webhook URLs)
  • ✅ Use AWS Secrets Manager or SSM Parameter Store for secrets
  • ✅ Redact from logs:
    • AWS account IDs
    • API Gateway URLs
    • Stack names and ARNs
    • Any access keys/tokens

πŸ’‘ Pro Tips

1. Create a deployment script

Instead of remembering all these environment variables, create a deploy.sh script:

#!/usr/bin/env bash
set -euo pipefail

# Activate environment
source ~/anaconda3/etc/profile.d/conda.sh
conda activate aws-sam-py311

# Clean environment
export PYTHONNOUSERSITE=1
export PATH="$CONDA_PREFIX/bin:/usr/bin:/bin"
hash -r

# Build and deploy
sam build --cached --parallel
sam deploy --no-confirm-changeset

echo "✅ Deployment complete!"

Make it executable: chmod +x deploy.sh

2. Use SAM build cache for faster builds

# First build (slow)
sam build

# Subsequent builds (much faster!)
sam build --cached --parallel

3. Test locally before deploying

# Invoke function locally
sam local invoke MyFunction --event events/test.json

# Start local API
sam local start-api

4. Skip changeset confirmation in CI/CD

# Manual deployment - shows changes
sam deploy

# CI/CD deployment - no prompts
sam deploy --no-confirm-changeset

πŸ“Š Before vs After

❌ Before (Broken)
$ sam build
Error: Failed to find Python runtime containing pip

$ aws --version
KeyError: 'opsworkscm'

$ which pip
/home/user/.local/bin/pip  # Wrong location!

$ pip -V
pip 24.0 (python 3.12)     # Wrong version!

✅ After (Fixed)
$ conda activate aws-sam-py311
$ export PYTHONNOUSERSITE=1
$ export PATH="$CONDA_PREFIX/bin:/usr/bin:/bin"

$ sam build
Build Succeeded ✨

$ aws --version
aws-cli/2.32.26 Python/3.11.14

$ which pip
/home/user/anaconda3/envs/aws-sam-py311/bin/pip  # Correct!

$ pip -V
pip 25.3 (python 3.11)     # Matches Lambda runtime!

πŸŽ“ Summary

The root cause of most SAM build failures is PATH contamination - your shell mixes Python versions and pip locations from different sources (~/.local, /usr/bin, conda environments).

The complete fix:

  1. ✅ Create dedicated environment matching Lambda runtime (python3.11)
  2. ✅ Install SAM CLI and AWS CLI inside that environment
  3. ✅ Set PYTHONNOUSERSITE=1 to disable user-site packages
  4. ✅ Fix PATH ordering: export PATH="$CONDA_PREFIX/bin:/usr/bin:/bin"
  5. ✅ Run hash -r to clear shell cache

After this, sam build and aws commands will work reliably! πŸš€



Tags: AWS, SAM, Lambda, Python, DevOps, Deployment, Troubleshooting, ServerlessFramework, CICD, CloudComputing

12/30/2025

FREE ANPR/ALPR/LPR API - Try Before You Buy (1000 Requests/Day)


🎁 FREE License Plate Recognition API - No Credit Card Required!

Want to try ANPR (Automatic Number Plate Recognition) / ALPR (Automatic License Plate Recognition) / LPR (License Plate Recognition) without buying a license? We offer a completely FREE test API with 1000 requests per day!

✨ What You Get (FREE!)

  • 1000 requests/day - Perfect for testing and evaluation
  • No credit card required
  • No registration needed
  • 5 regions supported: Korea, Europe, USA/Canada, China, Universal
  • Multiple models to test (pico to large)
  • Works instantly - just install and run!

πŸš€ Quick Start (30 Seconds)

# Install
pip install marearts-anpr

# Test immediately (NO CONFIG NEEDED!)
ma-anpr test-api your-plate.jpg --region eup

# That's it! πŸŽ‰

🌍 Supported Regions

  • kr: South Korea (e.g., 123κ°€4567)
  • eup: Europe, EU standards (e.g., AB-123-CD)
  • na: USA, Canada, Mexico (e.g., ABC-1234)
  • cn: China (e.g., δΊ¬A·12345)
  • univ: Universal, any format

πŸ’» Usage Examples

Command Line (Easiest!)

# European plates
ma-anpr test-api eu-plate.jpg --region eup

# Korean plates
ma-anpr test-api kr-plate.jpg --region kr

# US plates
ma-anpr test-api us-plate.jpg --region na

# Chinese plates
ma-anpr test-api cn-plate.jpg --region cn

# Unknown region? Use universal
ma-anpr test-api unknown-plate.jpg --region univ

Python Script

#!/usr/bin/env python3
import subprocess

def test_free_anpr(image_path, region='eup'):
    """Test free ANPR API - no credentials needed!"""
    
    cmd = f'ma-anpr test-api "{image_path}" --region {region}'
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    
    if result.returncode == 0:
        print(result.stdout)
        return True
    else:
        print(f"Error: {result.stderr}")
        return False

# Test European plate
test_free_anpr("plate.jpg", "eup")

# Test Korean plate
test_free_anpr("plate2.jpg", "kr")

Test Multiple Regions

# Test same image with different regions
for region in eup kr na cn univ; do
    echo "Testing $region..."
    ma-anpr test-api plate.jpg --region $region
done

🎯 Advanced Options

Try Different Models

# List all available models
ma-anpr test-api --list-models

# Try different detector models
ma-anpr test-api plate.jpg --region eup --detector small_640p_fp32
ma-anpr test-api plate.jpg --region eup --detector medium_640p_fp32
ma-anpr test-api plate.jpg --region eup --detector large_640p_fp32

# Try different OCR models
ma-anpr test-api plate.jpg --region eup --ocr small_fp32
ma-anpr test-api plate.jpg --region eup --ocr medium_fp32
ma-anpr test-api plate.jpg --region eup --ocr large_fp32

Batch Testing

# Test all images in a folder
for img in ./plates/*.jpg; do
    echo "Processing $img..."
    ma-anpr test-api "$img" --region eup
done

πŸ“Š Sample Output

{
  "results": [
    {
      "ocr": "AB-123-CD",
      "ocr_conf": 98.5,
      "ltrb": [120, 230, 380, 290],
      "ltrb_conf": 95
    }
  ],
  "ltrb_proc_sec": 0.15,
  "ocr_proc_sec": 0.03,
  "status": "success"
}
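
If you drive the CLI from Python, the JSON above is easy to consume. A minimal parsing sketch (field names taken from the sample response; raw stands in for the captured stdout):

import json

# raw would normally be the stdout captured from `ma-anpr test-api ...`
raw = '{"results": [{"ocr": "AB-123-CD", "ocr_conf": 98.5, "ltrb": [120, 230, 380, 290], "ltrb_conf": 95}], "ltrb_proc_sec": 0.15, "ocr_proc_sec": 0.03, "status": "success"}'

data = json.loads(raw)
if data.get("status") == "success":
    for plate in data.get("results", []):
        print(f"{plate['ocr']} ({plate['ocr_conf']}%) at {plate['ltrb']}")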

πŸ†“ FREE vs PAID Comparison

  • Requests/Day: FREE 1000 vs Paid unlimited
  • Speed: FREE ~0.5s (cloud) vs Paid ~0.02s (local GPU)
  • Internet Required: FREE yes vs Paid no (offline OK)
  • Configuration: FREE none vs Paid one-time setup
  • Regions: all 5 in both
  • Models: all in both
  • Price: FREE $0 vs Paid contact sales

πŸŽ“ Use Cases for Free API

  • Evaluation: Try before buying a license
  • Prototyping: Build POC applications
  • Testing: Test accuracy on your specific plates
  • Education: Learn ANPR/ALPR/LPR technology
  • Small Projects: Personal projects under 1000/day
  • Region Testing: Find which region works best
  • Model Comparison: Compare different model sizes

πŸ“ˆ When to Upgrade to Paid License?

Consider upgrading when you need:

  • πŸš€ Unlimited requests (no daily limit)
  • 10-100x faster processing (local GPU)
  • πŸ”’ Offline operation (no internet needed)
  • 🏒 Commercial deployment
  • πŸ“Ή Real-time video processing
  • 🎯 High-volume applications (>1000/day)

πŸ’‘ Pro Tips

# 1. Use specific regions for best accuracy
ma-anpr test-api plate.jpg --region eup  # ✅ Better
ma-anpr test-api plate.jpg --region univ # ⚠️ OK but less accurate

# 2. Test different models to find best speed/accuracy balance
ma-anpr test-api plate.jpg --region eup --detector small_640p_fp32   # Faster
ma-anpr test-api plate.jpg --region eup --detector large_640p_fp32   # More accurate

# 3. Check remaining quota
ma-anpr test-api --check-quota

# 4. Get help
ma-anpr test-api --help

# 5. See all options
ma-anpr test-api --list-models

πŸ” Troubleshooting

Rate limit exceeded?

# Wait until midnight UTC (resets daily)
# OR upgrade to paid license for unlimited requests

No plates detected?

# Try different detector models
ma-anpr test-api plate.jpg --region eup --detector large_640p_fp32

# Try universal region
ma-anpr test-api plate.jpg --region univ

Wrong text recognized?

# Make sure you're using correct region!
ma-anpr test-api plate.jpg --region kr   # For Korean plates
ma-anpr test-api plate.jpg --region eup  # For European plates

# Try larger OCR model
ma-anpr test-api plate.jpg --region eup --ocr large_fp32

πŸ“– Complete Example Script

#!/usr/bin/env python3
"""
Free ANPR Test Script
Test license plate recognition with different regions and models
"""
import subprocess
import json

def test_anpr_free(image_path, region='eup', detector='medium_640p_fp32', ocr='medium_fp32'):
    """Test free ANPR API"""
    
    cmd = [
        'ma-anpr', 'test-api', image_path,
        '--region', region,
        '--detector', detector,
        '--ocr', ocr
    ]
    
    result = subprocess.run(cmd, capture_output=True, text=True)
    
    if result.returncode == 0:
        try:
            data = json.loads(result.stdout)
            return data
        except json.JSONDecodeError:
            return result.stdout
    else:
        return {"error": result.stderr}

# Test European plate with different models
image = "eu-plate.jpg"

print("Testing different detector models...")
for detector in ['small_640p_fp32', 'medium_640p_fp32', 'large_640p_fp32']:
    result = test_anpr_free(image, 'eup', detector)
    print(f"{detector}: {result}")

print("\nTesting different regions...")
for region in ['eup', 'kr', 'na', 'univ']:
    result = test_anpr_free(image, region)
    print(f"{region}: {result}")

🎯 Real-World Example

# Parking lot monitoring (Europe)
ma-anpr test-api parking-cam.jpg --region eup

# Toll booth (USA)
ma-anpr test-api toll-booth.jpg --region na

# Security gate (Korea)
ma-anpr test-api security-cam.jpg --region kr

# Traffic enforcement (China)
ma-anpr test-api traffic.jpg --region cn

# Multi-national (airport parking)
ma-anpr test-api airport.jpg --region univ

🌟 Why Choose MareArts ANPR?

  • FREE tier available - Try before you buy!
  • State-of-the-art AI - Latest deep learning models
  • Multi-region support - Works worldwide
  • Fast processing - ~0.02s with GPU
  • Easy integration - Python, HTTP API, CLI
  • Regular updates - New models and features
  • Commercial ready - Production-grade quality

πŸš€ Get Started Now!

# Install (takes 10 seconds)
pip install marearts-anpr

# Test (takes 20 seconds)
ma-anpr test-api your-plate.jpg --region eup

# Celebrate! πŸŽ‰
# You just recognized your first license plate!


πŸ’¬ What People Are Saying

"Finally, an ANPR API I can test without entering my credit card!" - Developer

"1000 requests/day is perfect for my small parking lot project." - Small Business Owner

"Tested all 5 regions before buying. Confident in my purchase!" - System Integrator

🎁 Summary

MareArts ANPR offers a completely FREE test API with 1000 requests per day. No credit card, no registration, no strings attached. Just install and start recognizing license plates!

  • ✅ Install: pip install marearts-anpr
  • ✅ Test: ma-anpr test-api plate.jpg --region eup
  • ✅ Evaluate: Try all regions and models
  • ✅ Upgrade: When ready for unlimited use

Start your ANPR/ALPR/LPR journey today - completely FREE! πŸš—πŸ“Έ



MareArts ANPR V14 - Advanced Manual Processing & Performance Tuning

 

⚡ MareArts ANPR V14 - Advanced Manual Processing

Ready to take control? In this advanced guide, I'll show you how to manually process detections, measure performance, and optimize for your specific use case.

🎯 Why Manual Processing?

  • Full control over detection pipeline
  • Custom filtering and post-processing
  • Performance measurement and optimization
  • Integration with existing computer vision pipelines
  • Custom confidence thresholds per stage

πŸ”§ Manual Detection & OCR Pipeline

from marearts_anpr import ma_anpr_detector_v14, ma_anpr_ocr_v14
import cv2
from PIL import Image
import time

# Initialize models
detector = ma_anpr_detector_v14(
    "medium_640p_fp32",
    user_name, serial_key, signature,
    backend="cpu",
    conf_thres=0.25,
    iou_thres=0.5
)

ocr = ma_anpr_ocr_v14("medium_fp32", "eup", user_name, serial_key, signature)

# Load image
img = cv2.imread("plate.jpg")

# Step 1: Detect license plates
start = time.time()
detections = detector.detector(img)
detection_time = time.time() - start

print(f"Detection time: {detection_time:.4f}s")
print(f"Found {len(detections)} plate(s)")

# Step 2: Process each detection
results = []
ocr_time = 0

for i, box_info in enumerate(detections):
    # Get bounding box
    bbox = box_info['bbox']  # [x1, y1, x2, y2]
    score = box_info['score']  # Detection confidence
    
    # Crop plate region
    x1, y1, x2, y2 = int(bbox[0]), int(bbox[1]), int(bbox[2]), int(bbox[3])
    crop = img[y1:y2, x1:x2]
    
    if crop.size == 0:
        continue
    
    # Convert BGR crop to RGB PIL image for OCR
    pil_img = Image.fromarray(cv2.cvtColor(crop, cv2.COLOR_BGR2RGB))
    
    # Run OCR
    start = time.time()
    text, confidence = ocr.predict(pil_img)
    elapsed = time.time() - start
    ocr_time += elapsed
    
    print(f"Plate {i+1}: {text} ({confidence}%) - {elapsed:.4f}s")
    
    results.append({
        "ocr": text,
        "ocr_conf": confidence,
        "bbox": [x1, y1, x2, y2],
        "det_conf": int(score * 100)
    })

print(f"\nTotal time: {detection_time + ocr_time:.4f}s")

πŸ“Š Detection Object Structure

# detector.detector(img) returns list of dictionaries:
[
    {
        'bbox': [x1, y1, x2, y2],  # Bounding box coordinates
        'score': 0.95,              # Detection confidence (0-1)
        'class': 'license_plate'    # Object class
    },
    ...
]

# ocr.predict(pil_image) returns tuple:
("ABC1234", 98.5)  # (text, confidence_percentage)

πŸš€ Backend Performance Comparison

backends = ["cpu", "cuda"]  # Add "directml" on Windows

for backend_name in backends:
    try:
        print(f"\nπŸ”§ Testing {backend_name}...")
        
        # Initialize with specific backend
        test_detector = ma_anpr_detector_v14(
            "medium_640p_fp32",
            user_name, serial_key, signature,
            backend=backend_name,
            conf_thres=0.25
        )
        
        # Measure performance
        start = time.time()
        detections = test_detector.detector(img)
        elapsed = time.time() - start
        
        print(f"Detected {len(detections)} plates in {elapsed:.4f}s")
        print(f"Speed: {1/elapsed:.1f} FPS")
        
    except Exception as e:
        print(f"⚠️ {backend_name} not available: {e}")

⚙️ Performance Results (Typical)

  • CPU (i7): detection ~0.15s, OCR ~0.03s, total ~0.18s (~5.5 FPS)
  • CUDA (RTX 3060): detection ~0.008s, OCR ~0.002s, total ~0.01s (~100 FPS)
Result: GPU acceleration = 18x faster! πŸš€

πŸŽ›️ Custom Filtering

# Filter detections by confidence
min_detection_conf = 0.50
min_ocr_conf = 80.0

filtered_results = []

for box_info in detections:
    if box_info['score'] < min_detection_conf:
        continue  # Skip low confidence detections
    
    # Crop the plate region and run OCR
    x1, y1, x2, y2 = [int(v) for v in box_info['bbox']]
    plate_crop = Image.fromarray(cv2.cvtColor(img[y1:y2, x1:x2], cv2.COLOR_BGR2RGB))
    text, conf = ocr.predict(plate_crop)
    
    if conf < min_ocr_conf:
        continue  # Skip low confidence OCR
    
    filtered_results.append({
        "text": text,
        "confidence": conf,
        "bbox": bbox
    })

print(f"After filtering: {len(filtered_results)} high-confidence plates")

🎨 Custom Visualization

import cv2

# Draw boxes and text on image
for result in results:
    x1, y1, x2, y2 = result['bbox']
    text = result['ocr']
    conf = result['ocr_conf']
    
    # Draw rectangle
    cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
    
    # Draw text
    label = f"{text} ({conf}%)"
    cv2.putText(img, label, (x1, y1-10), 
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)

cv2.imwrite("result.jpg", img)

πŸ“Ή Video Processing Pipeline

import cv2

# Open video
cap = cv2.VideoCapture("traffic.mp4")

frame_count = 0
plate_history = {}

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    
    frame_count += 1
    
    # Process every N frames (skip frames for speed)
    if frame_count % 5 != 0:
        continue
    
    # Detect plates
    detections = detector.detector(frame)
    
    for det in detections:
        bbox = det['bbox']
        x1, y1, x2, y2 = int(bbox[0]), int(bbox[1]), int(bbox[2]), int(bbox[3])
        crop = frame[y1:y2, x1:x2]
        
        if crop.size == 0:
            continue
        
        # OCR
        pil_crop = Image.fromarray(cv2.cvtColor(crop, cv2.COLOR_BGR2RGB))
        text, conf = ocr.predict(pil_crop)
        
        # Track plates (simple tracking by position)
        plate_id = f"{x1//50}_{y1//50}"
        
        if plate_id not in plate_history:
            plate_history[plate_id] = []
        plate_history[plate_id].append(text)
        
        # Draw
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, text, (x1, y1-10), 
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)
    
    cv2.imshow('ANPR', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

# Print detected plates
print("\nDetected plates:")
for plate_id, texts in plate_history.items():
    # Most common text for this plate
    most_common = max(set(texts), key=texts.count)
    print(f"  {most_common} (seen {len(texts)} times)")

πŸ’Ύ Batch Processing from Directory

import os
from pathlib import Path

image_dir = Path("./images")
results_all = {}

for img_path in image_dir.glob("*.jpg"):
    print(f"Processing {img_path.name}...")
    
    img = cv2.imread(str(img_path))
    detections = detector.detector(img)
    
    plates = []
    for det in detections:
        bbox = det['bbox']
        x1, y1, x2, y2 = int(bbox[0]), int(bbox[1]), int(bbox[2]), int(bbox[3])
        crop = img[y1:y2, x1:x2]
        
        if crop.size > 0:
            pil_crop = Image.fromarray(cv2.cvtColor(crop, cv2.COLOR_BGR2RGB))
            text, conf = ocr.predict(pil_crop)
            plates.append({"text": text, "conf": conf})
    
    results_all[img_path.name] = plates

# Save results
import json
with open("results.json", "w") as f:
    json.dump(results_all, f, indent=2)

print(f"\nProcessed {len(results_all)} images")

πŸŽ“ Advanced Tips

  • GPU Memory: Use cuda backend for 10-100x speedup
  • Confidence Tuning: Lower conf_thres to 0.15-0.20 for difficult images
  • IOU Threshold: Increase iou_thres to reduce duplicate detections
  • Batch Processing: Process multiple crops at once with ocr.predict([img1, img2, ...])
  • Frame Skipping: Process every Nth frame in videos for speed
  • Multi-threading: Run detector and OCR in separate threads (see the sketch right after this list)
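
As an illustration of the last tip, here is a minimal producer/consumer sketch. It assumes detector and ocr are already initialized as in the examples above; the detection thread pushes plate crops into a queue and the OCR thread consumes them:

import queue
import threading

import cv2
from PIL import Image

crop_queue = queue.Queue(maxsize=32)

def detect_worker(frames):
    # Producer: run detection and push plate crops into the queue
    for frame in frames:
        for det in detector.detector(frame):
            x1, y1, x2, y2 = [int(v) for v in det['bbox']]
            crop = frame[y1:y2, x1:x2]
            if crop.size > 0:
                crop_queue.put(crop)
    crop_queue.put(None)  # Signal end of stream

def ocr_worker():
    # Consumer: read crops from the queue and run OCR
    while True:
        crop = crop_queue.get()
        if crop is None:
            break
        pil_crop = Image.fromarray(cv2.cvtColor(crop, cv2.COLOR_BGR2RGB))
        text, conf = ocr.predict(pil_crop)
        print(f"{text} ({conf}%)")

frames = [cv2.imread(p) for p in ["plate1.jpg", "plate2.jpg"]]
t1 = threading.Thread(target=detect_worker, args=(frames,))
t2 = threading.Thread(target=ocr_worker)
t1.start(); t2.start()
t1.join(); t2.join()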

πŸ” Troubleshooting

No detections?

  • Lower conf_thres to 0.15
  • Try larger model (large_640p_fp32)
  • Check image quality and resolution

Wrong OCR results?

  • Verify correct region (kr, eup, na, cn)
  • Try larger OCR model (large_fp32)
  • Check plate crop quality

Slow performance?

  • Use GPU backend (cuda or directml)
  • Use smaller models (small_640p_fp32, small_fp32)
  • Skip video frames
  • Batch process multiple images

πŸ’‘ Conclusion

Manual processing gives you complete control over the ANPR pipeline. Use it for:

  • ✅ Custom filtering and validation
  • ✅ Performance optimization
  • ✅ Video stream processing
  • ✅ Integration with existing CV pipelines
  • ✅ Advanced visualization and tracking

Happy optimizing! ⚡πŸš—



MareArts ANPR V14 - Easy 3-Method Integration (File, OpenCV, PIL)


πŸš€ MareArts ANPR V14 - Getting Started in 3 Easy Ways

Welcome to MareArts ANPR V14! Today I'll show you how to process license plates using three different methods: from files, OpenCV, or PIL. Plus, the new multi-region switching feature that saves memory.

πŸ“¦ Quick Setup

pip install marearts-anpr
ma-anpr config  # Enter your credentials

🎯 Basic Usage - Three Input Methods

from marearts_anpr import ma_anpr_detector_v14, ma_anpr_ocr_v14
from marearts_anpr import marearts_anpr_from_image_file, marearts_anpr_from_cv2, marearts_anpr_from_pil
import cv2
from PIL import Image

# Initialize detector and OCR (once)
detector = ma_anpr_detector_v14(
    "medium_640p_fp32",
    user_name, serial_key, signature,
    backend="cpu",
    conf_thres=0.25
)

ocr = ma_anpr_ocr_v14("medium_fp32", "eup", user_name, serial_key, signature)

# Method 1: From file (easiest!)
result = marearts_anpr_from_image_file(detector, ocr, "plate.jpg")
print(result)

# Method 2: From OpenCV
img = cv2.imread("plate.jpg")
result = marearts_anpr_from_cv2(detector, ocr, img)
print(result)

# Method 3: From PIL
pil_img = Image.open("plate.jpg")
result = marearts_anpr_from_pil(detector, ocr, pil_img)
print(result)

🌍 NEW: Dynamic Region Switching (Saves 180MB!)

Previously, you needed separate OCR instances for each region. Now use set_region():

# Initialize once with any region
ocr = ma_anpr_ocr_v14("medium_fp32", "eup", user_name, serial_key, signature)

# Switch regions instantly!
ocr.set_region('eup')   # European plates
result = marearts_anpr_from_image_file(detector, ocr, "eu-plate.jpg")

ocr.set_region('kr')    # Korean plates  
result = marearts_anpr_from_image_file(detector, ocr, "kr-plate.jpg")

ocr.set_region('na')    # North American plates
result = marearts_anpr_from_image_file(detector, ocr, "us-plate.jpg")

ocr.set_region('cn')    # Chinese plates
ocr.set_region('univ')  # Universal

Memory savings: Single instance vs multiple = ~180MB saved per additional region!

πŸ“Š Available Regions

  • kr - Korean plates (123κ°€4567)
  • eup - European plates (EU standards)
  • na - North American plates (USA, Canada, Mexico)
  • cn - Chinese plates (δΊ¬A·12345)
  • univ - Universal (all regions, slightly lower accuracy)

🎨 Batch Processing

# Detect plates from multiple images
img1 = cv2.imread("plate1.jpg")
img2 = cv2.imread("plate2.jpg")

detections1 = detector.detector(img1)
detections2 = detector.detector(img2)

# Collect plate crops
plates = []
for det in detections1:
    bbox = det['bbox']
    crop = img1[int(bbox[1]):int(bbox[3]), int(bbox[0]):int(bbox[2])]
    plates.append(Image.fromarray(cv2.cvtColor(crop, cv2.COLOR_BGR2RGB)))

for det in detections2:
    bbox = det['bbox']
    crop = img2[int(bbox[1]):int(bbox[3]), int(bbox[0]):int(bbox[2])]
    plates.append(Image.fromarray(cv2.cvtColor(crop, cv2.COLOR_BGR2RGB)))

# Process all plates at once!
results = ocr.predict(plates)  # Pass list of images

for i, (text, conf) in enumerate(results):
    print(f"Plate {i+1}: {text} ({conf}%)")

πŸ”§ Model Options

Detector models:

  • pico_640p_fp32 - Smallest, fastest
  • micro_640p_fp32
  • small_640p_fp32
  • medium_640p_fp32 - Recommended balance
  • large_640p_fp32 - Most accurate

OCR models:

  • pico_fp32 - Fastest
  • micro_fp32
  • small_fp32
  • medium_fp32 - Recommended
  • large_fp32 - Best accuracy

Backends:

  • cpu - Works everywhere
  • cuda - NVIDIA GPU (10-100x faster!)
  • directml - Windows GPU

πŸ“ Complete Example

from marearts_anpr import ma_anpr_detector_v14, ma_anpr_ocr_v14
from marearts_anpr import marearts_anpr_from_image_file
import os

# Load credentials
user_name = os.getenv('MAREARTS_ANPR_USERNAME')
serial_key = os.getenv('MAREARTS_ANPR_SERIAL_KEY')
signature = os.getenv('MAREARTS_ANPR_SIGNATURE')

# Initialize models
detector = ma_anpr_detector_v14(
    "medium_640p_fp32",
    user_name, serial_key, signature,
    backend="cpu",
    conf_thres=0.25,
    iou_thres=0.5
)

ocr = ma_anpr_ocr_v14("medium_fp32", "eup", user_name, serial_key, signature)

# Process European plate
print("Processing European plate...")
result = marearts_anpr_from_image_file(detector, ocr, "eu-plate.jpg")
print(result)

# Switch to Korean region
ocr.set_region('kr')
print("\nProcessing Korean plate...")
result = marearts_anpr_from_image_file(detector, ocr, "kr-plate.jpg")
print(result)

πŸ’‘ Key Takeaways

  • ✅ Three input methods: file, OpenCV, PIL
  • ✅ Dynamic region switching saves memory
  • ✅ Batch processing for efficiency
  • ✅ Multiple model sizes for different needs
  • ✅ GPU acceleration available

πŸ”— Try It Free!

No license yet? Try the free API (1000 requests/day):

ma-anpr test-api your-plate.jpg --region eup

Happy coding! πŸš—πŸ“Έ



MareArts ANPR HTTP Server Integration - Load Once, Process Fast

πŸš€ MareArts ANPR HTTP Server - Easy Integration for Any Platform

One of the biggest challenges in ANPR (Automatic Number Plate Recognition) integration is the model loading time. Loading deep learning models can take 20+ seconds, which is impractical if you reload them for every image. Today, I'm sharing our solution: a lightweight HTTP server that loads models once and processes images from memory.

πŸ“Š The Problem: Model Loading Overhead

  • Model loading: ~22 seconds (one time)
  • Image processing: ~0.03 seconds per image
  • Traditional approach: Load models for EVERY image = slow!
  • Server approach: Load models ONCE, process thousands of images = fast!

✨ The Solution: Simple HTTP Server

Our simple_server.py creates a FastAPI server that:

  1. Loads ANPR models once at startup
  2. Accepts images through 3 different methods (file upload, raw bytes, base64)
  3. Processes images directly from memory (no disk I/O)
  4. Perfect for integration with C#, Visual Studio, or any HTTP client

πŸ”§ Server Implementation

Here's the core server code:

#!/usr/bin/env python3
import base64
import os

import cv2
import numpy as np
from fastapi import FastAPI, File, UploadFile, Request
from fastapi.responses import JSONResponse
from marearts_anpr import ma_anpr_detector_v14, ma_anpr_ocr_v14, marearts_anpr_from_cv2
from pydantic import BaseModel

# Credentials are loaded from environment variables (see Configuration below)
USER = os.getenv("MAREARTS_ANPR_USERNAME")
KEY = os.getenv("MAREARTS_ANPR_SERIAL_KEY")
SIG = os.getenv("MAREARTS_ANPR_SIGNATURE")

class Base64Image(BaseModel):
    """Request body for the /detect/base64 endpoint."""
    image: str

# ============================================================================
# LOAD MODELS (Once at startup)
# ============================================================================

detector = ma_anpr_detector_v14(
    "medium_640p_fp32", USER, KEY, SIG,
    backend="cpu",  # or "cuda" for GPU
    conf_thres=0.20
)

ocr = ma_anpr_ocr_v14("small_fp32", "eup", USER, KEY, SIG, backend="cpu")

# ============================================================================
# CREATE SERVER
# ============================================================================

app = FastAPI(title="MareArts ANPR Server")

@app.post("/detect")
async def detect_plate_file(image: UploadFile = File(...)):
    """Method 1: Upload image file (multipart/form-data)"""
    image_bytes = await image.read()
    return process_image_bytes(image_bytes)

@app.post("/detect/binary")
async def detect_plate_binary(request: Request):
    """Method 2: Send raw image bytes"""
    image_bytes = await request.body()
    return process_image_bytes(image_bytes)

@app.post("/detect/base64")
async def detect_plate_base64(data: Base64Image):
    """Method 3: Send base64 encoded image"""
    image_bytes = base64.b64decode(data.image)
    return process_image_bytes(image_bytes)

def process_image_bytes(image_bytes):
    """Process image from bytes"""
    nparr = np.frombuffer(image_bytes, np.uint8)
    img = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
    result = marearts_anpr_from_cv2(detector, ocr, img)
    return result
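
The excerpt above omits the health-check endpoint and the server entry point; a minimal sketch of both (assuming the default port 8000 used by the clients below) would look like this:

@app.get("/health")
async def health():
    # Simple liveness probe used by the Python client below
    return {"status": "ok"}

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)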

πŸ’» Client Examples

Python Client (test_server.py)

import requests

def test_server(image_path, server_url="http://localhost:8000"):
    # Health check
    response = requests.get(f"{server_url}/health")
    print(response.json())
    
    # Detect plates
    with open(image_path, 'rb') as f:
        files = {'image': f}
        response = requests.post(f"{server_url}/detect", files=files)
    
    result = response.json()
    if result.get('results'):
        print(f"✅ Detected {len(result['results'])} plate(s):")
        for plate in result['results']:
            print(f"  • {plate['ocr']} ({plate['ocr_conf']}%)")

cURL Command Line

# Method 1: File upload
curl -X POST http://localhost:8000/detect -F "image=@plate.jpg"

# Method 2: Binary data
curl -X POST http://localhost:8000/detect/binary --data-binary "@plate.jpg"

# Health check
curl http://localhost:8000/health

C# / Visual Studio Integration

using System.Net.Http;

// Example 1: Send raw bytes
var client = new HttpClient();
var content = new ByteArrayContent(imageBytes);
content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
var response = await client.PostAsync("http://localhost:8000/detect/binary", content);

// Example 2: Send base64 JSON
var base64Image = Convert.ToBase64String(imageBytes);
var json = JsonSerializer.Serialize(new { image = base64Image });
var content = new StringContent(json, Encoding.UTF8, "application/json");
var response = await client.PostAsync("http://localhost:8000/detect/base64", content);

🎯 Usage Guide

Step 1: Install dependencies

pip install marearts-anpr fastapi uvicorn python-multipart
ma-anpr config  # Configure your credentials

Step 2: Start the server (Terminal 1)

python simple_server.py
# Models load once (~22s), then server waits for requests

Step 3: Send images (Terminal 2 or your application)

python test_server.py your_image.jpg

🌟 Key Benefits

  • Load Once, Use Forever: Models load at startup, not per request
  • Memory Processing: No disk I/O, process images from RAM
  • Multiple Input Methods: File upload, raw bytes, or base64
  • Cross-Platform: Works with Python, C#, JavaScript, or any HTTP client
  • Production Ready: Built on FastAPI with async support
  • Easy Integration: RESTful API with JSON responses

πŸ“ˆ Performance Comparison

  • Traditional (load per image): first image ~22 seconds, every subsequent image ~22 seconds
  • HTTP Server (load once): first image ~22 seconds, every subsequent image ~0.03 seconds

Result: 700x faster for subsequent images! πŸš€
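
The numbers above translate directly into total processing time. A tiny back-of-envelope sketch, using the approximate timings from the table:

# Approximate timings from the table above
load_s, proc_s = 22.0, 0.03

for n in (1, 10, 100, 1000):
    load_once = load_s + n * proc_s        # server approach: load once, process n images
    reload_each = n * (load_s + proc_s)    # traditional approach: reload models per image
    print(f"{n:>5} images: load-once {load_once:8.1f}s vs reload-each {reload_each:9.1f}s")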

πŸ”— Available Endpoints

  • POST /detect - Upload file (multipart/form-data)
  • POST /detect/binary - Send raw bytes (application/octet-stream)
  • POST /detect/base64 - Send base64 JSON
  • GET / - Server info
  • GET /health - Health check

πŸŽ“ When to Use This

  • ✅ Integrating ANPR into C# / Visual Studio projects
  • ✅ Building web applications with ANPR
  • ✅ Processing multiple images efficiently
  • ✅ Microservice architecture
  • ✅ Real-time video processing

πŸ“¦ Complete Example Package

All code is available in our SDK:

  • simple_server.py - HTTP server (202 lines)
  • test_server.py - Python client test (52 lines)
  • README.md - Complete documentation

Install: pip install marearts-anpr

πŸ” Configuration

The server uses environment variables for credentials:

# Configure once
ma-anpr config

# Credentials are stored in ~/.marearts/.marearts_env
# Server automatically loads from environment variables

πŸ’‘ Conclusion

This HTTP server approach makes ANPR integration incredibly simple. Whether you're building a C# desktop application, a web service, or a microservice architecture, you can now integrate license plate recognition with just a few HTTP calls. No need to worry about Python integration complexity - just send HTTP requests!

The key insight: separate model loading from image processing. Load once, process thousands of times.

Happy coding! πŸš—πŸ“Έ

2/02/2025

Find and search which webcam is online on your computer.

Python code:


.

import cv2
import time

def check_cameras():
    print("\nChecking camera indices 0-9...")
    print("----------------------------------------")
    working_cameras = []
    for i in range(10):
        cap = cv2.VideoCapture(i)
        if cap.isOpened():
            # Try to read a frame
            ret, frame = cap.read()
            if ret:
                # Get camera properties
                width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
                height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
                fps = cap.get(cv2.CAP_PROP_FPS)
                # Get backend info
                backend = cap.getBackendName()
                print(f"\n✓ Camera {i} is ONLINE:")
                print(f"  Resolution: {width}x{height}")
                print(f"  FPS: {fps}")
                print(f"  Backend: {backend}")
                print(f"  Frame shape: {frame.shape}")
                # Try to get more detailed format information
                fourcc = int(cap.get(cv2.CAP_PROP_FOURCC))
                fourcc_str = "".join([chr((fourcc >> 8 * k) & 0xFF) for k in range(4)])
                print(f"  Format: {fourcc_str}")
                working_cameras.append(i)
                # Test a few more frames to ensure stability
                frames_to_test = 5
                success_count = 0
                for _ in range(frames_to_test):
                    ret, frame = cap.read()
                    if ret:
                        success_count += 1
                    time.sleep(0.1)
                print(f"  Stability test: {success_count}/{frames_to_test} frames captured successfully")
            else:
                print(f"✗ Camera {i}: Device found but cannot read frames")
            cap.release()
        else:
            print(f"✗ Camera {i}: Not available")
    print("\n----------------------------------------")
    print("Summary:")
    if working_cameras:
        print(f"Working camera indices: {working_cameras}")
    else:
        print("No working cameras found")
    print("----------------------------------------")

def main():
    print("Starting camera detection...")
    check_cameras()
    print("\nCamera check complete!")

if __name__ == "__main__":
    main()

..



The output looks like this:

Starting camera detection...


Checking camera indices 0-9...

----------------------------------------


✓ Camera 0 is ONLINE:

  Resolution: 640x480

  FPS: 30.0

  Backend: V4L2

  Frame shape: (480, 640, 3)

  Format: YUYV

  Stability test: 5/5 frames captured successfully

[ WARN:0@0.913] global cap_v4l.cpp:999 open VIDEOIO(V4L2:/dev/video1): can't open camera by index

[ERROR:0@0.972] global obsensor_uvc_stream_channel.cpp:158 getStreamChannelGroup Camera index out of range

✗ Camera 1: Not available


✓ Camera 2 is ONLINE:

  Resolution: 640x480

  FPS: 30.0

  Backend: V4L2

  Frame shape: (480, 640, 3)

  Format: YUYV

  Stability test: 5/5 frames captured successfully

[ WARN:0@1.818] global cap_v4l.cpp:999 open VIDEOIO(V4L2:/dev/video3): can't open camera by index

[ERROR:0@1.820] global obsensor_uvc_stream_channel.cpp:158 getStreamChannelGroup Camera index out of range

✗ Camera 3: Not available

[ WARN:0@1.820] global cap_v4l.cpp:999 open VIDEOIO(V4L2:/dev/video4): can't open camera by index

[ERROR:0@1.822] global obsensor_uvc_stream_channel.cpp:158 getStreamChannelGroup Camera index out of range

✗ Camera 4: Not available

[ WARN:0@1.822] global cap_v4l.cpp:999 open VIDEOIO(V4L2:/dev/video5): can't open camera by index

[ERROR:0@1.823] global obsensor_uvc_stream_channel.cpp:158 getStreamChannelGroup Camera index out of range

✗ Camera 5: Not available

[ WARN:0@1.824] global cap_v4l.cpp:999 open VIDEOIO(V4L2:/dev/video6): can't open camera by index

[ERROR:0@1.825] global obsensor_uvc_stream_channel.cpp:158 getStreamChannelGroup Camera index out of range

✗ Camera 6: Not available

[ WARN:0@1.825] global cap_v4l.cpp:999 open VIDEOIO(V4L2:/dev/video7): can't open camera by index

[ERROR:0@1.828] global obsensor_uvc_stream_channel.cpp:158 getStreamChannelGroup Camera index out of range

✗ Camera 7: Not available

[ WARN:0@1.828] global cap_v4l.cpp:999 open VIDEOIO(V4L2:/dev/video8): can't open camera by index

[ERROR:0@1.830] global obsensor_uvc_stream_channel.cpp:158 getStreamChannelGroup Camera index out of range

✗ Camera 8: Not available

[ WARN:0@1.830] global cap_v4l.cpp:999 open VIDEOIO(V4L2:/dev/video9): can't open camera by index

[ERROR:0@1.831] global obsensor_uvc_stream_channel.cpp:158 getStreamChannelGroup Camera index out of range

✗ Camera 9: Not available


----------------------------------------

Summary:

Working camera indices: [0, 2]

----------------------------------------


Camera check complete!



So you can see which camera indices are online.
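
Once you know a working index, a minimal follow-up sketch (using index 0 from the summary above) opens that camera and shows a live preview until you press q:

import cv2

cap = cv2.VideoCapture(0)  # use one of the working indices reported above
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow("Camera preview", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()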


Thank you!


9/22/2024

What is TorchOps.cpp.inc in torch-mlir

 

What is TorchOps.cpp.inc?

  • TorchOps.cpp.inc: This file contains implementations of the operations for the torch-mlir dialect. It is typically generated from .td (TableGen) files that define the dialect and its operations.
  • The .td (TableGen) files describe MLIR operations in a high-level, declarative form, and the cmake build process automatically generates .cpp.inc files (like TorchOps.cpp.inc) from these .td files.

How it gets generated:

  1. TableGen: The TableGen tool processes .td files that define the operations and attributes for the torch dialect.
  2. CMake Build: During the CMake build process, the mlir-tblgen tool is invoked to generate various .inc files, including TorchOps.cpp.inc.

Where It Is Generated:

The TorchOps.cpp.inc file is usually generated in the build directory under the subdirectories for the torch-mlir project. For example:


build/tools/torch-mlir/lib/Dialect/Torch/IR/TorchOps.cpp.inc

This file gets included in the compiled source code to provide the implementation of the Torch dialect operations.

How to Ensure It Is Generated:

If the file is missing, it's likely because there was an issue in the build process. Here’s how to ensure it’s generated:

  1. Ensure CMake and Ninja Build: Make sure the CMake and Ninja build process is working correctly by following the steps we discussed earlier. You can check that the TorchOps.cpp.inc file is generated by looking in the build directory:

    ls build/tools/torch-mlir/lib/Dialect/Torch/IR/
  2. Check for TableGen Files: Make sure that the .td files (such as TorchOps.td) are present in the source directory. These are used by mlir-tblgen to generate the .cpp.inc files.

Debugging if Not Generated:

If TorchOps.cpp.inc or similar files are not generated, ensure the following (a small search sketch follows this list):

  • You are running the full build using ninja or make.
  • mlir-tblgen is being invoked during the build process (you should see log messages referencing mlir-tblgen).
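
As mentioned, a small search sketch like this (assuming it is run from the repository root and that the build directory is named build/) can confirm whether the generated files exist anywhere in the build tree:

from pathlib import Path

# Look for any generated Torch dialect .inc files in the build tree
matches = sorted(Path("build").rglob("TorchOps*.inc"))
if matches:
    for p in matches:
        print(p)
else:
    print("No TorchOps*.inc found - re-run the ninja/make build and check for mlir-tblgen in the log")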

IREE test code and explanation

.

from iree import compiler, runtime
import numpy as np
import sys

def print_step(step):
    print(f'Step: {step}', file=sys.stderr)

# MLIR code as a string
module_str = '''
func.func @simple_add(%arg0: tensor<4xf32>, %arg1: tensor<4xf32>) -> tensor<4xf32> {
%0 = arith.addf %arg0, %arg1 : tensor<4xf32>
return %0 : tensor<4xf32>
}
'''

print_step('Compiling module')
compiled_module = compiler.compile_str(module_str, target_backends=['llvm-cpu'])

print_step('Creating runtime config')
config = runtime.Config('local-task')

print_step('Creating system context')
ctx = runtime.SystemContext(config=config)

print_step('Creating VM instance')
vm_instance = runtime.VmInstance()

print_step('Creating VM module')
vm_module = runtime.VmModule.from_flatbuffer(vm_instance, compiled_module, warn_if_copy=False)

print_step('Adding VM module to context')
ctx.add_vm_module(vm_module)

print_step('Getting device')
device = runtime.get_driver('local-task').create_default_device()
print(f'Device: {device}', file=sys.stderr)

print_step('Getting function')
f = ctx.modules.module.simple_add

print_step('Creating device arrays')
arg1 = runtime.asdevicearray(device, np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float32))
arg2 = runtime.asdevicearray(device, np.array([5.0, 6.0, 7.0, 8.0], dtype=np.float32))

print_step('Calling function')
result = f(arg1, arg2)

print_step('Getting result')
print(result.to_host())

print_step('Script completed successfully')

..

To run this code:

  1. Save it to a file, e.g., test_iree.py.
  2. Make sure you have IREE and its Python bindings installed and properly set up in your environment.
  3. Run the script using Python:
    python test_iree.py

This script will:

  1. Define a simple MLIR function that adds two 4-element float32 tensors.
  2. Compile this MLIR code to an IREE module.
  3. Set up the IREE runtime environment.
  4. Create input data as NumPy arrays.
  5. Execute the compiled function with the input data.
  6. Print the result.

The output should show each step of the process and finally print the result, which should be [ 6. 8. 10. 12.].

This example demonstrates the basic workflow for testing MLIR code with IREE using Python. You can modify the MLIR code string and input data to test different functions and operations as needed.



6/04/2024

Embedding an invisible watermark in an image

First, install OpenCV:

pip install opencv-python numpy


Code for invisible watermark embedding:

import cv2
import numpy as np

def embed_watermark(image_path, watermark, output_path):
    # Load the image
    img = cv2.imread(image_path)
    # Ensure the image is in 3 channels RGB
    if img.shape[2] != 3:
        print("Image needs to be RGB")
        return
    # Prepare the watermark
    # For simplicity, the watermark text is repeated to match the image size
    watermark = (watermark * (img.size // len(watermark) + 1))[:img.size]
    watermark = np.array([ord(c) for c in watermark], dtype=np.uint8).reshape(img.shape)
    # Embed watermark by altering the least significant bit
    img_encoded = img & ~1 | (watermark & 1)
    # Save the watermarked image (use a lossless format such as PNG so the LSBs survive)
    cv2.imwrite(output_path, img_encoded)
    print("Watermarked image saved to", output_path)

# Usage
embed_watermark('path_to_your_image.jpg', 'your_watermark_text', 'watermarked_image.png')


Code to retrieve the watermark:

def extract_watermark(watermarked_image_path, original_image_path, output_path):
    # Load the watermarked and the original image
    img_encoded = cv2.imread(watermarked_image_path)
    img_original = cv2.imread(original_image_path)

    # Extract the watermark by comparing the least significant bits
    watermark = (img_encoded & 1) ^ (img_original & 1)
    watermark = (watermark * 255).astype(np.uint8)  # Scale to 0-255 for visibility

    # Save or display the extracted watermark
    cv2.imwrite(output_path, watermark)
    print("Extracted watermark saved to", output_path)

# Usage
extract_watermark('watermarked_image.png', 'path_to_your_image.jpg', 'extracted_watermark.jpg')

2/26/2024

Dominant frequency extraction.

 



Let's say we have channel x length signal data, e.g. EEG (electroencephalogram) or other time series data.

We might wonder which dominant frequencies (in Hz) are present.

The code below analyses this and returns the top 5 dominant frequencies per channel.

.

import numpy as np
from collections import Counter
from scipy.signal import welch

def identify_dominant_frequencies(signal, fs, top_n=5):
    freqs, psd = welch(signal, fs)
    peak_indices = np.argsort(psd)[-top_n:]
    dominant_freqs = freqs[peak_indices]
    return dominant_freqs

..
dominant_freqs = identify_dominant_frequencies(signal, fs, top_n)
dominant_freqs_summary[channel].extend(dominant_freqs) # Append the frequencies
..
median_dominant_freqs = {channel: np.median(freqs) if freqs else None for channel, freqs in dominant_freqs_summary.items()}
..

def get_top_n_frequencies(freq_list, top_n=5, bin_width=1.0):
    # Bin frequencies into discrete intervals
    binned_freqs = np.round(np.array(freq_list) / bin_width) * bin_width
    # Count the frequency of each binned frequency
    freq_counter = Counter(binned_freqs)
    # Find the top N most common binned frequencies
    top_freqs = freq_counter.most_common(top_n)
    # Extract just the frequencies from the top N tuples (freq, count)
    top_freqs = [freq for freq, count in top_freqs]
    return top_freqs

# Initialize a dictionary to store the top 5 frequencies for each channel
top_5_freqs_all_channels = {}
bin_width = 1.0

# Calculate the top 5 frequencies for each channel
for channel, freqs in dominant_freqs_summary.items():
    top_5_freqs = get_top_n_frequencies(freqs, top_n=5, bin_width=bin_width)
    top_5_freqs_all_channels[channel] = top_5_freqs
    print(f"{channel}: Top 5 Frequencies = {top_5_freqs}")

..
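
The fragments above assume that signal, fs, and dominant_freqs_summary already exist. A self-contained usage sketch with synthetic two-channel data (sine waves plus noise, at an assumed sampling rate of fs = 250 Hz) could drive the same pipeline like this:

import numpy as np

fs = 250                        # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)    # 10 seconds of data

# Two synthetic channels, each a mix of sine waves plus noise
channels = {
    "ch1": np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 23 * t) + 0.2 * np.random.randn(t.size),
    "ch2": np.sin(2 * np.pi * 6 * t) + 0.7 * np.sin(2 * np.pi * 40 * t) + 0.2 * np.random.randn(t.size),
}

dominant_freqs_summary = {name: [] for name in channels}
for channel, signal in channels.items():
    dominant_freqs = identify_dominant_frequencies(signal, fs, top_n=5)
    dominant_freqs_summary[channel].extend(dominant_freqs)

for channel, freqs in dominant_freqs_summary.items():
    print(channel, get_top_n_frequencies(freqs, top_n=5, bin_width=1.0))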


2/05/2024

Download all YouTube videos in playlist (python)

pip install pytube

Replace the playlist URL in the string below.

.

from pytube import Playlist, YouTube

def download_video(url, max_attempts=3):
    for attempt in range(1, max_attempts + 1):
        try:
            yt = YouTube(url)
            video = yt.streams.get_highest_resolution()
            video.download()
            print(f"Downloaded: {yt.title}")
            break
        except Exception as e:
            print(f"Error downloading video (attempt {attempt}): {url}\n{e}")
            if attempt == max_attempts:
                print(f"Failed to download video after {max_attempts} attempts: {url}")

# Replace with your playlist URL
playlist_url = 'https://www.youtube.com/playlist?list=xxx'

playlist = Playlist(playlist_url)

# Fetch video URLs
video_urls = playlist.video_urls

# Download each video
for url in video_urls:
    download_video(url)

..


Thank you.

πŸ™‡πŸ»‍♂️

2/01/2024

get list of torch from conda installation

Input: conda list | grep torch

> conda list | grep torch
ffmpeg 4.3 hf484d3e_0 pytorch
libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
pytorch 2.2.0 py3.8_cpu_0 pytorch
pytorch-mutex 1.0 cpu pytorch
torchaudio 2.2.0 py38_cpu pytorch
torchvision 0.17.0 py38_cpu pytorch

1/15/2024

Unreal Engine: create an asset with a Python widget and copy it into the game environment

Refer to the code:

.

import unreal
import os

def main_process(input_args):
    # Use a directory within your user's documents or another location you have write access to
    local_directory = "/Users/user/Documents/Unreal Projects/prj_name/Content/prj_name/Scripts"
    # Example usage
    filename = "chamfered_cube.obj"
    file_path = create_chamfered_cube_obj_file(filename, local_directory, 100.0, 0.1)
    imported_asset_path = import_obj_to_unreal(file_path, "/Game/prj_name")
    place_static_mesh_in_world('/Game/prj_name/chamfered_cube', (1000, 1000, 100))

def place_static_mesh_in_world(mesh_asset_path, location, rotation=(0, 0, 0), scale=(1, 1, 1)):
    # Load the Static Mesh asset
    static_mesh = unreal.load_asset(mesh_asset_path, unreal.StaticMesh)
    # Get the current editor world
    editor_world = unreal.EditorLevelLibrary.get_editor_world()
    # Spawn a new StaticMeshActor in the world
    static_mesh_actor = unreal.EditorLevelLibrary.spawn_actor_from_class(
        unreal.StaticMeshActor, location, rotation
    )
    if static_mesh_actor:
        # Access the StaticMeshComponent property and set the static mesh
        static_mesh_component = static_mesh_actor.get_component_by_class(unreal.StaticMeshComponent)
        if static_mesh_component:
            static_mesh_component.set_static_mesh(static_mesh)
            # Set the scale if necessary
            static_mesh_actor.set_actor_scale3d(unreal.Vector(*scale))
            print(f"Placed Static Mesh at location: {location}")
            return static_mesh_actor
        else:
            print("Failed to access StaticMeshComponent.")
            return None
    else:
        print("Failed to place Static Mesh in the world.")
        return None

def import_obj_to_unreal(obj_file_path, unreal_asset_path):
    # Set up the import task
    import_task = unreal.AssetImportTask()
    import_task.filename = obj_file_path  # The full path to the OBJ file on disk
    import_task.destination_path = unreal_asset_path  # The path in Unreal where to import the asset
    import_task.automated = True
    import_task.save = True

    # Set up the import options for Static Mesh
    options = unreal.FbxImportUI()
    # Set various options on the options object here...

    import_task.options = options

    # Execute the import task
    unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([import_task])

    # Return the imported asset path if successful, None otherwise
    return import_task.imported_object_paths[0] if import_task.imported_object_paths else None

def create_chamfered_cube_obj_file(filename, directory, scale=1.0, chamfer_ratio=0.1):
    # Calculate the chamfer size
    chamfer_size = scale * chamfer_ratio
    half_scale = scale / 2
    inner_size = half_scale - chamfer_size

    # Define the vertices for a chamfered cube
    vertices = [
        # Bottom vertices (4 corners)
        f"v {-inner_size} {-inner_size} {-half_scale}", f"v {inner_size} {-inner_size} {-half_scale}",
        f"v {inner_size} {inner_size} {-half_scale}", f"v {-inner_size} {inner_size} {-half_scale}",
        # Top vertices (4 corners)
        f"v {-inner_size} {-inner_size} {half_scale}", f"v {inner_size} {-inner_size} {half_scale}",
        f"v {inner_size} {inner_size} {half_scale}", f"v {-inner_size} {inner_size} {half_scale}",
        # Chamfer vertices on the bottom (4)
        f"v {-half_scale} {-half_scale} {-inner_size}", f"v {half_scale} {-half_scale} {-inner_size}",
        f"v {half_scale} {half_scale} {-inner_size}", f"v {-half_scale} {half_scale} {-inner_size}",
        # Chamfer vertices on the top (4)
        f"v {-half_scale} {-half_scale} {inner_size}", f"v {half_scale} {-half_scale} {inner_size}",
        f"v {half_scale} {half_scale} {inner_size}", f"v {-half_scale} {half_scale} {inner_size}",
    ]

    # Define the faces for a chamfered cube (using the vertex indices)
    faces = [
        # Bottom square
        "f 1 2 3 4",
        # Top square
        "f 5 6 7 8",
        # Side squares (4 sides)
        "f 1 2 6 5", "f 2 3 7 6",
        "f 3 4 8 7", "f 4 1 5 8",
        # Chamfer triangles (8 triangles)
        "f 1 9 2", "f 2 10 3",
        "f 3 11 4", "f 4 12 1",
        "f 5 13 6", "f 6 14 7",
        "f 7 15 8", "f 8 16 5",
        # Chamfer squares (connecting the triangles - 4 squares)
        "f 9 10 14 13", "f 10 11 15 14",
        "f 11 12 16 15", "f 12 9 13 16",
    ]
    # Ensure the directory exists
    if not os.path.exists(directory):
        os.makedirs(directory)
    # Create a full system file path
    file_path = os.path.join(directory, filename)
    # Writing vertices and faces to the OBJ file
    with open(file_path, 'w') as file:
        for v in vertices:
            file.write(f"{v}\n")
        for f in faces:
            file.write(f"{f}\n")
    print(f"Chamfered Cube OBJ file created at {file_path}")
    return file_path

def create_cube_obj_file(filename, directory):
    # Create a full system file path
    file_path = os.path.join(directory, filename)
    # Cube vertices and faces
    vertices = [
        "v -0.5 -0.5 -0.5", "v -0.5 -0.5 0.5", "v -0.5 0.5 -0.5", "v -0.5 0.5 0.5",
        "v 0.5 -0.5 -0.5", "v 0.5 -0.5 0.5", "v 0.5 0.5 -0.5", "v 0.5 0.5 0.5"
    ]
    faces = [
        "f 1 3 4 2", "f 5 7 8 6", "f 1 5 6 2", "f 3 7 8 4",
        "f 1 5 7 3", "f 2 6 8 4"
    ]
    # Ensure the directory exists
    if not os.path.exists(directory):
        os.makedirs(directory)
    # Writing vertices and faces to the OBJ file
    with open(file_path, 'w') as file:
        for v in vertices:
            file.write(f"{v}\n")
        for f in faces:
            file.write(f"{f}\n")
    print(f"Cube OBJ file created at {file_path}")
    return file_path


..