This directory contains example scripts that demonstrate various features of DIGY. Each example is designed to work with DIGY's execution environments (local, docker, ram, remote).
basic/hello_world.py
A simple "Hello, World!" example that shows basic script execution.
Dependencies: None (uses standard library only)
Run it with:
# Run locally
digy local examples/basic/hello_world.py
# Run in RAM for better performance
digy ram examples/basic/hello_world.py
# Run with debug output
digy --debug local examples/basic/hello_world.py
Expected Output:
Hello, DIGY!
This is a basic example running in the local environment.
You can pass arguments to this script after the filename.
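The script itself needs nothing beyond the standard library. A minimal sketch of what it might look like (the actual file may differ):

# hello_world.py - minimal sketch; the real example may differ
import sys

print("Hello, DIGY!")
print("This is a basic example running in the local environment.")

# Arguments passed after the filename show up in sys.argv
if len(sys.argv) > 1:
    print(f"Arguments received: {sys.argv[1:]}")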
env/environment_info.py
Shows detailed information about the current execution environment, including the Python version, platform, current directory, environment variables, and process information.
Dependencies: None (uses standard library only)
Run it with different environments:
# Local environment (default)
digy local examples/env/environment_info.py
# Docker environment (isolated)
digy docker --image python:3.9 examples/env/environment_info.py
# RAM execution (fastest, no disk I/O)
digy ram examples/env/environment_info.py
# Remote execution (via SSH)
digy remote user@remote-host github.com/pyfunc/yourrepo examples/env/environment_info.py
Example Output:
DIGY Environment Information
==============================
Python Version
--------------
3.12.9 | packaged by Anaconda, Inc. | (main, Feb 6 2025, 18:56:27) [GCC 11.2.0]
Platform
--------
Linux-6.14.11-300.fc42.x86_64-x86_64-with-glibc2.41
Current Directory
-----------------
/home/user/digy
Environment Variables
---------------------
PATH: /usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin
PYTHONPATH: /app
VIRTUAL_ENV: /venv
Process Info
------------
Process ID: 12345
Parent Process ID: 67890
User: user
Effective User: 1000
Environment check complete!
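Everything in the report above is available from the standard library. A condensed sketch of how such a report can be produced (the real script may be organized differently):

# environment_info.py - standard-library sketch of how the report above
# can be produced; the real script may be organized differently
import getpass
import os
import platform
import sys


def section(title):
    print(f"\n{title}\n{'-' * len(title)}")


print("DIGY Environment Information")
print("=" * 30)

section("Python Version")
print(sys.version)

section("Platform")
print(platform.platform())

section("Current Directory")
print(os.getcwd())

section("Environment Variables")
for var in ("PATH", "PYTHONPATH", "VIRTUAL_ENV"):
    print(f"{var}: {os.environ.get(var, '<not set>')}")

section("Process Info")
print(f"Process ID: {os.getpid()}")
print(f"Parent Process ID: {os.getppid()}")
print(f"User: {getpass.getuser()}")
if hasattr(os, "geteuid"):  # Unix only
    print(f"Effective User: {os.geteuid()}")

print("\nEnvironment check complete!")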
data_processing/data_analyzer.py
Demonstrates data analysis with pandas and matplotlib. This example loads a CSV file, prints basic statistics, and saves distribution plots for the numeric columns.
Dependencies: pandas, matplotlib
Run it with sample data:
# Install dependencies
pip install pandas matplotlib
# Run with the included sample data
digy local examples/data_processing/data_analyzer.py --input-file examples/data_processing/sample_data.csv
# Save output to a specific directory
digy local examples/data_processing/data_analyzer.py --input-file examples/data_processing/sample_data.csv --output-dir my_results
# Run in Docker with all dependencies included
digy docker --image python:3.9-slim examples/data_processing/data_analyzer.py --input-file /app/examples/data_processing/sample_data.csv
Example Output:
Loading data from examples/data_processing/sample_data.csv
Loaded 10 rows
Columns: id, value, category, score

Basic Statistics:
             id      value      score
count  10.00000  10.000000  10.000000
mean    5.50000  14.490000  86.000000
std     3.02765   3.926675   6.146363
min     1.00000   9.800000  76.000000
25%     3.25000  11.675000  82.500000
50%     5.50000  13.750000  86.500000
75%     7.75000  16.625000  90.500000
max    10.00000  22.100000  95.000000

Generating plots for numeric columns...
Saved id distribution plot to analysis_output/id_distribution.png
Saved value distribution plot to analysis_output/value_distribution.png
Saved score distribution plot to analysis_output/score_distribution.png
Analysis complete! Results saved to /path/to/analysis_output/
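The core of this analysis takes only a few pandas and matplotlib calls. A condensed sketch, assuming the CLI flags shown above map directly to argparse options (the real script may be structured differently):

# data_analyzer.py - condensed sketch of the steps shown above; the flag
# names come from this README, the structure of the real script may differ
import argparse
from pathlib import Path

import matplotlib
matplotlib.use("Agg")  # non-interactive backend, safe for headless runs
import matplotlib.pyplot as plt
import pandas as pd

parser = argparse.ArgumentParser()
parser.add_argument("--input-file", required=True)
parser.add_argument("--output-dir", default="analysis_output")
args = parser.parse_args()

print(f"Loading data from {args.input_file}")
df = pd.read_csv(args.input_file)
print(f"Loaded {len(df)} rows")
print(f"Columns: {', '.join(df.columns)}")

print("Basic Statistics:")
print(df.describe())

out = Path(args.output_dir)
out.mkdir(parents=True, exist_ok=True)
for col in df.select_dtypes("number").columns:
    fname = out / f"{col}_distribution.png"
    df[col].plot(kind="hist", title=f"{col} distribution")
    plt.savefig(fname)
    plt.close()
    print(f"Saved {col} distribution plot to {fname}")

print(f"Analysis complete! Results saved to {out.resolve()}/")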
web_scraping/website_scraper.py
Demonstrates web scraping with requests and BeautifulSoup. This example fetches a page, extracts the title, description, and links, and saves the results as JSON.
Dependencies: requests, beautifulsoup4, lxml
Install dependencies:
pip install requests beautifulsoup4 lxml
Run it to scrape a website:
# Basic usage (scrapes example.com)
digy local examples/web_scraping/website_scraper.py --url https://example.com
# Scrape a different website
digy local examples/web_scraping/website_scraper.py --url https://pypi.org
# Save results to a custom directory
digy local examples/web_scraping/website_scraper.py --url https://example.com --output-dir scrape_results
# Limit number of links to extract
digy local examples/web_scraping/website_scraper.py --url https://example.com --max-links 10
# Run in Docker with all dependencies included
digy docker --image python:3.9-slim examples/web_scraping/website_scraper.py --url https://example.com
Example Output:
Starting web scraping of https://example.com
Output directory: scrape_results
Maximum links to extract: 20
Fetching https://example.com
Results saved to scrape_results/scrape_example_com_20230622_123456.json
Scraping complete! Results saved to scrape_results/scrape_example_com_20230622_123456.json

Scraping Results:
  URL: https://example.com
  Title: Example Domain
  Links found: 1
  Description: No description
  Top links:
    1. https://www.iana.org/domains/example
       More information...
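The scraping flow above boils down to one HTTP request plus HTML parsing. A sketch, assuming the flags map to argparse options and the results are stored as a simple JSON object (both assumptions; the real script may differ):

# website_scraper.py - sketch of the flow above; flag names come from this
# README, the JSON layout is an assumption
import argparse
import json
from datetime import datetime
from pathlib import Path
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

parser = argparse.ArgumentParser()
parser.add_argument("--url", default="https://example.com")
parser.add_argument("--output-dir", default="scrape_results")
parser.add_argument("--max-links", type=int, default=20)
args = parser.parse_args()

print(f"Fetching {args.url}")
response = requests.get(args.url, timeout=30)
response.raise_for_status()
soup = BeautifulSoup(response.text, "lxml")

title = soup.title.get_text(strip=True) if soup.title else "No title"
links = [urljoin(args.url, a["href"]) for a in soup.find_all("a", href=True)]
links = links[: args.max_links]

out = Path(args.output_dir)
out.mkdir(parents=True, exist_ok=True)
stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
result_file = out / f"scrape_{stamp}.json"
result_file.write_text(json.dumps(
    {"url": args.url, "title": title, "links": links}, indent=2))
print(f"Results saved to {result_file}")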
machine_learning/iris_classifier.py
A complete machine learning workflow example using scikit-learn. This script loads the Iris dataset, splits it into training and test sets, trains a Random Forest classifier, evaluates it, and saves the trained model.
Dependencies: scikit-learn, numpy, joblib
Installation and Setup:
# Create and activate a virtual environment (recommended)
python -m venv ml_env
source ml_env/bin/activate # On Windows: ml_env\Scripts\activate
# Install required packages
pip install scikit-learn numpy joblib
Running the Example:
# Run the script directly with Python
python -m examples.machine_learning.iris_classifier
# Or navigate to the examples directory and run:
cd examples/machine_learning
python iris_classifier.py
Example Output:
Loading Iris dataset...
Splitting data into training and test sets...
Training Random Forest classifier...
Evaluating model...
Saving model...
Training complete!
Model saved to: output/iris_classifier.joblib
Metrics:
accuracy: 0.9
setosa_precision: 1.0
setosa_recall: 1.0
setosa_f1-score: 1.0
...
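A condensed sketch of the workflow the output above describes, with the output/ path taken from the sample run (the real script may differ in structure):

# iris_classifier.py - condensed sketch of the workflow above; paths and
# printed messages follow the sample output, details may differ
from pathlib import Path

import joblib
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, classification_report
from sklearn.model_selection import train_test_split

print("Loading Iris dataset...")
iris = load_iris()
X, y = iris.data, iris.target

print("Splitting data into training and test sets...")
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

print("Training Random Forest classifier...")
model = RandomForestClassifier(random_state=42)
model.fit(X_train, y_train)

print("Evaluating model...")
y_pred = model.predict(X_test)
print(f"accuracy: {accuracy_score(y_test, y_pred):.2f}")
print(classification_report(y_test, y_pred, target_names=iris.target_names))

print("Saving model...")
out = Path("output")
out.mkdir(exist_ok=True)
joblib.dump(model, out / "iris_classifier.joblib")
print(f"Model saved to: {out / 'iris_classifier.joblib'}")
print("Training complete!")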
Note: This is a standalone Python script that demonstrates a complete ML workflow. For DIGY integration examples, see the other examples in this directory.
attachments/file_processor.py
Demonstrates how to work with attached files in your scripts. This example reads each attached file and reports its size, MIME type, and content.
Dependencies: None (uses standard library only)
Run it with file attachments:
# Create some test files
echo "Test content 1" > test1.txt
echo "Test content 2" > test2.txt
# Attach specific files
digy local examples/attachments/file_processor.py --attach test1.txt --attach test2.txt
# Use interactive mode to select files
digy local examples/attachments/file_processor.py --interactive-attach
# Run in Docker with file attachments
digy docker --image python:3.9-slim examples/attachments/file_processor.py --attach test1.txt
# Clean up test files
rm test1.txt test2.txt
Example Output:
Processing 2 attached files:

File 1: test1.txt
  Size: 14 bytes
  Type: text/plain
  Content: Test content 1

File 2: test2.txt
  Size: 14 bytes
  Type: text/plain
  Content: Test content 2

Processed 2 files successfully!
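A sketch of the processing loop, using only the standard library. How DIGY hands attachments to the script is an assumption here; the sketch treats them as plain file paths on the command line:

# file_processor.py - standard-library sketch; treating attachments as
# plain file paths on the command line is an assumption about how DIGY
# hands them to the script
import mimetypes
import sys
from pathlib import Path

paths = [Path(p) for p in sys.argv[1:]]
print(f"Processing {len(paths)} attached files:")

for i, path in enumerate(paths, start=1):
    mime, _ = mimetypes.guess_type(path.name)
    print(f"File {i}: {path.name}")
    print(f"  Size: {path.stat().st_size} bytes")
    print(f"  Type: {mime or 'unknown'}")
    print(f"  Content: {path.read_text().strip()}")

print(f"Processed {len(paths)} files successfully!")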
Note: When using Docker, mount the directory containing your files into the container so the attached paths resolve inside it:
digy docker --image python:3.9-slim -v $(pwd):/data examples/attachments/file_processor.py --attach /data/test1.txt
DIGY supports various authentication methods for secure access to resources. Here are some examples:
# Run with SQL authentication
digy local --auth sql --auth-config dbconfig.json your_script.py
# Example dbconfig.json:
# {
# "database": "mydb",
# "user": "user",
# "password": "password",
# "host": "localhost",
# "port": 5432
# }
# Web-based OAuth2 authentication
digy local --auth oauth2 --auth-config oauth_config.json your_script.py
# Load credentials from environment variables
digy local --auth env your_script.py
# Required environment variables:
# DIGY_AUTH_USERNAME
# DIGY_AUTH_PASSWORD
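Inside your script, the injected credentials can then be read like any other environment variables. A minimal sketch, assuming DIGY exposes them directly through the process environment:

# sketch: reading env-based credentials (variable names from above;
# how DIGY injects them is an assumption)
import os

username = os.environ["DIGY_AUTH_USERNAME"]
password = os.environ["DIGY_AUTH_PASSWORD"]
print(f"Authenticated as {username}")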
You can implement custom authentication by creating a Python module that implements the authenticate() function.
# custom_auth.py
def authenticate(config):
    # Your authentication logic here
    return {"token": "your_auth_token"}
digy local --auth custom:custom_auth --auth-config auth_config.json your_script.py
To test all examples, you can use the following commands:
# Test basic example in different environments
digy local examples/basic/hello_world.py
digy ram examples/basic/hello_world.py
# Test environment info in different modes
digy local examples/env/environment_info.py
digy docker --image python:3.9-slim examples/env/environment_info.py
# Test data analyzer with sample data
digy local examples/data_processing/data_analyzer.py --input-file examples/data_processing/sample_data.csv
# Check output in analysis_output/
ls -l analysis_output/
# Test web scraper with example.com
digy local examples/web_scraping/website_scraper.py --url https://example.com
# Check output in scrape_results/
ls -l scrape_results/
# Create virtual environment for ML dependencies
python -m venv ml_env
source ml_env/bin/activate # On Windows: ml_env\Scripts\activate
pip install scikit-learn pandas matplotlib joblib scipy
# Run ML example
digy local examples/machine_learning/iris_classifier.py --output-dir ml_output
# Check output in ml_output/
ls -l ml_output/
# Create test files
echo "Test content 1" > test1.txt
echo "Test content 2" > test2.txt
# Test file processor
digy local examples/attachments/file_processor.py --attach test1.txt --attach test2.txt
# Clean up
deactivate # Exit virtual environment
rm test1.txt test2.txt
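If you prefer a single command, a small hypothetical helper script can smoke-test these examples with subprocess (the command strings are taken verbatim from this section):

# run_examples.py - hypothetical helper, not part of the repository;
# command strings are taken from this section
import shlex
import subprocess

COMMANDS = [
    "digy local examples/basic/hello_world.py",
    "digy ram examples/basic/hello_world.py",
    "digy local examples/env/environment_info.py",
    "digy local examples/data_processing/data_analyzer.py"
    " --input-file examples/data_processing/sample_data.csv",
    "digy local examples/web_scraping/website_scraper.py --url https://example.com",
]

for cmd in COMMANDS:
    print(f"Running: {cmd}")
    result = subprocess.run(shlex.split(cmd))
    status = "OK" if result.returncode == 0 else f"FAILED ({result.returncode})"
    print(f"  -> {status}")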
If you run into problems, the following commands address common issues:
# Make scripts executable
chmod +x examples/*/*.py
# If running with Docker, ensure your user is in the docker group
sudo usermod -aG docker $USER
newgrp docker
# Check if package is installed
pip list | grep package_name
# Install missing dependencies
pip install package_name
# For ML examples, create a virtual environment
python -m venv ml_env
source ml_env/bin/activate # On Windows: ml_env\Scripts\activate
pip install -r requirements.txt # Or install packages individually
# Check if Docker is running
docker info
# Pull the latest image
docker pull python:3.9-slim
# Run with more verbose output
digy --debug docker --image python:3.9-slim examples/your_script.py
# If you hit SSL certificate errors, upgrade the CA bundle
pip install --upgrade certifi
# If matplotlib cannot open a display (e.g. on headless servers), set a
# non-interactive backend at the top of your script:
import matplotlib
matplotlib.use('Agg')  # Non-interactive backend
If an example still fails, check the logs in ~/.cache/digy/logs/ and rerun with the --debug flag for more verbose output.