Python Automation for System Administration
System administration tasks can be efficiently automated using Python, saving time and reducing human error. I've spent years implementing these techniques in production environments, and I'll share the most effective approaches.
Fabric for Remote Administration
Fabric is a powerful library for remote system administration. It provides a clean interface for executing commands on remote servers and automating deployment processes.
from fabric import Connection, Config
from invoke import Responder

def deploy_application():
    config = Config(overrides={'sudo': {'password': 'your_sudo_password'}})
    with Connection('user@server.com', config=config) as conn:
        # Update application code
        conn.run('cd /app && git pull origin main')
        # Install dependencies
        conn.run('pip install -r requirements.txt')
        # Restart service, answering the sudo password prompt automatically
        sudo_resp = Responder(pattern=r'\[sudo\] password:',
                              response='your_sudo_password\n')
        conn.run('sudo systemctl restart myapp.service', watchers=[sudo_resp])
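Since the Config above already carries the sudo password, Fabric's Connection.sudo() can handle the prompt without a manual Responder. Here is a minimal sketch of that variant, assuming the password is supplied through an environment variable (the name SUDO_PASSWORD is illustrative) rather than hardcoded:

import os
from fabric import Connection, Config

def deploy_application_env():
    # Assumes SUDO_PASSWORD is exported in the environment (illustrative name)
    config = Config(overrides={'sudo': {'password': os.environ['SUDO_PASSWORD']}})
    with Connection('user@server.com', config=config) as conn:
        conn.run('cd /app && git pull origin main')
        conn.run('pip install -r requirements.txt')
        # Connection.sudo() reuses the password stored in the config
        conn.sudo('systemctl restart myapp.service')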
System Monitoring with psutil
The psutil library provides comprehensive system monitoring capabilities. I frequently use it to track resource usage and implement automated alerts.
import psutil
import smtplib
from email.message import EmailMessage

def monitor_resources(threshold=90):
    cpu_usage = psutil.cpu_percent(interval=1)
    memory = psutil.virtual_memory()
    disk = psutil.disk_usage('/')
    if cpu_usage > threshold or memory.percent > threshold:
        send_alert(f"High resource usage detected!\n"
                   f"CPU: {cpu_usage}%\n"
                   f"Memory: {memory.percent}%\n"
                   f"Disk: {disk.percent}%")

def send_alert(message):
    email = EmailMessage()
    email.set_content(message)
    email['Subject'] = 'System Alert'
    email['From'] = 'monitor@example.com'
    email['To'] = 'admin@example.com'
    with smtplib.SMTP('smtp.example.com', 587) as server:
        server.starttls()
        server.login('username', 'password')
        server.send_message(email)
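When an alert fires, it also helps to know which processes are responsible. A small sketch using psutil.process_iter to list the top CPU consumers (the function name top_cpu_processes is illustrative, not part of psutil):

def top_cpu_processes(limit=5):
    # Note: the first per-process cpu_percent reading may be 0.0; sampling
    # twice with a short delay gives steadier numbers.
    processes = list(psutil.process_iter(['pid', 'name', 'cpu_percent']))
    processes.sort(key=lambda p: p.info['cpu_percent'] or 0.0, reverse=True)
    for proc in processes[:limit]:
        print(f"{proc.info['pid']:>6}  {proc.info['cpu_percent']:>5}%  {proc.info['name']}")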
Subprocess for Command Execution
The subprocess module enables programmatic execution of system commands. Here's how I handle command execution with proper error handling:
import subprocess
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def execute_command(command):
    try:
        result = subprocess.run(
            command,
            shell=True,
            check=True,
            capture_output=True,
            text=True
        )
        logger.info(f"Command output: {result.stdout}")
        return result.stdout
    except subprocess.CalledProcessError as e:
        logger.error(f"Command failed: {e.stderr}")
        raise
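A quick note on shell=True: it is convenient for pipelines and redirection, but if a command string ever includes untrusted input it opens the door to shell injection. A minimal variant that passes the arguments as a list and skips the shell entirely (illustrative, not part of the code above):

def execute_command_safe(args):
    # args is a list such as ['df', '-h']; no shell is involved
    result = subprocess.run(args, check=True, capture_output=True, text=True)
    return result.stdout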
Secure Remote Operations with Paramiko
Paramiko provides secure SSH connections for remote operations. I use it when more detailed SSH control is needed:
import paramiko
import logging

def remote_operations(hostname, username, password):
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    try:
        client.connect(hostname, username=username, password=password)
        # Execute commands
        stdin, stdout, stderr = client.exec_command('df -h')
        print(stdout.read().decode())
        # File transfer
        sftp = client.open_sftp()
        sftp.put('local_file.txt', '/remote/path/file.txt')
        sftp.close()
    except Exception as e:
        logging.error(f"Remote operation failed: {str(e)}")
    finally:
        client.close()
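For production use, key-based authentication is usually preferable to passwords. A minimal sketch of the same connection using a private key file (the key path is a placeholder):

def remote_operations_with_key(hostname, username, key_path='/home/admin/.ssh/id_rsa'):
    # key_path is a placeholder; point it at a real private key
    client = paramiko.SSHClient()
    # Loading known host keys is safer than AutoAddPolicy where possible
    client.load_system_host_keys()
    try:
        client.connect(hostname, username=username, key_filename=key_path)
        _, stdout, _ = client.exec_command('uptime')
        return stdout.read().decode()
    finally:
        client.close()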
Task Scheduling with Schedule
The schedule library makes it easy to automate periodic tasks. Here's my approach to implementing reliable scheduled operations:
import schedule
import time
import threading

def run_scheduler():
    while True:
        schedule.run_pending()
        time.sleep(1)

def setup_scheduled_tasks():
    # Daily backup at 2 AM
    schedule.every().day.at("02:00").do(backup_database)
    # System check every 15 minutes
    schedule.every(15).minutes.do(check_system_health)
    # Weekly log rotation
    schedule.every().sunday.do(rotate_logs)
    # Start scheduler in a separate thread
    scheduler_thread = threading.Thread(target=run_scheduler)
    scheduler_thread.start()

def backup_database():
    execute_command('pg_dump -U postgres mydb > backup.sql')

def check_system_health():
    monitor_resources()

def rotate_logs():
    execute_command('logrotate /etc/logrotate.conf')
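To run this as a standalone service, call setup_scheduled_tasks() once and keep the main thread alive; a minimal sketch, assuming the functions above live in the same module:

if __name__ == '__main__':
    setup_scheduled_tasks()
    # The scheduler runs in its own thread; keep the main thread alive
    while True:
        time.sleep(60)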
Windows Remote Management with PyWinRM
For Windows environments, PyWinRM provides remote management capabilities:
from winrm import Protocol

def manage_windows_server(hostname, username, password):
    protocol = Protocol(
        endpoint=f'http://{hostname}:5985/wsman',
        transport='ntlm',
        username=username,
        password=password
    )
    shell = protocol.open_shell()
    try:
        # Check system status
        command_id = protocol.run_command(shell, 'systeminfo')
        stdout, stderr, status_code = protocol.get_command_output(shell, command_id)
        if status_code == 0:
            process_output(stdout)
        else:
            handle_error(stderr)
    finally:
        protocol.close_shell(shell)
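For simpler cases, pywinrm's higher-level Session API avoids manual shell management. A brief sketch (hostname and credentials are placeholders):

import winrm

def check_windows_disks(hostname, username, password):
    session = winrm.Session(hostname, auth=(username, password), transport='ntlm')
    # run_ps executes PowerShell; run_cmd would execute a cmd.exe command
    result = session.run_ps('Get-PSDrive -PSProvider FileSystem')
    if result.status_code == 0:
        return result.std_out.decode()
    raise RuntimeError(result.std_err.decode())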
Comprehensive Monitoring Solution
Here's a complete monitoring solution combining multiple techniques:
import psutil
import schedule
import logging
import json
import time
from datetime import datetime
from pathlib import Path

class SystemMonitor:
    def __init__(self):
        self.log_dir = Path('system_logs')
        self.log_dir.mkdir(exist_ok=True)
        self.setup_logging()
        self.setup_monitoring()

    def setup_logging(self):
        logging.basicConfig(
            filename=self.log_dir / 'monitor.log',
            level=logging.INFO,
            format='%(asctime)s - %(levelname)s - %(message)s'
        )

    def setup_monitoring(self):
        schedule.every(5).minutes.do(self.collect_metrics)
        schedule.every(1).hour.do(self.generate_report)

    def collect_metrics(self):
        metrics = {
            'timestamp': datetime.now().isoformat(),
            'cpu': psutil.cpu_percent(interval=1),
            'memory': psutil.virtual_memory()._asdict(),
            'disk': {disk.mountpoint: psutil.disk_usage(disk.mountpoint)._asdict()
                     for disk in psutil.disk_partitions()},
            'network': psutil.net_io_counters()._asdict()
        }
        self.save_metrics(metrics)
        self.check_thresholds(metrics)

    def save_metrics(self, metrics):
        date_str = datetime.now().strftime('%Y-%m-%d')
        metrics_file = self.log_dir / f'metrics_{date_str}.json'
        try:
            with metrics_file.open('a') as f:
                json.dump(metrics, f)
                f.write('\n')
        except Exception as e:
            logging.error(f"Failed to save metrics: {str(e)}")

    def check_thresholds(self, metrics):
        if metrics['cpu'] > 90:
            self.alert('High CPU usage detected', metrics['cpu'])
        if metrics['memory']['percent'] > 90:
            self.alert('High memory usage detected',
                       metrics['memory']['percent'])

    def alert(self, message, value):
        logging.warning(f"Alert: {message} - Value: {value}")
        # Implement your alert mechanism (email, SMS, etc.)

    def generate_report(self):
        # Generate daily/hourly reports from collected metrics
        pass

if __name__ == '__main__':
    monitor = SystemMonitor()
    while True:
        schedule.run_pending()
        time.sleep(1)
These automation techniques significantly improve system administration efficiency. I've found that combining multiple approaches creates robust, reliable automation solutions. The key is to implement proper error handling, logging, and monitoring to ensure automated tasks run smoothly.
Remember to secure sensitive information, implement proper error handling, and test thoroughly in a development environment before deploying to production. Regular maintenance and updates of automation scripts ensure their continued effectiveness and security.
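On the point about securing sensitive information: none of the passwords in the examples above should live in source code. One minimal pattern is reading credentials from environment variables (the variable names below are illustrative), with a secrets manager as the next step up:

import os

def get_credential(name):
    # Fail loudly if the variable is missing instead of falling back to a default
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

# Example usage with the scripts above (variable names are illustrative)
# smtp_password = get_credential('SMTP_PASSWORD')
# ssh_password = get_credential('SSH_PASSWORD')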