Linux Log Integrity Monitoring Playbook: Detect Tampering Early with journald, auditd, and Remote Syslog

If your Linux server gets compromised, the first thing attackers often do is not encrypt files or kill services.

They clean up traces.

That means deleting command history, rotating logs aggressively, disabling agents, or modifying timestamps so the timeline looks “normal.” For small teams, this is dangerous because incident response quality depends on one thing: can you still trust your logs?

This guide is a practical playbook to build tamper-resistant logging on Linux using tools you already have: journald, auditd, and remote syslog forwarding. No enterprise SIEM required on day one. The goal is simple: make log tampering noisy, hard, and quickly detectable.

Why log integrity is a bigger deal than “having logs”

A lot of teams say, “we have logs, so we’re fine.”

Not exactly.

Security logging has two layers:

  1. Collection — do you record meaningful events?
  2. Integrity — can an attacker change, delete, or forge those events without being caught?

You can have excellent collection and still fail an investigation if integrity is weak.

Typical failure patterns in production Linux environments:

  • logs only stored locally on the same host,
  • no immutable or append-only strategy,
  • weak rsyslog transport security,
  • no alert when auditd stops,
  • no baseline for expected log volume.

If your team is building a stronger baseline first, check this related guide: Linux Security Baseline Audit Checklist for Small Teams.

Architecture that works for small teams

You don’t need a complex architecture to get meaningful protection. Start with this:

  1. Local source logs (journald, auth logs, app logs)
  2. Kernel/user-space auditing via auditd
  3. Remote forwarding to a separate log receiver
  4. Integrity checks and alerts for service disruption/tamper indicators

Golden rule: the box being monitored should not be the only place that stores evidence.

Even if your central stack is simple (another Linux VM + storage), remote copy gives investigators a second source of truth.

Step 1 — Harden journald retention and behavior

Many servers run default journald settings with small retention and volatile buffers. That’s risky.

Set explicit retention and persistence in /etc/systemd/journald.conf:

[Journal]
Storage=persistent
SystemMaxUse=2G
SystemKeepFree=1G
MaxFileSec=1month
ForwardToSyslog=yes
Compress=yes
Seal=yes

Then restart journald:

sudo systemctl restart systemd-journald

Why this matters:

  • Storage=persistent keeps logs after reboot.
  • capped retention prevents sudden log loss due to disk pressure.
  • Seal=yes enables Forward Secure Sealing (where supported; generate keys once with journalctl --setup-keys), which helps detect journal file tampering.

Tip: tune limits to your disk profile. If your server is tiny, lower SystemMaxUse but make it explicit.
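With sealing enabled, you can check journal integrity on a schedule. A minimal sketch, assuming FSS keys have already been generated on this host:

```shell
#!/bin/sh
# Periodic seal check (sketch). FSS keys must exist first:
#   sudo journalctl --setup-keys   # store the verification key OFF this host
# A verification failure on sealed journals is a strong tamper indicator.
if journalctl --verify >/dev/null 2>&1; then
  verify_status="OK"
else
  verify_status="FAILED or unsupported"
fi
echo "journal verify: $verify_status"
```

Run it from cron or a timer and feed the result into whatever alerting you already have.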

Step 2 — Use auditd for high-value security events

journald is broad, but auditd is strong for security-critical tracking: privilege changes, sensitive file modifications, and policy changes.

Install and enable:

sudo apt-get update
sudo apt-get install -y auditd audispd-plugins
sudo systemctl enable --now auditd

Example baseline rules (/etc/audit/rules.d/hardening.rules):

-w /etc/passwd -p wa -k identity
-w /etc/shadow -p wa -k identity
-w /etc/sudoers -p wa -k priv_esc
-w /etc/ssh/sshd_config -p wa -k ssh_change
-w /var/log/auth.log -p wa -k authlog_tamper
# Caution: logging every execve is verbose; scope it (e.g. by euid) on busy hosts
-a always,exit -F arch=b64 -S execve -k command_exec
-a always,exit -F arch=b32 -S execve -k command_exec

Load rules:

sudo augenrules --load
sudo systemctl restart auditd

Now you can query targeted events quickly:

sudo ausearch -k ssh_change -ts today
sudo ausearch -k priv_esc -ts recent

This helps your team answer “who changed what, and when?” with less guesswork.
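For a daily at-a-glance view, aureport can summarize event counts by category. A small sketch that degrades gracefully on hosts without auditd:

```shell
#!/bin/sh
# Quick daily triage (sketch): audit event counts since midnight.
# Needs root and a host running auditd; prints a hint elsewhere.
if command -v aureport >/dev/null 2>&1; then
  summary=$(aureport --summary -ts today 2>/dev/null || echo "no audit data readable (run as root?)")
else
  summary="aureport not installed (apt-get install auditd)"
fi
echo "$summary"
```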

For deeper triage workflows, pair this with: Threat Hunting Linux dengan auditd + journald untuk Tim Kecil (Praktis, Bukan Teori).

Step 3 — Forward logs to a remote host (separate trust boundary)

Local logs can be destroyed once an attacker gains root. Remote forwarding is your resilience layer.

On source host (rsyslog example):

# /etc/rsyslog.d/60-forwarding.conf
# Ingest the journal via imjournal. If you do this, set ForwardToSyslog=no in
# journald.conf (or drop imjournal and rely on imuxsock) to avoid duplicates.
module(load="imuxsock")
module(load="imjournal")

# send all logs over TCP with queue
*.* action(
  type="omfwd"
  target="10.10.10.20"
  port="514"
  protocol="tcp"
  action.resumeRetryCount="-1"
  queue.type="linkedList"
  queue.size="10000"
)

Restart rsyslog:

sudo systemctl restart rsyslog

Minimum best practice:

  • use TCP, not UDP,
  • isolate log receiver network access,
  • protect receiver with strict firewall,
  • keep receiver credentials separate from app hosts.

If possible, add TLS transport and disk-backed queueing for unstable links.
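A sketch of what that TLS + disk-backed-queue upgrade can look like with rsyslog's gtls driver. The certificate paths, receiver hostname, and spool directory below are placeholders for your own PKI and layout:

```
# /etc/rsyslog.d/61-forwarding-tls.conf (sketch; paths/names are examples)
global(
  workDirectory="/var/spool/rsyslog"
  DefaultNetstreamDriver="gtls"
  DefaultNetstreamDriverCAFile="/etc/rsyslog.d/certs/ca.pem"
  DefaultNetstreamDriverCertFile="/etc/rsyslog.d/certs/client-cert.pem"
  DefaultNetstreamDriverKeyFile="/etc/rsyslog.d/certs/client-key.pem"
)

*.* action(
  type="omfwd"
  target="10.10.10.20"
  port="6514"
  protocol="tcp"
  StreamDriver="gtls"
  StreamDriverMode="1"
  StreamDriverAuthMode="x509/name"
  StreamDriverPermittedPeers="logreceiver.example.internal"
  # disk-assisted queue survives restarts and receiver outages
  queue.type="linkedList"
  queue.filename="fwd_tls_queue"
  queue.saveOnShutdown="on"
  action.resumeRetryCount="-1"
)
```

Mutual x509 authentication also stops a compromised host from impersonating the receiver.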

Step 4 — Make tampering attempts visible (alerts, not just data)

Most teams fail here. They store logs but do not alert on suspicious logging behavior.

Add alert conditions for:

  • auditd stopped/restarted unexpectedly,
  • sudden drop in auth log volume,
  • log service config changes (journald.conf, rsyslog configs),
  • time synchronization disruption (chronyd, systemd-timesyncd),
  • mass deletion in /var/log.

Simple detection examples:

# service status sanity check
systemctl is-active --quiet auditd || echo "ALERT: auditd inactive"

# detect recently truncated (zero-byte) files in the log path
find /var/log -type f -mmin -5 -name "*.log" -size 0

This can be integrated with your existing monitoring stack (Prometheus alertmanager, cron notifier, or webhook).
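The checks above can be combined into one cron-able sweep. A sketch; ALERT_WEBHOOK is a placeholder for your notifier endpoint:

```shell
#!/bin/sh
# Tamper-indicator sweep (sketch). Leave ALERT_WEBHOOK empty to just print.
ALERT_WEBHOOK="${ALERT_WEBHOOK:-}"
alerts=""

# Logging services must be running; a stopped auditd is itself a signal.
for svc in auditd rsyslog systemd-journald; do
  systemctl is-active --quiet "$svc" 2>/dev/null || alerts="$alerts $svc-inactive"
done

# Recently truncated log files are a classic wipe indicator.
truncated=$(find /var/log -type f -name "*.log" -mmin -5 -size 0 2>/dev/null | wc -l)
[ "$truncated" -gt 0 ] && alerts="$alerts truncated-logs:$truncated"

if [ -n "$alerts" ]; then
  status="ALERT:$alerts"
  if [ -n "$ALERT_WEBHOOK" ]; then
    curl -fsS -X POST -d "{\"text\":\"$status\"}" "$ALERT_WEBHOOK" >/dev/null 2>&1
  fi
else
  status="OK"
fi
echo "$status"
```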

Step 5 — Protect the timeline (clock integrity)

Attackers may alter time to blur forensic sequencing.

Defensive checklist:

  • enforce NTP sync and alert on drift,
  • store UTC in logs,
  • monitor sudden manual time changes,
  • ensure all hosts in incident scope use consistent time source.

Without reliable time, cross-host event correlation becomes painful and slow.
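A drift check can be scripted directly against chrony. This sketch assumes chronyc is available and falls back to systemd's sync flag; the one-second threshold is an arbitrary example:

```shell
#!/bin/sh
# Clock-sanity check (sketch): alert when estimated offset exceeds a threshold.
MAX_OFFSET_S="1.0"
if command -v chronyc >/dev/null 2>&1; then
  # "System time : 0.0000123 seconds fast of NTP time" -> field 4 is the offset
  offset=$(chronyc tracking 2>/dev/null | awk '/^System time/ {print $4}')
  if [ -n "$offset" ] && awk -v o="$offset" -v m="$MAX_OFFSET_S" 'BEGIN{exit (o+0 <= m+0)}'; then
    clock_status="ALERT offset=${offset}s"
  else
    clock_status="ok offset=${offset:-unknown}"
  fi
elif command -v timedatectl >/dev/null 2>&1; then
  synced=$(timedatectl show -p NTPSynchronized --value 2>/dev/null)
  [ "$synced" = "yes" ] && clock_status="ok ntp-synchronized" || clock_status="ALERT not-synchronized"
else
  clock_status="unknown (no chronyc or timedatectl)"
fi
echo "clock check: $clock_status"
```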

Step 6 — Build an incident mini-playbook for log tamper suspicion

When you suspect tampering, speed matters more than perfect analysis.

Use this quick sequence:

  1. Preserve evidence: snapshot volatile data + copy key logs.
  2. Verify service state: auditd, journald, rsyslog, time sync.
  3. Compare local vs remote logs for gaps.
  4. Identify first divergence timestamp.
  5. Contain host if privilege escalation is likely.
  6. Document chain of custody for investigation.
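Step 1 of the sequence above can be sketched as a small preservation script. EVIDENCE_DIR is a placeholder; in practice, copy the directory off-host immediately:

```shell
#!/bin/sh
# Evidence preservation (sketch): volatile state first, then key logs.
EVIDENCE_DIR="${EVIDENCE_DIR:-/tmp/evidence-$(date -u +%Y%m%d%H%M%S)}"
mkdir -p "$EVIDENCE_DIR"

# Volatile data disappears on reboot or containment, so capture it first.
{ date -u; uptime; who; ps auxww; ss -tanp 2>/dev/null; } > "$EVIDENCE_DIR/volatile.txt" 2>&1

# Copy key logs with timestamps preserved (-a) for timeline reconstruction.
cp -a /var/log/auth.log* /var/log/audit "$EVIDENCE_DIR/" 2>/dev/null

# Hash the copies so later tampering of the evidence itself is detectable.
( cd "$EVIDENCE_DIR" && find . -type f ! -name SHA256SUMS -exec sha256sum {} + > SHA256SUMS )
echo "evidence stored in $EVIDENCE_DIR"
```

Record who ran it and when; that note becomes the start of your chain of custody.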

If you need a broader containment flow, this is relevant: Linux Incident Response Playbook: Practical Troubleshooting and Containment.

Common mistakes (and how to avoid them)

1) “We’ll centralize logs later”

Later often means after an incident. Start with one receiver and improve iteratively.

2) Collect everything, understand nothing

Huge raw data with no priority events is expensive noise. Track high-value controls first.

3) No ownership

If no one owns log integrity, no one notices when pipeline silently breaks.

4) No test drill

Run monthly mini-drills: stop auditd in staging, modify sudoers, simulate auth brute-force, and verify alerts trigger.
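One of those drills can be scripted so it is actually run monthly. A staging-only sketch; the STAGING guard variable is an assumption, adapt it to however you mark non-production hosts:

```shell
#!/bin/sh
# Monthly drill (sketch) for STAGING ONLY: stop auditd, confirm detection fires.
if [ "${STAGING:-no}" = "yes" ]; then
  echo "[drill] stopping auditd"
  systemctl stop auditd
  echo "[drill] sleeping past one alert-check interval"
  sleep 60
  systemctl is-active --quiet auditd || echo "[drill] auditd is down; verify the alert fired"
  echo "[drill] restoring auditd"
  systemctl start auditd
  drill="executed"
else
  drill="skipped (export STAGING=yes on a staging host first)"
fi
echo "drill: $drill"
```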

A good exercise model: Tabletop Exercise Cyber Security Linux untuk Tim Kecil: Simulasi Insiden Tanpa Drama.

30-day rollout plan (realistic for busy teams)

Week 1

  • define critical hosts,
  • enforce persistent journald,
  • validate retention settings.

Week 2

  • deploy baseline auditd rules,
  • document key search queries (ausearch cheatsheet).

Week 3

  • enable remote forwarding for critical systems,
  • test receiver failure and queue behavior.

Week 4

  • add alerts for logging service disruptions,
  • run one tamper simulation drill,
  • record lessons learned and fix gaps.

This cadence is small enough to execute, but strong enough to materially improve your forensic readiness.

KPI to prove this is working

Use simple measurable indicators:

  • % critical hosts forwarding logs remotely,
  • mean time to detect logging pipeline failure,
  • % incidents with complete timeline reconstruction,
  • number of unauthorized config changes detected,
  • auditd uptime over 30 days.

If these metrics trend in the right direction, your detection and investigation capability is improving—not just your dashboard count.

FAQ (Schema-Ready)

1) Is journald alone enough for security investigations?

Not for most production threat scenarios. Journald is useful, but you still need remote forwarding and audit-focused telemetry to resist tampering and improve traceability.

2) Should small teams use auditd even without a full SIEM?

Yes. Auditd gives high-value event trails with low overhead. Start with minimal rules for identity, privilege, and SSH config changes, then expand gradually.

3) What’s the minimum viable setup for log integrity on Linux?

Persistent journald + baseline auditd rules + remote syslog forwarding to a separate host + alerting when logging services fail.

4) How often should we test logging tamper detection?

At least monthly in staging, and after major platform changes. Small controlled drills catch blind spots before real incidents happen.

{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "Is journald alone enough for security investigations?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "No. Journald is helpful, but resilient investigations usually require remote log forwarding and audit-focused telemetry to reduce tampering risk and improve traceability."
      }
    },
    {
      "@type": "Question",
      "name": "Should small teams use auditd even without a full SIEM?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Yes. Auditd provides high-value security events with relatively low overhead. Start with minimal rules for identity, privilege, and SSH configuration changes, then expand over time."
      }
    },
    {
      "@type": "Question",
      "name": "What’s the minimum viable setup for log integrity on Linux?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "A practical minimum is persistent journald, baseline auditd rules, remote syslog forwarding to a separate host, and alerting for logging pipeline failures."
      }
    },
    {
      "@type": "Question",
      "name": "How often should we test logging tamper detection?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "At least monthly in staging and after major platform changes. Controlled drills reveal monitoring gaps before real incidents occur."
      }
    }
  ]
}

Conclusion

Log integrity monitoring is one of those controls that feels optional—until your first serious incident.

You don’t need a giant security platform to do this well. Start with dependable basics:

  • persistent journald,
  • focused auditd rules,
  • remote forwarding to separate trust boundary,
  • alerts for tampering signals,
  • regular simulation drills.

Do these consistently, and your team will move from “we have logs” to “we can trust our evidence when it matters most.”
