Publish local repo state
0
.gemini/settings.json
Normal file → Executable file
1
.gitignore
vendored
Normal file
@@ -0,0 +1 @@
.aider*
75
ai_dev_plan.md
Normal file
@@ -0,0 +1,75 @@
# Personal AI Agent: "About Me" Profile Generator

**Project Goal**

Build a showcase AI system that scans and summarizes your professional/personal work from self-hosted services (primarily Gitea for code/repos, plus Flatnotes/Trilium/HedgeDoc for notes/ideas/projects). The agent answers employer-style questions dynamically (e.g., "Summarize Giordano's coding projects and skills") with RAG-grounded responses, links, and image embeds where relevant.

Emphasize broad AI toolchain integration for skill development and portfolio impact: agentic workflows, RAG pipelines, orchestration, multi-LLM support. No frontend focus — terminal/API-triggered queries only.

**Key Features**

- Periodic/full scanning of services to extract text, summaries, code snippets, links, images.
- Populate & query a local vector DB (RAG) for semantic search.
- Agent reasons, retrieves, and generates responses with evidence (links/images).
- Multi-LLM fallback (DeepSeek primary, Gemini/OpenCode trigger).
- Scheduled/automated updates via pipelines.
- Local/Docker deployment for privacy & control.

**Tools & Stack Overview**
| Category | Tool(s) | Purpose & Why Chosen | Integration Role |
|----------|---------|----------------------|------------------|
| Core Framework | LangChain / LangGraph | Build agent, tools, chains, RAG logic. Modular, industry-standard for LLM apps. | Heart of agent & retrieval |
| Crawling/Extraction | Selenium / Playwright + Firecrawl (via LangChain loaders) | Handle auth/dynamic pages (Gitea login/nav), structured extraction (Markdown/JSON). | Scan web views & APIs |
| Vector Database | Chroma | Local, lightweight RAG store. Easy Docker setup, native LangChain integration. | Store embeddings for fast semantic search |
| LLM(s) | DeepSeek (via API) + Gemini / OpenCode | DeepSeek: cheap, strong reasoning (primary). Gemini/OpenCode: terminal trigger/fallback. | Reasoning & generation |
| Data Pipeline / Scheduling | Apache Airflow (Docker) | Industry standard for ETL-like scans (DAGs). Local install via official Compose. | Schedule periodic scans/updates to Chroma |
| Visual Prototyping | Flowise | No-code visual builder on LangChain. Quick agent/RAG prototyping & debugging. | Experiment with chains before code |
| Script/Workflow Orchestration | Windmill | Turn Python/LangChain scripts into reusable, scheduled flows. Dev-first, high growth. | Reactive workflows (e.g., on-commit triggers) |
| Event-Driven Automation | Activepieces | Connect services event-based (e.g., Gitea webhook → re-scan). AI-focused pieces. | Glue for reactive triggers |
**High-Level Architecture & Flow**

1. **Ingestion Pipeline (Airflow + Crawlers)**
   - Airflow DAG runs on a schedule (daily/weekly) or manually.
   - Task 1: The LangChain agent uses a Selenium/Playwright tool to browse and authenticate to services (e.g., Gitea repos, Flatnotes/Trilium pages).
   - Task 2: The Firecrawl loader extracts structured content (text, code blocks, links, image URLs).
   - Task 3: LangChain chunks, embeds (DeepSeek embeddings), and upserts to the Chroma vector DB.
   - Optional: Activepieces listens for events (e.g., Gitea push webhook) → triggers a partial re-scan.
2. **Agent Runtime (LangChain/LangGraph + DeepSeek)**
   - Core agent (ReAct-style): Receives a query (e.g., via terminal/OpenCode: "opencode query 'Giordano's top projects'").
   - Tools: Retrieve from Chroma (RAG); fetch specific pages/images if needed.
   - LLM: DeepSeek for cost-effective reasoning/summarization. Fall back to Gemini if complex.
   - Output: Natural response with summaries, links (e.g., Gitea repo URLs), and embedded image previews (from scanned pages).
3. **Prototyping & Orchestration Layer**
   - Use Flowise to visually build/test agent chains/RAG flows before committing to code.
   - Windmill wraps scripts (e.g., the scan script) as jobs/APIs.
   - Activepieces adds event-driven glue (e.g., new note in Trilium → notify/update DB).
**Deployment & Running Locally**

- Everything in Docker Compose: Airflow (official image), Chroma, Python services (LangChain agent), optional Flowise/Windmill containers.
- Secrets: Env vars for API keys (DeepSeek, service auth).
- Trigger: Terminal via OpenCode/Gemini CLI → calls the agent endpoint/script.
- Scale: Start simple (manual scans), add Airflow scheduling later.
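The env-var approach to secrets can be sketched as a pre-flight check before bringing the stack up. `DEEPSEEK_API_KEY` is an assumed variable name here; match it to whatever your agent code actually reads:

```shell
# Pre-flight check before `docker compose up` (variable name is an assumption;
# match it to whatever your agent code reads).
check_secrets() {
    if [ -z "${DEEPSEEK_API_KEY:-}" ]; then
        echo "DEEPSEEK_API_KEY is not set; refusing to start the stack." >&2
        return 1
    fi
    echo "API key present; safe to start the stack."
}

# In a launcher script you might then run:
# check_secrets && docker compose up -d
```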
**Skill Showcase & Portfolio Value**

- Demonstrates: Agentic AI, RAG pipelines, web crawling with auth, multi-tool orchestration, cost-optimized LLMs, local/self-hosted infra.
- Broad coverage: LangChain ecosystem + industry ETL (Airflow) + modern AI workflow tools (Flowise/Windmill/Activepieces).
- Low cost: DeepSeek keeps API bills minimal (often under $5/month, even with frequent scans/queries).
**Next Steps (Implementation Phases)**

1. Set up the local Docker env + Chroma + DeepSeek API key.
2. Build basic crawler tools (Selenium + Firecrawl) for Gitea/Flatnotes.
3. Prototype the agent in Flowise, then code it in LangChain.
4. Add an Airflow DAG for scheduled ingestion.
5. Integrate Windmill/Activepieces for extras.
6. Test queries; refine summaries/links/images.
This setup positions you strongly for AI engineering roles while building real, integrated skills.

**Extra Tools to Add**

- AutoMaker
- AutoCoder - these assist with long-running, "set and forget" AI review.
- OpenRouter - a single access point to many models for any CLI, with a usage fee.
- Aider - CLI code and file editing, using OpenRouter for any model.
- Goose - integrates with the system and MCP servers like ClawBot.
1
all_software.md
Normal file
@@ -0,0 +1 @@
@@ -1,67 +0,0 @@
# Phase 0: Full System Backup (CRITICAL)

**Objective:** To create a complete, offline, and verified backup of all critical data from both the Ubuntu and Windows operating systems before beginning the migration process.

**Do not proceed to any other phase until this phase is 100% complete.** Data loss is irreversible.

## Instructions for Backup Operator (Human or AI)

### 1. Identify Backup Target

- **Requirement:** You will need an external storage device (e.g., a USB hard drive or NAS) with enough free space to hold all the data you intend to back up.
- **Recommendation:** This drive should be dedicated to the backup and stored offline (disconnected from the computer) once the backup is complete.

### 2. Backup Ubuntu Data

Your personal files are the top priority. System files can be reinstalled by the new OS.

- **Primary Tool:** `rsync` is the recommended tool for its efficiency and ability to preserve permissions and metadata.
- **Source Directories:** The most common locations for user data on Linux are within the `/home/<username>` directory. You must identify and back up, at a minimum:
  - `/home/<username>/Documents`
  - `/home/<username>/Pictures`
  - `/home/<username>/Music`
  - `/home/<username>/Videos`
  - `/home/<username>/Desktop`
  - `/home/<username>/Downloads`
  - `/home/<username>/dotfiles` (as mentioned in the main plan)
  - Any other project or data directories inside your home folder (e.g., `/home/<username>/dev`, `/home/<username>/workspaces`).
- **Docker Data:**
  - Stop all running containers: `docker stop $(docker ps -aq)`
  - Identify Docker's data directory, typically `/var/lib/docker`. This contains volumes, images, and container configurations. Back this entire directory up.
- **Server Configurations:**
  - Snapcast config: Locate and back up the configuration files (e.g., `/etc/snapserver.conf`, `/etc/snapclient.conf`).
  - Other server configs (Apache, Node.js services): Back up relevant files from `/etc/` and any service data files.
**Example `rsync` Command:**

```bash
# Replace <username>, <external_drive_mount_point>, and <backup_folder_name>
# The -a flag archives, -v is verbose, -h is human-readable, --progress shows progress.
rsync -avh --progress /home/<username>/Documents /<external_drive_mount_point>/<backup_folder_name>/
```

*Run this for each source directory.*
### 3. Backup Windows Data

- **Method:** Boot into your Windows 10 operating system.
- **Source Directories:** Connect your external backup drive. Manually copy the entire contents of your user folders to the backup drive. These are typically located at:
  - `C:\Users\<YourUsername>\Documents`
  - `C:\Users\<YourUsername>\Pictures`
  - `C:\Users\<YourUsername>\Music`
  - `C:\Users\<YourUsername>\Videos`
  - `C:\Users\<YourUsername>\Desktop`
  - `C:\Users\<YourUsername>\Downloads`
- **Thoroughness:** Be meticulous. Ensure you copy all personal data. Do not worry about program files or the Windows directory itself.
### 4. Verification

A backup is not a backup until it is verified.

- **Procedure:** After the copy process is complete for both operating systems, safely eject and reconnect the external drive.
- **Spot Check:** Browse the directories on the backup drive. Open a few files of different types (documents, images, music files) from both the Ubuntu and Windows backups to ensure they are not corrupted and are fully readable.
- **Compare Sizes:** Use a disk usage tool (like `du -sh` on Linux or checking folder properties on Windows) to compare the size of a few source directories with their backed-up counterparts. They should match closely; a large gap means an incomplete copy.
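The size comparison can be scripted on the Linux side. A sketch using `du -sk` (exact byte counts can differ slightly across filesystems, so treat large gaps, not tiny ones, as the red flag):

```shell
# Print source vs backup size for one directory pair (sizes in KiB).
compare_size() {
    local src_kb dst_kb
    src_kb=$(du -sk "$1" | cut -f1)
    dst_kb=$(du -sk "$2" | cut -f1)
    echo "source=${src_kb}KiB backup=${dst_kb}KiB"
}

# Example (placeholder paths -- substitute your real ones):
# compare_size /home/<username>/Documents /<external_drive_mount_point>/<backup_folder_name>/Documents
```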
### 5. Completion

- Once verified, disconnect the external backup drive.
- Store it in a safe, separate physical location.
- You may now proceed to Phase 1.
@@ -1,39 +0,0 @@
# Phase 1: System Reconnaissance Guide

**Objective:** To execute the reconnaissance script that gathers essential information about the current system. This information will be used in subsequent phases to plan the file and software migration.

**Prerequisite:** Ensure you have completed **Phase 0: Full System Backup**. Do not run this script until you have a complete and verified offline backup of your data.

## Instructions for Operator (Human or AI)

### 1. Understand the Script

- **Script Location:** `scripts/01_system_recon.sh`
- **Purpose:** This script is designed to be **non-destructive**. It reads information about the system and saves it to a log file. It does not modify any files or settings.
- **Actions Performed:**
  - Gathers disk, partition, and filesystem information.
  - Calculates the total size of major user directories (Documents, Pictures, etc.).
  - Lists installed software from `apt` and `snap`.
  - Collects detailed information about the Docker setup (containers, images, volumes).
  - Checks for versions of common development languages (Rust, Node, etc.).
  - Looks for evidence of common servers and development workspaces (Eclipse, Arduino).
- **Output:** All findings are saved to `logs/01_system_recon.log`.
### 2. Execution

1. Open a terminal on the Ubuntu machine that is being migrated.
2. Navigate to the `nixos-migration` project directory.
3. Run the script. It may ask for a password, as some commands (like inspecting Docker's data directory or listing packages) can require elevated privileges to get a complete picture.

```bash
sudo ./scripts/01_system_recon.sh
```
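The non-destructive read-and-log pattern such a script relies on can be sketched like this (`capture` is a hypothetical helper, not the actual script's code; the real script gathers many more commands):

```shell
# Append each command's output to the log; never abort the scan on a failure.
LOG="logs/01_system_recon.log"
mkdir -p "$(dirname "$LOG")"

capture() {
    echo "== $* ==" >>"$LOG"       # header so the log is searchable per command
    "$@" >>"$LOG" 2>&1 || true     # record output, including errors, and continue
}

capture uname -a                   # kernel and architecture
capture df -h                      # disk usage per filesystem
capture du -sh "$HOME/Documents"   # size of one user directory
```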
### 3. Review the Output

- Upon completion, the script will have created or updated the log file at `logs/01_system_recon.log`.
- Review this file to ensure the information appears correct and complete. This log is the foundation for all future planning steps.

### 4. Next Steps

Once the reconnaissance is complete and the log file has been generated, you may proceed to **Phase 2: Migration Analysis & Planning**. The data in the log file will be the primary input for this next phase.
0
docs/02_software_migration_plan.md
Normal file → Executable file
0
docs/02_software_migration_plan_filled.md
Normal file → Executable file
@@ -1,105 +0,0 @@
# Phase 3: File Migration Scripting Guide

**Objective:** To prepare for the physical relocation of user data from the old directory structures to the new, consolidated structure on the NixOS system.

**Prerequisite:** A full analysis of the disk and file reconnaissance log (`logs/01_system_recon.log`) must be complete. The target directory structure should be agreed upon.

**Core Principle:** We will not move files directly. We will write a script that **copies** the files. The original data will be left untouched until the new NixOS system is fully configured and the copied data is verified. We will also implement a `--dry-run` feature for safety.

---

### Target Directory for Staging

To avoid disturbing the existing file structure on the 2.7TB drive, the migration script should consolidate all files from the Windows and old Ubuntu partitions into a single, new directory.

- **Staging Directory:** `/mnt/ubuntu_storage_3TB/migration_staging`

The script's primary purpose is to copy data from the other drives *into* this location. From there, you can organize it manually at your leisure after the migration is complete.

---
### Instructions for Operator (Human or AI)

Your task is to create a shell script named `scripts/02_migrate_files.sh`. This script will contain a series of `rsync` commands to copy data from the source drives to the target directories.

#### 1. Script Requirements

- **Shebang:** The script must start with `#!/bin/bash`.
- **Safety:** The script should not perform any operations if run as root without a specific override.
- **Dry-Run Flag:** The script must accept a `--dry-run` argument. If this flag is present, all `rsync` commands should be executed with the `--dry-run` flag, which shows what would be done without making any actual changes.
- **Verbosity:** All commands should be verbose (`-v`) and output human-readable sizes (`-h`) so the user can see the progress.
- **Logging:** The script should log its output to a file in the `logs/` directory.
#### 2. Source Data Locations

The script will need to access data from the following locations. These drives will be mounted on the running NixOS system *before* the script is executed (as defined in `configuration.nix`).

- **Primary Ubuntu Home:** `/home/sam/` on the old root partition. (This will need to be mounted temporarily during migration.)
- **Ubuntu Storage Drive:** The contents of `/dev/sdd1` (which will become `/data`). The script will mostly be organizing files *within* this drive.
- **Windows Storage Drive:** `/mnt/windows-storage` (mounted from `/dev/sdb2`).
- **Windows User Folders:** The script may need to access `C:\Users\<YourUsername>` from one of the `ntfs` partitions.
#### 3. `rsync` Command Structure

Use the `rsync` command for all file copy operations. It is efficient, safe, and preserves metadata.

**Example `rsync` command for the script:**

```bash
# -a: archive mode (preserves permissions, ownership, etc.)
# -v: verbose
# -h: human-readable numbers
# --progress: show progress during transfer
# --exclude='*.tmp': example of excluding files

rsync -avh --progress --exclude='cache' /path/to/source/documents/ /data/work/
```
#### 4. Script Skeleton (to be created in `scripts/02_migrate_files.sh`)

```bash
#!/bin/bash

# --- Configuration ---
LOG_FILE="logs/02_file_migration.log"
DRY_RUN=""

# Check for --dry-run flag
if [ "$1" == "--dry-run" ]; then
    DRY_RUN="--dry-run"
    echo "--- PERFORMING DRY RUN ---"
fi

# --- Helper Functions ---
log() {
    echo "$1" | tee -a "$LOG_FILE"
}

# --- Main Execution ---
log "Starting file migration script..."

# Create target directories.
# Note: mkdir has no --dry-run flag, so skip directory creation during a dry run.
if [ -z "$DRY_RUN" ]; then
    log "Creating target directories..."
    mkdir -p /data/personal /data/work /data/dev /data/backups /data/media
fi

# --- Migration Commands ---
# Add rsync commands here. Be specific.

# Example:
# log "Migrating Documents from Windows..."
# rsync -avh $DRY_RUN /mnt/windows-storage/Users/Sam/Documents/ /data/work/project-archives/

log "File migration script finished."
```
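The "do not run as root without an override" requirement from section 1 is not yet implemented in the skeleton. One way to sketch it (`ALLOW_ROOT` is a hypothetical override variable, not part of the original spec):

```shell
# Refuse to run as root unless the operator explicitly overrides.
require_not_root() {
    if [ "$(id -u)" -eq 0 ] && [ "${ALLOW_ROOT:-0}" != "1" ]; then
        echo "Refusing to run as root (set ALLOW_ROOT=1 to override)." >&2
        return 1
    fi
    return 0
}

# In the real script, call this before any copy operation:
# require_not_root || exit 1
```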
**Next Step:** A developer or another AI instance will now write the full `scripts/02_migrate_files.sh` script based on these instructions and a deeper analysis of the file contents revealed in `logs/02_deeper_scan.log`.
@@ -1,40 +0,0 @@
# Migration Status and Next Steps (Jan 28, 2026)

This document summarizes the current status of the NixOS migration project and outlines the remaining critical steps.

## Current Status

**Phase 1: Data Staging and Relocation - COMPLETE**

* **Initial Data Review:** Identified and confirmed that primary user personal files (Documents, Pictures, Music, Videos, Downloads) were largely already present in `/data/personal`.
* **Missing Project Data Identified:** Discovered missing web projects (XAMPP `htdocs`) and IoT projects (`frei0r`) from the old Windows drive (`/media/sam/8294CD2994CD2111`).
* **Data Staging Completed:** Successfully copied the missing web projects to `/data/work/htdocs` and IoT projects to `/data/work/frei0r`. The `Espressif` toolchain was intentionally excluded.
* **Critical Data Relocation:** Both the entire `/data` directory (containing all staged user data, ~86GB total) and the `nixos-migration` project directory were successfully copied from the original Ubuntu OS drive to the external USB drive `/media/sam/Integral300/`. This is crucial for safeguarding data before the main drive is formatted.
**Phase 2: Deep System Reconnaissance - COMPLETE**

* A comprehensive `04_nixos_recon.sh` script was created and executed.
* Detailed logs (`logs/04_nixos_recon.log`) have been generated, containing:
  * Lists of all installed APT and Snap packages.
  * Information on active systemd services and timers (system-wide and user-specific).
  * Output from Docker commands (version, info, running containers, images, volumes) and a search for `docker-compose.yml` files.
  * Analysis of shell history for frequently used command-line tools.
  * Lists of manually installed binaries in `/usr/local/bin`, `~/bin`, and `~/.local/bin`.

**Gitea Archival - COMPLETE**

* The essential `logs/` and `results/` directories from this `nixos-migration` project have been successfully pushed to the `nixos-4screen` Gitea repository (`ssh://git@gitea.lab.audasmedia.com.au:2222/sam/nixos-4screen.git`). This ensures the reconnaissance data and any future NixOS configuration templates are safely version-controlled.
## Next Steps / Remaining Considerations

1. **Review All Reconnaissance Logs:** A thorough manual review of all logs (`04_nixos_recon.log` and `07_deep_cli_scan.log`) is essential to build your final NixOS configurations. The deep scan successfully identified numerous Cargo-installed CLI tools such as `atuin`, `starship`, `zellij`, and `yazi`.

2. **Build `configuration.nix`:** Use `results/generated_configuration.nix` as a starting template. Cross-reference with the logs to add any missing system-wide packages and services.

3. **Build `home.nix`:** A new draft, `results/generated_home.nix`, has been created. This file is a comprehensive template for using Home Manager to declaratively build your entire terminal environment, including Zsh, Oh My Zsh, Starship, and all the CLI tools discovered in the deep scan.

4. **Backup of Local Application Data (see `06_application_data_notes.md`):** Ensure critical items like GPG/SSH keys are securely backed up.

5. **Fireship Desktop App:** The "fireship" application was not found via standard package managers or `.desktop` files. It is likely an AppImage. You will need to re-download it manually on your new NixOS system.

Once these review steps are complete, you will be ready to begin the NixOS installation.
0
docs/06_application_data_notes.md
Normal file → Executable file
0
logs/01_system_recon.log
Normal file → Executable file
File diff suppressed because one or more lines are too long
0
logs/04_nixos_recon.log
Normal file → Executable file
0
logs/05_hardware_scan.log
Normal file → Executable file
0
logs/06_netplan_config.log
Normal file → Executable file
File diff suppressed because it is too large
148
niri-4screen.md
Normal file
@@ -0,0 +1,148 @@
Niri + 4-Monitor Intel (DP) Migration Notes (Ubuntu 24.04+ → NixOS)

OWNER / CONTEXT
- User: sam (IT)
- Goal: Move to NixOS and daily-drive Niri on a 4-monitor setup.
- Priority: Reliability and broad tool compatibility over “polish”.
- Testing style: often SSH in from another machine, because the local display can go black during compositor/DM experiments.

REMOTE REQUIREMENT (clarified)
- SSH is sufficient even when nobody is logged in locally (sshd runs pre-login).
- Remote GUI login is optional/rare. Do not design around RustDesk-at-greeter on Wayland.
- If remote GUI login is ever needed later, consider adding GNOME+GDM+RDP as a separate capability; keep Niri as the main local session.

HARDWARE SUMMARY
- GPU: Intel iGPU (exact model TBD)
- Outputs: 4x DisplayPort to 4x HP LA2205 monitors
- DRM nodes observed on Ubuntu (node numbering may differ on NixOS):
  - Primary KMS card for the 4 DP outputs: /dev/dri/card2
  - Render node: /dev/dri/renderD129
- Notes: There may be multiple /dev/dri/card* devices. The session must pick the correct device driving the 4 DP outputs.

KNOWN KERNEL / PLATFORM ISSUES
- IOMMU faults / “Operation not permitted” style crashes were avoided on Ubuntu with kernel flags:
  - intel_iommu=off
  - dev_mem_signed_off=1
- These flags may or may not be needed on NixOS; keep them as a known-good baseline and only remove them once stable.

UBUNTU WORKING STATE (IMPORTANT BEHAVIORAL FINDINGS)
1) GDM “gear icon” / Wayland sessions
   - GDM did not show Wayland sessions until Wayland was enabled.
   - /etc/gdm3/custom.conf had WaylandEnable=false. Commenting it out fixed session availability after restarting GDM.

2) .desktop Exec path issue
   - A session .desktop pointing Exec to /home/sam/start-niri.sh caused GDM issues.
   - Home perms were drwxr-x--- (750), so the greeter user couldn’t traverse /home reliably.
   - Fix: Exec must point to a system path (/usr/bin or /usr/local/bin), not /home.

3) niri-session issue (major root cause of login loop)
   - /usr/bin/niri-session existed but the session immediately returned to login.
   - Logs showed:
     Failed to start niri.service: Unit niri.service not found.
     Failed to start niri-shutdown.target: Unit niri-shutdown.target not found.
   - Therefore niri-session was not usable as packaged (missing systemd user units).

4) FINAL WORKING FIX ON UBUNTU (proven)
   - /usr/share/wayland-sessions/niri.desktop set to start Niri directly:
     Exec=/usr/bin/niri --session
   - This bypassed niri-session and made Niri start successfully from GDM.
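The proven fix above can be captured as a complete session file. This is a sketch: the Exec line matches the working Ubuntu fix; the Name/Comment/Type lines are standard desktop-entry boilerplate, not taken from the original notes.

```
[Desktop Entry]
Name=Niri
Comment=Start the Niri Wayland compositor directly (bypasses niri-session)
Exec=/usr/bin/niri --session
Type=Application
```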
SESSION START METHOD (proven)
- Known working from a display manager: Exec = `niri --session`
- Avoid relying on `niri-session` unless NixOS packaging provides the required systemd user units (niri.service, niri-shutdown.target).

PERMISSIONS / SECURITY WORKAROUNDS USED DURING TESTING
- User group membership on Ubuntu: video, render, seat
- Custom udev rules were created to chmod 666 DRM nodes.
  - Result: /dev/dri/card2 and /dev/dri/renderD129 became world-writable.
  - This is NOT desired long term; prefer logind seat ACLs.
- On NixOS, aim to avoid chmod 666 rules unless absolutely needed for debugging.

NIRI CONFIG NOTES
- Config validated successfully on Ubuntu: ~/.config/niri/config.kdl
- Xwayland started via config:
  - spawn-at-startup "Xwayland" ":1"
- Avoid exporting XDG_RUNTIME_DIR manually; let pam/systemd-logind manage it.
- If needed to force a GPU device, Niri supports choosing a render DRM device (exact config syntax is version dependent). On Ubuntu, the correct render node was renderD129.

NIXOS TARGET STATE (WHAT WE WANT)
- Boot to a login method that reliably starts Niri on the Intel GPU with 4 monitors.
- Must keep a working fallback (at minimum TTY + SSH; optionally a full DE).
- Remote recovery/admin always possible via SSH.

LOGIN / DISPLAY MANAGER STRATEGY OPTIONS (pick one)
Option A: greetd + tuigreet (recommended for Niri-first reliability)
- Minimal moving parts, compositor-agnostic.
- Start the session with: `niri --session`.
- Ideal when “polish doesn’t matter” and reliability does.

Option B: GDM (works; proven on Ubuntu)
- Ensure Wayland sessions are enabled.
- Ensure the session Exec is not in /home.
- If `niri-session` is incomplete, start `niri --session` directly.

DISPLAY MANAGER DECISION NOTE
- If SSH-only remote is the requirement: prefer greetd for simplicity.
- If remote graphical login is ever required: consider GDM + GNOME RDP later as a separate capability. (Not required now.)

SCREENSHARE / PORTALS REQUIREMENTS (broad tool compatibility)
- Enable PipeWire + WirePlumber.
- Ensure xdg-desktop-portal is installed and functional in the user session.
- Choose a portal backend compatible with Niri (often portal-gnome and/or portal-gtk; the exact best choice may be NixOS-specific).
- If screencast/screen-share fails in apps: check portal backend selection, permissions prompts, and PipeWire.

GPU/DRM PERMISSIONS
- Avoid global chmod 666 udev rules in the final config.
- Use logind seat/ACLs; add the user to video/render groups if needed.
- When debugging device selection:
  - ls -l /dev/dri /dev/dri/by-path
  - loginctl seat-status seat0

FALLBACK PLAN
- Minimum: TTY + SSH access always available.
- Optional: install a full fallback DE only if needed (GNOME or Plasma).
- Not required for Niri; just a safety net.

DEBUG / TROUBLESHOOTING CHECKLIST (capture these on failure)
- niri config:
  - niri validate
- user session logs:
  - journalctl --user -b -l --no-pager | tail -n 300
- kernel DRM messages:
  - journalctl -b -k -l --no-pager | grep -iE "drm|i915|kms|atomic|permission" | tail
- device inventory:
  - ls -l /dev/dri /dev/dri/by-path
- session type:
  - echo $XDG_SESSION_TYPE
  - loginctl session-status

ACCEPTANCE CRITERIA (DONE WHEN)
- Niri starts reliably after reboot from the chosen DM
- All 4 monitors are active consistently
- Screen sharing works in at least one browser-based app and one native app
- SSH recovery works even if the local display is broken
- No chmod 666 DRM hacks required in the final config (preferred)

OPEN QUESTIONS FOR NIXOS MIGRATION
- Exact Intel GPU model + correct DRM node mapping on NixOS (may differ)
- Whether the kernel flags are still required on NixOS
- Whether NixOS niri packaging includes full systemd integration units (niri.service, niri-shutdown.target)
- Best portal backend combo for Niri screencast on NixOS
@@ -1,21 +0,0 @@
/home/sam/Desktop
/home/sam/snap/code
/home/sam/Arduino
/home/sam/Music
/home/sam/.arduino15/packages/arduino
/home/sam/.arduino15/packages/esp32
/home/sam/Documents
/home/sam/Pictures
/home/sam/.config/Code
/home/sam/.cache/arduino
/home/sam/Videos
/home/sam/.rustup/downloads
/home/sam/dotfiles
/home/sam/Downloads
8.0K  /home/sam/Desktop
20M   /home/sam/Documents
1.3G  /home/sam/Downloads
1.9M  /home/sam/Pictures
4.0K  /home/sam/Music
31M   /home/sam/Videos
96M   /home/sam/Arduino
54
plan.md
Normal file
@@ -0,0 +1,54 @@
You are a devops engineer.
You are brief and concise and will help guide me through my plan.

Plan is to migrate my Ubuntu system on the main drive with the OS installed to NixOS.
I think I have already backed up the required files from the Windows dual boot and Ubuntu to the Integral300 mount at /media/sam/Integral300/

We need to check the nvme0n1p5 drive to make sure I have not missed a partition.
We need to make a list of software that needs to be installed on the NixOS. There are several files, and some folders to look through.

We need to create a finalized list of software to ensure we have what we need.

We do not need to include everything in the list, just the things I have installed and need that are not generic system utils. The list needs to be concise and without duplicates.

Then we will be making a configuration.nix and using Home Manager and flake.nix for the installation of the new NixOS, which we will store on the Integral300 and push to a Gitea server I have on my home network.

I have dotfiles that will need to be included, and we will use Home Manager.

We will be using Niri as the primary display environment.

List folders and what they are and how to extract software.
Some software is in the ai dev plan. If this is easy to include, that's OK. Otherwise I can integrate that once the system is running.
This approach applies to anything else that may be problematic. We do not have to go all in at once. Get the main things working with Niri, display manager, essential bash/zsh utils, then dotfiles with Home Manager, new AI, etc. Keep adding to a successful install.
I have a second machine on the desktop that can be used to SSH into this new NixOS if need be.
We need to ensure we set the IP to 192.168.20.27

ai_dev_plan.md: this file contains software and a plan for a complex AI development setup. If these files can be included without major hassle, that is fine. Otherwise skip, and I can implement them as part of the incremental setup.

niri-4screen.md: this has documentation on how I implemented Niri on my Ubuntu and advice on how to implement it on NixOS, along with the display manager, Wayland, etc.

previous_setup.md: has info on how I set up my last system. This one is slightly different with more software, Niri, etc. But the idea and approach, particularly with Home Manager and dotfiles, is important, along with SSH etc.

dotfiles are on this system at /media/sam/Integral300/data/home_sam_ubuntu/dotfiles/

previous_setup_software.md: has more information on software that needs to be consolidated.

setup.md and software_to_add.md are again more files for software consolidation. Apprise and Obsidian can be left out; these will be installed as Docker containers later.

We need development frameworks for Python, PHP, docker-compose, Node.

Folder /docs has more directions for the PLAN and software list.

Folder /logs has some scans and hardware profiles which can be used for planning and software, including looking at the disks. Please ask for more info on the disks if needed.

Folder /results has the original migration configuration.nix results. We will eventually replicate this and create a new, more up-to-date one. This can be used as a reference if needed.

Reminder: we are only formatting and reinstalling on the drive with the Ubuntu OS and Windows partition.

Please ask questions, and request access to file systems where needed.

Summary.

Build list. Finalize list in collaboration with me. Build configuration.nix. Store in Integral300 and Gitea. Install NixOS, configure NixOS with the setup we have created.
192
previous_setup.md
Normal file
@@ -0,0 +1,192 @@
# NixOS + Home Manager Setup Overview (sam/nixos)

This document is a practical overview of how this NixOS setup was built and how
“dotfiles” are managed, so another AI session (or you later) can replicate it on
another machine.

Repo: `ssh://git@gitea.lab.audasmedia.com.au:2222/sam/nixos.git`

## Goals of this setup

- Reproducible NixOS install via flakes (`nixos-rebuild` / `nixos-install`)
- Home Manager managed user config (zsh, kitty, nvim config, etc.)
- KDE Plasma + Hyprland selectable at SDDM login
- Neovim works reliably on NixOS:
  - config tracked in git
  - plugins installed via lazy.nvim into a writable directory
  - avoid writing any lockfiles into `/nix/store` (read-only)

## High-level architecture

- System config: `hosts/<host>/configuration.nix`
- Hardware config: `hosts/<host>/hardware-configuration.nix`
  - generated per-machine during install, then committed
- Home Manager (as NixOS module): `home/sam/home.nix`
- Neovim config stored in repo: `home/sam/nvim/...`

### Repo structure (typical)

- `flake.nix`
- `hosts/aspire-laptop/configuration.nix`
- `hosts/aspire-laptop/hardware-configuration.nix`
- `home/sam/home.nix`
- `home/sam/nvim/` (init.lua, lua/, lazy-lock.json from old setup if needed)
- `scripts/install-from-iso.sh`

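A minimal `flake.nix` matching this layout might look like the following. This is a sketch, not the repo's actual file: the host name `aspire-laptop` comes from the structure listed above, and the 24.05 input branches are assumptions.

```nix
{
  description = "NixOS + Home Manager flake (sketch)";

  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";
    home-manager = {
      url = "github:nix-community/home-manager/release-24.05";
      inputs.nixpkgs.follows = "nixpkgs";
    };
  };

  outputs = { self, nixpkgs, home-manager, ... }: {
    nixosConfigurations.aspire-laptop = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [
        ./hosts/aspire-laptop/configuration.nix
        # Home Manager wired in as a NixOS module, per the architecture above
        home-manager.nixosModules.home-manager
        {
          home-manager.useGlobalPkgs = true;
          home-manager.users.sam = import ./home/sam/home.nix;
        }
      ];
    };
  };
}
```

With this shape, `nixos-rebuild switch --flake /etc/nixos#aspire-laptop` rebuilds both the system and the user environment in one step.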
## Installation procedure (wipe disk)

### BIOS notes
- Secure Boot disabled on the Acer test laptop for easiest install.
  (If Secure Boot is locked by a BIOS Supervisor password, bare-metal install may
  be blocked; use a VM test instead.)

### From the NixOS graphical ISO (live environment)
1. Connect to the internet.
2. Clone repo to the live environment:
   - `git clone ssh://git@gitea.lab.audasmedia.com.au:2222/sam/nixos.git /tmp/nixos`

3. Partition/mount (WIPES DISK):
   - Identify disk (e.g. `/dev/sda` or `/dev/nvme0n1`)
   - Run:
     - `sudo DISK=/dev/<disk> bash /tmp/nixos/scripts/install-from-iso.sh`

   This creates:
   - EFI partition (vfat)
   - Btrfs root with subvolumes `@` and `@home`
   - Mounts under `/mnt` and generates `/mnt/etc/nixos/hardware-configuration.nix`

4. Copy repo into target (stash the generated hardware config first, since this step removes it):
   - `sudo cp /mnt/etc/nixos/hardware-configuration.nix /tmp/hardware-configuration.nix`
   - `sudo rm -rf /mnt/etc/nixos`
   - `sudo mkdir -p /mnt/etc`
   - `sudo cp -a /tmp/nixos /mnt/etc/nixos`

5. Copy the generated hardware config into the repo host path:
   - `sudo cp -f /tmp/hardware-configuration.nix /mnt/etc/nixos/hosts/<host>/hardware-configuration.nix`

6. Install:
   - `sudo nixos-install --flake /mnt/etc/nixos#<host>`
   - reboot

### After first boot
- Set password for `sam` if needed:
  - `sudo passwd sam`
- If using Tailscale:
  - `sudo tailscale up`

## SSH access (to administer remotely)

This setup enabled the OpenSSH server via NixOS config.

- `services.openssh.enable = true;`
- `services.openssh.openFirewall = true;`
- Password auth was enabled for convenience in testing (not best practice).

To apply:
- `sudo nixos-rebuild switch --flake /etc/nixos#<host>`

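As a single Nix fragment, the options above amount to roughly the following. Note that `settings.PasswordAuthentication` is the current option path on recent NixOS releases; older releases used `services.openssh.passwordAuthentication` directly.

```nix
services.openssh = {
  enable = true;
  openFirewall = true;
  # Convenience for testing only; switch to key-based auth later.
  settings.PasswordAuthentication = true;
};
```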
## “Dotfiles” / config management approach (what we actually did)

### The key rule
Home Manager symlinks managed files into `/nix/store` (read-only). That is fine
for config files, but NOT fine for files that apps need to write to at runtime.

### Neovim (special case)
Neovim + lazy.nvim expects to write:
- lockfile
- plugin installs
- cache/state

So:

1) The Neovim config code is kept in git and linked by Home Manager, but we do
NOT have HM own the entire `~/.config/nvim` directory.

We link only:
- `~/.config/nvim/init.lua`
- `~/.config/nvim/lua/`

Example Home Manager linking (conceptual):
- `xdg.configFile."nvim/init.lua".source = ./nvim/init.lua;`
- `xdg.configFile."nvim/lua".source = ./nvim/lua;`

2) lazy.nvim is configured to write its lockfile to a writable location:
- lockfile path: `vim.fn.stdpath("data") .. "/lazy-lock.json"`
  (=> `~/.local/share/nvim/lazy-lock.json`)

3) Plugins are installed by lazy.nvim into:
- `~/.local/share/nvim/lazy/`

4) After a new install / new machine, bootstrap plugins with:
- `nvim --headless "+Lazy! sync" "+qa"`

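Conceptually, the `init.lua` bootstrap that follows these rules looks roughly like this. It is a sketch, not the repo's actual file; the `"plugins"` spec module name is an assumption about how the specs under `lua/` are organized.

```lua
-- Bootstrap lazy.nvim into a user-writable data dir (never /nix/store)
local lazypath = vim.fn.stdpath("data") .. "/lazy/lazy.nvim"
if not vim.loop.fs_stat(lazypath) then
  vim.fn.system({
    "git", "clone", "--filter=blob:none",
    "https://github.com/folke/lazy.nvim.git", lazypath,
  })
end
vim.opt.rtp:prepend(lazypath)

require("lazy").setup("plugins", {
  -- Keep the lockfile out of the HM-managed config dir (rule 2 above)
  lockfile = vim.fn.stdpath("data") .. "/lazy-lock.json",
})
```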
### Why we avoided Nix-managed Neovim plugins in HM
If `programs.neovim.plugins = ...` is used, Neovim may load plugins from a
read-only Nix “vim pack dir” under `/nix/store/...`.
Some plugins (notably treesitter) try to write build artifacts into the plugin
directory, which fails on read-only paths.

Therefore:
- Nix installs `nvim` + dependencies (node/python/rg/fd/compilers).
- lazy.nvim installs the plugins at runtime into user-writable dirs.

### Other tools
Most other CLI tools can be installed declaratively via NixOS or Home Manager.
Their configs can be safely managed by HM as symlinks (read-only is fine).

## Notable fixes/decisions made during setup

- If you see errors like “Read-only file system” writing `lazy-lock.json`,
  it means HM is managing the lockfile path. Fix by moving the lockfile to the
  data dir and not linking `lazy-lock.json` into `/nix/store`.

- Treesitter module name mismatch was fixed in config to handle upstream changes:
  attempt `require("nvim-treesitter.config")` and fall back to
  `require("nvim-treesitter.configs")`.

- Avante was disabled on low-power machines by removing/renaming its plugin spec
  file so lazy.nvim does not load it.

- Git remote update issues were resolved using:
  - `git fetch origin`
  - `git pull --rebase origin main`
  - `git push`

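The treesitter fallback mentioned above, as a conceptual Lua fragment. The setup options differ between the renamed modules upstream, so this only shows the require guard, not the full configuration:

```lua
-- Try the new module name first, then fall back to the legacy one.
local ok, ts = pcall(require, "nvim-treesitter.config")
if not ok then
  ok, ts = pcall(require, "nvim-treesitter.configs")
end
if ok then
  -- call ts.setup(...) with your options here; the accepted
  -- option shape depends on which module was loaded
end
```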
## Adding programs (basic workflow)

### System-wide packages
Edit:
- `hosts/<host>/configuration.nix`
Add to:
- `environment.systemPackages = with pkgs; [ ... ];`
Apply:
- `sudo nixos-rebuild switch --flake /etc/nixos#<host>`

### User-only packages
Edit:
- `home/sam/home.nix`
Add to:
- `home.packages = with pkgs; [ ... ];`
Apply:
- `sudo nixos-rebuild switch --flake /etc/nixos#<host>`

### Then commit + push
- `cd /etc/nixos`
- `git add -A`
- `git commit -m "..." && git push`

## Secrets (do not put in git)
Do not commit API keys (Gemini/OpenAI/etc.) into this repo.

Preferred:
- store secrets outside git (password manager) and export into your shell
- or use a secret manager like `sops-nix` later

Example (local-only) environment file:
- `~/.config/environment.d/10-secrets.conf`
  - contains `GEMINI_API_KEY=...`
  - not tracked in git

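`environment.d` files are picked up by the systemd user session; for interactive shells you can also source the same `KEY=value` file. A hedged sketch of a shell helper (the function name and default path are illustrative, not from the repo; plain `KEY=value` lines source cleanly as long as values need no quoting):

```shell
# load_secrets: export KEY=value pairs from a local-only env file.
# set -a marks every variable assigned while it is active for export.
load_secrets() {
  local f="${1:-$HOME/.config/environment.d/10-secrets.conf}"
  [ -r "$f" ] || return 0
  set -a
  . "$f"
  set +a
}
```

Called from a zshrc/bashrc managed by Home Manager, this keeps the key material itself out of git while the loading logic stays declarative.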
## References
- NixOS Manual: https://nixos.org/manual/nixos/stable/
- Home Manager Manual: https://nix-community.github.io/home-manager/
- Flakes: https://nixos.wiki/wiki/Flakes
- Packages/options search: https://search.nixos.org/
112
previous_setup_software.md
Normal file
@@ -0,0 +1,112 @@
System (NixOS) services / core

- NetworkManager
- OpenSSH server (sshd) (password auth enabled)
- Tailscale (client)
- PipeWire audio (Pulse/ALSA)
- Firewall (enabled)

Shell / terminal

- zsh (default shell)
- kitty (terminal emulator)

Browsers / GUI apps

- Google Chrome
- VS Code
- Thunderbird

CLI / terminal utilities

- git, curl, wget, jq
- ripgrep (rg), fd
- bat, btop
- eza, zoxide, fzf
- starship
- atuin
- zellij
- lazygit
- gh (GitHub CLI)
- borgbackup
- yazi
- tealdeer (tldr)
- navi
- dua
- wl-clipboard, xclip

Build / dev dependencies

- neovim
- gcc, gnumake, unzip
- nodejs, python3, pynvim (Neovim providers)

Docs / LaTeX

- pandoc
- texlive (scheme-small)
- zathura (+ PDF backend as configured)

Neovim (config + plugins)

- Neovim config stored in repo: home/sam/nvim
- Plugin manager: lazy.nvim
- Plugins (from your lazy-lock.json, with Avante disabled):
  - which-key.nvim
  - vimtex
  - nvim-treesitter
  - telescope.nvim + telescope-themes
  - mason.nvim + mason-lspconfig.nvim
  - nvim-lspconfig
  - conform.nvim
  - nvim-lint
  - nvim-cmp + cmp-* + LuaSnip
  - nvim-tree.lua + nvim-web-devicons
  - gitsigns.nvim
  - Comment.nvim
  - nvim-dap + nvim-dap-ui + nvim-nio
  - Themes: catppuccin, tokyonight, onedark, kanagawa, gruvbox, everforest, dracula
  - Markdown: vim-markdown + tabular, live-preview.nvim
  - lualine.nvim, plenary.nvim, dressing.nvim, nui.nvim (deps)
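For the migration, the CLI utilities above map fairly directly onto a Home Manager package list. A sketch only — the nixpkgs attribute names below are assumptions and should be verified on search.nixos.org before use:

```nix
# Sketch: CLI utilities from the inventory above as a home.packages list
home.packages = with pkgs; [
  git curl wget jq
  ripgrep fd
  bat btop
  eza zoxide fzf
  starship atuin zellij lazygit
  gh borgbackup yazi
  tealdeer navi dua
  wl-clipboard xclip
];
```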
0
results/configuration.nix
Normal file → Executable file
@@ -1,145 +0,0 @@
# NixOS Configuration Template
# Path: results/configuration.nix.template
#
# This is a starting point for your new NixOS configuration.
# Review and edit this file carefully.
# You will use this file during the NixOS installation.

{ config, pkgs, ... }:

{
  imports =
    [ # Include the results of the hardware scan.
      # Path to this file will be /mnt/etc/nixos/hardware-configuration.nix
      # after the installation script generates it.
      ./hardware-configuration.nix
    ];

  # Bootloader.
  boot.loader.systemd-boot.enable = true;
  boot.loader.efi.canTouchEfiVariables = true;

  # --- NETWORKING ---
  networking.hostName = "nixos-desktop"; # Define your hostname.
  networking.networkmanager.enable = true;
  # networking.wireless.enable = true; # Uncomment for WiFi support.

  # Set your time zone.
  time.timeZone = "Australia/Sydney"; # CHANGE THIS to your time zone, e.g. "Europe/Berlin"

  # --- USER ACCOUNTS ---
  users.users.sam = {
    isNormalUser = true;
    description = "Sam";
    extraGroups = [ "networkmanager" "wheel" "docker" ]; # "wheel" allows sudo
    packages = with pkgs;
      [ # You can install user-specific packages here, but it's often better
        # to manage them system-wide below unless you have multiple users.
      ];
  };

  # --- SOFTWARE PACKAGES ---
  # List packages you want to install system-wide.
  # Refer to docs/02_software_migration_plan.md
  environment.systemPackages = with pkgs;
    [ # --- Essentials ---
      git
      wget
      curl

      # --- Desktop & GUI ---
      firefox
      thunderbird
      libreoffice
      flameshot

      # --- Terminal & CLI Tools ---
      kitty
      neovim
      nushell
      btop
      eza
      bat
      fzf

      # --- Development ---
      gcc
      gnumake
      nodejs # Consider pinning a version, e.g., nodejs_20
      rustc
      cargo
      python3
    ];

  # --- SERVICES & VIRTUALIZATION ---

  # Enable the Docker daemon.
  virtualisation.docker.enable = true;

  # Enable sound with PipeWire.
  sound.enable = true;
  hardware.pulseaudio.enable = false;
  security.rtkit.enable = true;
  services.pipewire = {
    enable = true;
    alsa.enable = true;
    alsa.support32Bit = true;
    pulse.enable = true;
    # If you want to use JACK applications, uncomment the following.
    #jack.enable = true;
  };

  # Enable CUPS to print documents.
  services.printing.enable = true;

  # --- DESKTOP ENVIRONMENT ---
  # Enable the GNOME Desktop Environment.
  services.xserver.enable = true;
  services.xserver.displayManager.gdm.enable = true;
  services.xserver.desktopManager.gnome.enable = true;

  # --- DISK MOUNTS ---
  # This is where we will mount your existing data drives.
  # The device paths (e.g., "/dev/disk/by-uuid/...") must be correct.
  # Use the output of `lsblk -f` on the live USB to get the right UUIDs.
  fileSystems."/mnt/ubuntu_storage_3TB" = {
    device = "/dev/disk/by-uuid/037a542c-6aa9-4b1f-ab2f-4b6922ab371f"; # This is sdd1 (ubuntu_storage_3)
    fsType = "ext4";
  };

  fileSystems."/mnt/windows-storage" = { # Renaming `/mnt/storage` to be clearer
    device = "/dev/disk/by-uuid/063E316A3E315441"; # This is sdb2
    fsType = "ntfs-3g";
    options = [ "rw" "uid=1000" "gid=100" "umask=007" ]; # Makes files user-writable
  };

  # Add more entries here for other disks like sdc2 if needed.

  # Allow unfree packages
  nixpkgs.config.allowUnfree = true;

  # System state version.
  system.stateVersion = "24.05"; # Or whatever version you install.

}
0
results/generated_configuration.nix
Normal file → Executable file
0
results/generated_home.nix
Normal file → Executable file
@@ -1,138 +0,0 @@
#!/bin/bash
#
# Phase 1: System Reconnaissance Script
# This script gathers information about the system's hardware, software, and user files.
# It is designed to be non-destructive. All output is logged to a file.

# --- Configuration ---
LOG_FILE="logs/01_system_recon.log"
USER_HOME=$(eval echo ~${SUDO_USER:-$USER})

# --- Helper Functions ---
log() {
    # -e so that "\n" in messages prints as a real newline
    echo -e "$1" | tee -a "$LOG_FILE"
}

log_header() {
    log "\n"
    log "========================================================================"
    log "=== $1"
    log "========================================================================"
}

run_and_log() {
    log "--- Running command: $1 ---"
    eval "$1" 2>&1 | tee -a "$LOG_FILE"
    log "--- Command finished ---"
}

# --- Main Execution ---

# Initialize log file
echo "System Reconnaissance Log - $(date)" > "$LOG_FILE"
echo "----------------------------------------------------" >> "$LOG_FILE"

# 1. Disk and Filesystem Information
log_header "DISK & FILESYSTEM INFORMATION"
run_and_log "lsblk -f"
run_and_log "df -hT"

# 2. Top-level User File Assessment
log_header "USER FILE ASSESSMENT"
log "Analyzing major directories in user home: $USER_HOME"
log "This will show the total size of each main user folder."
run_and_log "du -sh ${USER_HOME}/{Documents,Downloads,Music,Pictures,Videos,Desktop,dotfiles} 2>/dev/null"

# Note for the operator about deeper scans
log "\n"
log "NOTE: A full file listing is a long-running process."
log "The following command can be used for a more detailed scan."
log "It is recommended to run this in the background and review the output later."
log "Example for a deeper scan (creates a separate log file):"
log "# find ${USER_HOME}/Documents -type f > logs/documents_file_list.txt"
log "\n"

# 3. Software Inventory
log_header "SOFTWARE INVENTORY"

# APT Packages
log "--- Checking for APT packages... ---"
if command -v dpkg &> /dev/null; then
    run_and_log "dpkg --get-selections"
else
    log "dpkg command not found. Skipping APT package scan."
fi

# Snap Packages
log "--- Checking for Snap packages... ---"
if command -v snap &> /dev/null; then
    run_and_log "snap list"
else
    log "snap command not found. Skipping Snap package scan."
fi

# Docker Information
log_header "DOCKER INFORMATION"
if command -v docker &> /dev/null; then
    log "--- Docker Version ---"
    run_and_log "docker --version"
    log "--- Docker Info (Configuration and Storage) ---"
    run_and_log "docker info"
    log "--- Docker Containers (Running and Stopped) ---"
    run_and_log "docker ps -a"
    log "--- Docker Images ---"
    run_and_log "docker images"
    log "--- Docker Volumes ---"
    run_and_log "docker volume ls"
else
    log "docker command not found. Skipping Docker scan."
fi

# 4. Development Environment & Servers
log_header "DEV ENVIRONMENTS & SERVERS"

# Common Languages
run_and_log "command -v rustc && rustc --version"
run_and_log "command -v node && node --version"
run_and_log "command -v python3 && python3 --version"
run_and_log "command -v go && go version"
run_and_log "command -v java && java --version"

# Common Servers
log "--- Checking for common server processes... ---"
run_and_log "ps aux | grep -E 'apache2|nginx|httpd|snapcast' | grep -v grep"

log "--- Checking for server config files... ---"
run_and_log "ls -ld /etc/apache2 /etc/nginx /etc/snapserver.conf 2>/dev/null"

# Eclipse and Arduino/ESP-IDF
log "--- Searching for Eclipse Workspaces and Arduino/ESP-IDF projects... ---"
log "This may take a moment..."
# This find command is scoped to the user's home and looks for common markers of these dev environments.
run_and_log "find ${USER_HOME} -maxdepth 4 \( -name '.project' -o -name 'platformio.ini' -o -name '*.ino' \) -print 2>/dev/null"

log_header "RECONNAISSANCE COMPLETE"
log "Log file saved to: $LOG_FILE"
log "Please review the log file to plan the next phase of the migration."
log "Remember to complete and verify your backups before proceeding."
@@ -1,91 +0,0 @@
#!/bin/bash

# --- Configuration ---
LOG_FILE="logs/02_file_migration.log"
DRY_RUN=""
SOURCE_HOME="/home/sam" # This should be the path where your old home is mounted
TARGET_STAGING="/mnt/ubuntu_storage_3TB/migration_staging" # As per the guide

# Check for --dry-run flag
if [ "$1" == "--dry-run" ]; then
    DRY_RUN="--dry-run"
    echo "--- PERFORMING DRY RUN ---" | tee -a "$LOG_FILE"
fi

# Safety check for root user
if [ "$(id -u)" -eq 0 ] && [ "$2" != "--allow-root" ]; then
    echo "Running as root is not recommended. Use --allow-root to override."
    exit 1
fi

# --- Helper Functions ---
log() {
    # -e so that "\n" in messages prints as a real newline
    echo -e "$1" | tee -a "$LOG_FILE"
}

run_rsync() {
    log "------------------------------------------------------------------------"
    log "Syncing $1..."
    # The --info=progress2 flag gives a cleaner total progress indicator.
    # The --exclude='/data' is critical to not re-copy existing data.
    rsync -avh --info=progress2 $DRY_RUN --exclude='/data' "$2" "$3"
    log "Finished syncing $1."
    log "------------------------------------------------------------------------"
}

# --- Main Execution ---
# Initialize log file
echo "File Migration Log - $(date)" > "$LOG_FILE"
echo "----------------------------------------------------" >> "$LOG_FILE"

if [ -n "$DRY_RUN" ]; then
    log "Dry run mode enabled. No files will be changed."
fi

log "Source directory: $SOURCE_HOME"
log "Target staging directory: $TARGET_STAGING"

# Check if source directory exists
if [ ! -d "$SOURCE_HOME" ]; then
    log "ERROR: Source directory $SOURCE_HOME does not exist. Mount your old home directory and try again."
    exit 1
fi

# Create target directory
log "Creating target staging directory (if it doesn't exist)..."
if [ -z "$DRY_RUN" ]; then
    mkdir -p "$TARGET_STAGING"
fi

# --- Migration Commands ---
# These commands will copy your main user folders from your old Ubuntu home
# into the staging area. The structure is kept simple for later organization.
# Note the trailing slash on the source to copy the *contents* of the directory.

run_rsync "Documents" "${SOURCE_HOME}/Documents/" "${TARGET_STAGING}/Documents/"
run_rsync "Pictures" "${SOURCE_HOME}/Pictures/" "${TARGET_STAGING}/Pictures/"
run_rsync "Music" "${SOURCE_HOME}/Music/" "${TARGET_STAGING}/Music/"
run_rsync "Videos" "${SOURCE_HOME}/Videos/" "${TARGET_STAGING}/Videos/"
run_rsync "Desktop" "${SOURCE_HOME}/Desktop/" "${TARGET_STAGING}/Desktop/"
run_rsync "Downloads" "${SOURCE_HOME}/Downloads/" "${TARGET_STAGING}/Downloads/"
run_rsync "Dotfiles" "${SOURCE_HOME}/dotfiles/" "${TARGET_STAGING}/dotfiles/"

# Add any other specific project directories you know of here. For example:
# run_rsync "Arduino Projects" "${SOURCE_HOME}/Arduino/" "${TARGET_STAGING}/Arduino/"

log "\n"
log "--- File migration script finished. ---"
log "Review the output above. If everything looks correct, you can run the script"
log "again without the --dry-run flag to perform the actual file copy."
log "The log has been saved to $LOG_FILE"
@@ -1,146 +0,0 @@
#!/bin/bash

# --- Configuration ---
LOG_FILE="logs/03_find_and_sync_data.log"
DRY_RUN=""
SOURCE_WIN_DRIVE="/media/sam/8294CD2994CD2111"
TARGET_DATA_DIR="/data"

# Check for --dry-run flag
if [ "$1" == "--dry-run" ]; then
    DRY_RUN="--dry-run"
    echo "--- PERFORMING DRY RUN ---" | tee -a "$LOG_FILE"
fi

# Helper function for logging
log() {
    echo "$1" | tee -a "$LOG_FILE"
}

run_rsync_dry_run() {
    local source_path="$1"
    local target_path="$2"
    local descriptive_name="$3"

    log "------------------------------------------------------------------------"
    log "Preparing to sync: $descriptive_name"
    log "Source: $source_path"
    log "Target: $target_path"
    log "------------------------------------------------------------------------"

    # Ensure target directory exists for rsync
    if [ ! -d "$target_path" ]; then
        log "Creating target directory: $target_path"
        if [ -z "$DRY_RUN" ]; then
            mkdir -p "$target_path"
        fi
    fi

    # Use rsync -a (archive mode) for comprehensive copying; $DRY_RUN adds --dry-run.
    # The trailing slash on source_path copies contents, not the directory itself.
    rsync -avh --info=progress2 $DRY_RUN "${source_path}/" "${target_path}/" 2>&1 | tee -a "$LOG_FILE"
    log "Finished pass for $descriptive_name."
}

# Initialize log file
echo "Data Discovery and Sync Log - $(date)" > "$LOG_FILE"
echo "----------------------------------------------------" >> "$LOG_FILE"

if [ -n "$DRY_RUN" ]; then
    log "Dry run mode enabled. No files will be changed."
fi

log "Source Windows Drive: $SOURCE_WIN_DRIVE"
log "Target Data Directory: $TARGET_DATA_DIR"
log ""

# --- Mapping Configuration (Source on Windows Drive -> Target in /data) ---
# Each entry is: "source_path" "target_subdirectory_in_data" "descriptive_name"

# Personal Documents, Pictures, Music, Videos
declare -a PERSONAL_FOLDERS=(
    "Users/sam/Documents" "personal/Documents" "Personal Documents"
    "Users/sam/Pictures"  "personal/Pictures"  "Personal Pictures"
    "Users/sam/Music"     "personal/Music"     "Personal Music"
    "Users/sam/Videos"    "personal/Videos"    "Personal Videos"
    "Users/sam/Downloads" "personal/Downloads" "Personal Downloads"
)

# Web/Work Projects
declare -a WORK_PROJECTS=(
    "xampp/htdocs" "work/htdocs" "XAMPP htdocs projects"
    "frei0r"       "work/frei0r" "Frei0r Projects"
    # Add other common workspace/project folders here if known
    # e.g., "Users/sam/workspace" "work/workspace" "General Workspaces"
)

# IoT Projects
declare -a IOT_PROJECTS=(
    "Arduino" "iot/Arduino" "Arduino Projects" # Assuming there's an Arduino folder
)

# Generic project folders to search for
declare -a GENERIC_PROJECT_NAMES=(
    "Projects"
    "Code"
    "Dev"
)

# --- Execute mappings ---

log "--- Processing Personal Folders ---"
for ((i=0; i<${#PERSONAL_FOLDERS[@]}; i+=3)); do
    SOURCE="${SOURCE_WIN_DRIVE}/${PERSONAL_FOLDERS[i]}"
    TARGET="${TARGET_DATA_DIR}/${PERSONAL_FOLDERS[i+1]}"
    DESC="${PERSONAL_FOLDERS[i+2]}"
    if [ -d "$SOURCE" ]; then
        run_rsync_dry_run "$SOURCE" "$TARGET" "$DESC"
    else
        log "WARNING: Source directory not found: $SOURCE"
    fi
done

log ""
log "--- Processing Work Projects ---"
for ((i=0; i<${#WORK_PROJECTS[@]}; i+=3)); do
    SOURCE="${SOURCE_WIN_DRIVE}/${WORK_PROJECTS[i]}"
    TARGET="${TARGET_DATA_DIR}/${WORK_PROJECTS[i+1]}"
    DESC="${WORK_PROJECTS[i+2]}"
    if [ -d "$SOURCE" ]; then
        run_rsync_dry_run "$SOURCE" "$TARGET" "$DESC"
    else
        log "WARNING: Source directory not found: $SOURCE"
    fi
done

log ""
log "--- Processing IoT Projects ---"
for ((i=0; i<${#IOT_PROJECTS[@]}; i+=3)); do
    SOURCE="${SOURCE_WIN_DRIVE}/${IOT_PROJECTS[i]}"
    TARGET="${TARGET_DATA_DIR}/${IOT_PROJECTS[i+1]}"
    DESC="${IOT_PROJECTS[i+2]}"
    if [ -d "$SOURCE" ]; then
        run_rsync_dry_run "$SOURCE" "$TARGET" "$DESC"
    else
        log "WARNING: Source directory not found: $SOURCE"
    fi
done

log ""
log "--- Searching for Generic Project Folders ---"
# This section tries to find common project-like folders directly under the Windows user profile
# and prompts the user for action. For automation, we'll try to guess.
# For now, we'll just list them to avoid making assumptions without user confirmation.

log "Searching for additional project-like folders under ${SOURCE_WIN_DRIVE}/Users/sam/ and similar paths:"
find "${SOURCE_WIN_DRIVE}/Users/sam" -maxdepth 3 -type d \( -name "Projects" -o -name "Code" -o -name "Dev" -o -name "*workspace*" -o -name "*repos*" \) 2>/dev/null | while read -r found_dir; do
    log "Found potential project directory: $found_dir"
    # In a real interactive session, we'd ask the user where to put this.
    # For now, we just log its existence.
done

log ""
log "--- Data discovery and dry run finished. ---"
log "Please review the log file: $LOG_FILE"
log "If the output looks correct, run this script again without the --dry-run flag to perform the actual copy."
log "Example: scripts/03_find_and_sync_data.sh"
@@ -1,110 +0,0 @@
|
||||
#!/bin/bash
#
# Phase 2: Deep System Reconnaissance for NixOS Migration
# This script gathers detailed information about installed software, services,
# configurations, and development environments on the current Ubuntu system.
# All output is logged to a file for later analysis.

# --- Configuration ---
LOG_FILE="logs/04_nixos_recon.log"
USER_HOME=$(eval echo ~${SUDO_USER:-$USER})
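`USER_HOME=$(eval echo ~${SUDO_USER:-$USER})` resolves the invoking user's home even when the script runs under sudo. A sketch of an eval-free alternative that queries the passwd database instead (assuming a standard Linux NSS setup where `getent` is available):

```shell
#!/usr/bin/env bash
# Resolve the home directory of the sudo-invoking user (or the current user)
# without eval, via the passwd database.
target_user="${SUDO_USER:-${USER:-$(id -un)}}"
user_home="$(getent passwd "$target_user" | cut -d: -f6)"
echo "Home for $target_user: $user_home"
```

Avoiding `eval` here sidesteps any surprises if the username ever contains shell metacharacters.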

# --- Helper Functions ---
log() {
    echo -e "$1" | tee -a "$LOG_FILE"
}

log_header() {
    log "\n"
    log "========================================================================"
    log "=== $1"
    log "========================================================================"
}

run_and_log() {
    log "--- Running command: $1 ---"
    eval "$1" 2>>"$LOG_FILE" | tee -a "$LOG_FILE"
    log "--- Command finished ---\n"
}
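Because `run_and_log` pipes through `tee`, the pipeline's exit status is tee's, not the command's. A sketch of a variant that captures the real status via bash's `PIPESTATUS` (the temp log file here is illustrative; the script logs to `logs/04_nixos_recon.log`):

```shell
#!/usr/bin/env bash
LOG_FILE="$(mktemp)"    # illustrative stand-in for the script's log path

run_and_log() {
    echo "--- Running: $1 ---" | tee -a "$LOG_FILE"
    eval "$1" 2>>"$LOG_FILE" | tee -a "$LOG_FILE"
    local status="${PIPESTATUS[0]}"    # exit code of the eval'd command, not tee's
    echo "--- Finished (exit $status) ---" | tee -a "$LOG_FILE"
    return "$status"
}

run_and_log "true" && echo "command succeeded"
run_and_log "false" || echo "command failed"
```

For a recon script that only gathers information, ignoring exit codes is a defensible choice; the variant above just makes failures visible in the log.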

# --- Main Execution ---

# Initialize log file
echo "NixOS Migration - Deep Reconnaissance Log - $(date)" > "$LOG_FILE"
echo "----------------------------------------------------" >> "$LOG_FILE"
log "User Home Directory: $USER_HOME"

# 1. Software Inventory (APT & Snap)
log_header "SOFTWARE INVENTORY"
if command -v dpkg &> /dev/null; then
    run_and_log "dpkg --get-selections | grep -v deinstall"
else
    log "dpkg command not found. Skipping APT package scan."
fi

if command -v snap &> /dev/null; then
    run_and_log "snap list"
else
    log "snap command not found. Skipping Snap package scan."
fi
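The dpkg dump above is the raw input for NixOS package planning. A sketch of reducing a selections-style dump to bare package names (the sample data is made up; real Debian names still need manual mapping to nixpkgs attributes):

```shell
#!/usr/bin/env bash
# Made-up sample in `dpkg --get-selections` format: name<whitespace>state.
selections='vim install
curl install
old-tool deinstall'

# Keep installed packages only, take the first column, and sort.
pkgs="$(echo "$selections" | grep -v deinstall | awk '{print $1}' | sort)"
echo "$pkgs"
```

The resulting list is a starting point for `environment.systemPackages`; architecture suffixes (e.g. `:amd64`) and Debian-only metapackages still have to be cleaned up by hand.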

# 2. Systemd Services & Timers
log_header "SYSTEMD SERVICES & TIMERS"
log "--- Active System Services ---"
run_and_log "systemctl list-units --type=service --state=running"
log "--- All System Timers ---"
run_and_log "systemctl list-timers --all"

log "\n--- Active User Services (if any) ---"
# Check for user session bus to run user commands
if [ -n "$XDG_RUNTIME_DIR" ]; then
    run_and_log "systemctl --user list-units --type=service --state=running"
    log "--- All User Timers (if any) ---"
    run_and_log "systemctl --user list-timers --all"
else
    log "Could not connect to user session bus. Skipping user services/timers."
fi

# 3. Docker Environment
log_header "DOCKER ENVIRONMENT"
if command -v docker &> /dev/null; then
    run_and_log "docker --version"
    run_and_log "docker info"
    run_and_log "docker ps -a --format 'table {{.Names}}\t{{.Image}}\t{{.Status}}\t{{.Ports}}'"
    run_and_log "docker images --format 'table {{.Repository}}\t{{.Tag}}\t{{.Size}}'"
    run_and_log "docker volume ls"
    log "--- Searching for docker-compose files in home directory ---"
    run_and_log "find \"$USER_HOME\" -name \"*docker-compose.yml\" -o -name \"*compose.yml\" 2>/dev/null"
else
    log "docker command not found. Skipping Docker scan."
fi

# 4. Command-Line Environment & Scripts
log_header "COMMAND-LINE TOOLS & SCRIPTS"
log "--- Top 50 Most Used Commands from History ---"
# This gives an idea of frequently used, un-packaged CLI tools.
# Note: \$1 is escaped so awk (not this script) expands it, and zsh
# extended-history timestamps (": 1234567890:0;cmd") are stripped first.
if [ -f "$USER_HOME/.bash_history" ]; then
    run_and_log "sed 's/sudo //g' \"$USER_HOME/.bash_history\" | awk '{print \$1}' | sort | uniq -c | sort -rn | head -n 50"
elif [ -f "$USER_HOME/.zsh_history" ]; then
    run_and_log "sed 's/^: [0-9]*:[0-9]*;//; s/sudo //g' \"$USER_HOME/.zsh_history\" | awk '{print \$1}' | sort | uniq -c | sort -rn | head -n 50"
else
    log "Shell history file not found."
fi
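The history ranking can be sketched on a fixed sample. Note that inside the double-quoted string handed to `run_and_log`, awk's `$1` must be written `\$1`, or the shell expands it before awk ever sees it:

```shell
#!/usr/bin/env bash
# Made-up history sample.
history_sample='sudo apt update
git status
git commit -m "wip"
ls -la'

# Drop "sudo ", keep the command word, then count and rank.
top_cmds="$(echo "$history_sample" | sed 's/sudo //g' | awk '{print $1}' | sort | uniq -c | sort -rn)"
echo "$top_cmds"
```

On this sample, `git` ranks first with a count of 2.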

log "--- User Cron Jobs (crontab) ---"
run_and_log "crontab -l"

log "--- Manually Installed Scripts & Binaries ---"
log "Searching in /usr/local/bin, ~/bin, and ~/.local/bin..."
run_and_log "ls -lA /usr/local/bin"
if [ -d "$USER_HOME/bin" ]; then
    run_and_log "ls -lA \"$USER_HOME/bin\""
fi
if [ -d "$USER_HOME/.local/bin" ]; then
    run_and_log "ls -lA \"$USER_HOME/.local/bin\""
fi

log_header "RECONNAISSANCE COMPLETE"
log "Log file saved to: $LOG_FILE"
log "This file provides a detailed snapshot of the system's software and configuration."
log "Review it carefully to plan your configuration.nix and home-manager setup."
16
software_to_add.md
Normal file
@@ -0,0 +1,16 @@
Aider - Command Line - The gold standard. It writes code directly to your files with incredible accuracy and works perfectly with OpenRouter.

OpenCode - Terminal UI - For visual learners who want a "dashboard" feel inside their terminal.
Gemini CLI

Goose - Agentic / MCP - The newest "power player." It uses the Model Context Protocol (MCP) to let the AI use your actual computer tools (terminal, browser, memory).

Apprise - for letting people know; messaging/notifications.
Obsidian

Tailscale
RustDesk
Telegram
Thunderbird - is there an alternative?
Flameshot