
Overview


Joular Core is a cross-platform power and energy monitoring tool that runs on Linux, Windows, macOS, Raspberry Pi, and inside virtual machines. It measures CPU and GPU power consumption in real time, and can break that down to individual processes or named applications.

Joular Core is part of the Joular project.

The official website is: https://www.noureddine.org/research/joular/joularcore.

It is written in Rust and compiles to a native binary with minimal overhead. It ships with both a command-line interface and a graphical user interface, and can export data to CSV files, a shared-memory ring buffer, and an HTTP/WebSocket API.

Joular Core is under active development and currently in beta. Expect rough edges and features that are still being worked on and polished.

Key Features

  • Real-time CPU and GPU power monitoring on PCs, servers, and single-board computers
  • Per-process power monitoring: track the energy consumption of a specific PID
  • Per-application power monitoring: track a named application across all its processes
  • Monitor power from inside virtual machines using data from the hypervisor or an external meter
  • Export power data to CSV files (append or overwrite mode)
  • Write power data to a shared-memory ring buffer for low-latency IPC with other programs
  • Expose power data over HTTP (GET /data) and WebSocket (/ws) endpoints
  • Filter output to only show CPU power, GPU power, or both
  • CPU idle baseline calibration: automatically or manually subtract idle CPU power before attributing it to a process or application
  • Command-line interface (CLI) with colored, human-readable output or a numeric-only mode
  • Graphical user interface (GUI) built with egui, with live power graphs and a 60-second rolling history
  • Linux systemd service for running as a background daemon

License

Joular Core is licensed under the GNU General Public License v3.0 only (GPL-3.0-only).

Copyright © 2025–2026, Adel Noureddine.
All rights reserved. This program and the accompanying materials are made available under the terms of the GNU General Public License v3.0 only (GPL-3.0-only).

Author: Adel Noureddine

Supported Platforms

Joular Core runs on all major desktop and server operating systems and a selection of single-board computers. The table below summarises what is supported on each platform and architecture.

Operating Systems and Architectures

CPU

OS / Architecture | x86_64 | i686 | Apple Silicon | arm | armv7 | aarch64
Linux (PC / servers) | ✓ | ✓ | - | - | - | -
Windows | ✓ | ✓ | - | - | - | -
macOS | ✓ | - | ✓ | - | - | -
SBC (Raspberry Pi, Asus) | - | - | - | ✓ | ✓ | ✓
Virtual Machines | ✓ | ✓ | - | ✓ | - | ✓

GPU

OS / Architecture | Nvidia | AMD | Apple GPU
Linux (PC / servers) | ✓ | ✓ | -
Windows | ✓ | ✓ | -
macOS | - | - | ✓
SBC (Raspberry Pi) | - | - | -
Virtual Machines | via host file | via host file | via host file

Platform Details

Platform | OS | Power source | Architectures
Linux PC / Server | Linux | Intel RAPL (sysfs), Nvidia via nvidia-smi, AMD via amd-smi / rocm-smi | x86, x86_64
Windows PC / Server | Windows | Hubblo’s RAPL driver, Nvidia via nvidia-smi, AMD via amd-smi | x86, x86_64
macOS (Intel) | macOS | powermetrics | x86_64
macOS (Apple Silicon) | macOS | powermetrics (CPU + GPU) | aarch64
Raspberry Pi | Linux | Regression power models | arm, armv7, aarch64
Asus Tinker Board S | Linux | Regression power models | arm
Virtual Machine (guest) | Windows, Linux, macOS | Shared file from host power tool | x86, x86_64, arm, aarch64

Supported Single-Board Computers

The SBC build includes built-in power models for the following devices:

Raspberry Pi (all revisions of each model):

  • Zero W (32-bit OS)
  • 1 B, 1 B+ (32-bit OS)
  • 2 B (32-bit OS)
  • 3 B, 3 B+ (32-bit OS)
  • 4 B (32-bit and 64-bit OS)
  • 400 (64-bit OS)
  • 5 B (64-bit OS)

Asus Tinker Board S

CPU Power Monitoring Details

Linux and Windows (x86 / x86_64)
CPU power is read from Intel’s Running Average Power Limit (RAPL) interface. RAPL is supported on Intel processors since Sandy Bridge (2011) and on AMD processors since Ryzen. On Linux, RAPL is exposed via the powercap sysfs interface (/sys/class/powercap/intel-rapl/). On Windows, a kernel driver is required (see Installation).

macOS
CPU (and GPU on Apple Silicon) power is read from Apple’s powermetrics command. This tool ships with macOS and covers both Intel and Apple Silicon hardware. It requires elevated access to read power data.

Raspberry Pi and SBC
Power is calculated from CPU utilization using polynomial regression models that were measured against each supported board at various load levels. No hardware interface or special permissions are needed.
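
The regression approach can be illustrated with a small sketch. The polynomial shape and coefficients below are made-up placeholders for illustration, not the calibrated models Joular Core ships for any board:

```python
def sbc_cpu_power(cpu_usage_percent, coeffs):
    """Evaluate a polynomial regression power model at a given CPU utilisation.

    coeffs[i] is the coefficient of utilisation**i. The values used below are
    illustrative placeholders, not the calibrated models shipped for any board.
    """
    u = cpu_usage_percent / 100.0
    return sum(c * u ** i for i, c in enumerate(coeffs))

# Hypothetical model: about 2.1 W at idle, rising non-linearly with load
EXAMPLE_COEFFS = [2.1, 1.5, 0.8]

idle = sbc_cpu_power(0.0, EXAMPLE_COEFFS)    # about 2.1 W at idle
busy = sbc_cpu_power(100.0, EXAMPLE_COEFFS)  # about 4.4 W at full load
```

Because the model is a pure function of CPU utilisation, no hardware counters or elevated permissions are involved, which is why the SBC build runs without sudo.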

Virtual Machines
Power is read from a shared file written by a monitoring tool running on the host OS. See Virtual Machines for the setup details.

GPU Power Monitoring Details

Nvidia (Linux and Windows)
GPU power is read by calling nvidia-smi --query-gpu=power.draw. Power values from all detected GPUs are summed. If nvidia-smi is not installed or no Nvidia GPUs are found, the GPU power reading is 0 and monitoring continues normally.
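
As a sketch of the summing behaviour described above (not Joular Core's actual implementation), the snippet below parses the kind of per-GPU output nvidia-smi --query-gpu=power.draw --format=csv,noheader produces, e.g. one "35.50 W" line per GPU, and falls back to 0 when no output is available:

```python
def total_nvidia_power(smi_output):
    """Sum per-GPU power readings from nvidia-smi style output.

    Expects one line per detected GPU, e.g. "35.50 W". Returns 0.0 for
    empty or unavailable output, mirroring the documented behaviour when
    nvidia-smi is missing or no GPUs are found.
    """
    if not smi_output:
        return 0.0
    total = 0.0
    for line in smi_output.strip().splitlines():
        value = line.strip().rstrip("W").strip()  # drop the trailing unit
        try:
            total += float(value)
        except ValueError:
            continue  # skip unparsable lines such as "[N/A]"
    return total

# Two GPUs reported:
print(total_nvidia_power("35.50 W\n12.25 W\n"))  # 47.75
```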

AMD (Linux and Windows)
GPU power is read by calling amd-smi or rocm-smi (whichever is available). As with Nvidia, if neither tool is available the GPU reading is 0.

Apple Silicon
GPU power is included in the output from powermetrics alongside CPU power.

SBC
GPU monitoring is not supported on single-board computers. GPU power is always reported as 0.

Installation

Joular Core is distributed as pre-built binaries and installers for the most common platforms. For anything else, you can compile it from source in a few minutes.

Pre-built Packages and Installers

Windows

  • MSI installer for 64-bit Windows (x86_64)

macOS

  • PKG installer for Apple Silicon (aarch64)

Linux

  • DEB packages for Debian-based systems (Debian, Ubuntu, Raspberry Pi OS, etc.) — available for x86_64 and arm64 (aarch64)
  • RPM packages for Red Hat / Fedora-based systems (RHEL, Fedora, etc.) — x86_64
  • AUR package for Arch-based systems — x86_64

For other architectures or distributions (32-bit Windows, Intel macOS, other Linux setups), compile from source. See Compilation.

Binaries

Compiled binaries for all supported targets are attached to each release on the GitHub repository. Download the binary for your platform, place it somewhere on your PATH, and you’re ready to go. You can also run it from any directory using its full path.

There are two binaries per release:

  • joularcore / joularcore.exe — the command-line interface
  • joularcoregui / joularcoregui.exe — the graphical user interface (opens a window directly, no terminal needed)

Platform-specific Requirements

Linux (PC / servers)

CPU power is read via the RAPL sysfs interface. On Linux kernel 5.10 and newer, RAPL files are only readable by root. You have two options:

  • Run Joular Core with sudo
  • Grant read access to the RAPL files for your user (see this GitHub issue for instructions)

GPU monitoring (Nvidia/AMD) requires nvidia-smi or amd-smi / rocm-smi, which must be installed separately.

Windows

CPU power monitoring requires a RAPL kernel driver. The supported driver is Hubblo’s windows-rapl-driver. The simplest way to install a signed version is via the Scaphandre installer, which installs the driver as a side effect.

Once the driver is installed, Joular Core itself runs without administrator rights.

GPU monitoring (Nvidia/AMD) requires nvidia-smi or amd-smi, which must be installed separately.

macOS

No additional software is required. Power data is read via powermetrics, which ships with macOS. Because powermetrics requires elevated access to read hardware counters, Joular Core will ask for elevated access on start (it can also be run with sudo).

Raspberry Pi and SBC

No dependencies and no sudo required. Just download the binary for your architecture (arm, armv7, or aarch64) and run it.

Quick Usage — Command-Line Interface

Starting Joular Core

Run joularcore (Linux / macOS) or joularcore.exe (Windows). The program starts monitoring immediately and updates the terminal display once per second.

Linux (PC/server) — requires elevated access to read RAPL power files:

sudo joularcore

macOS — requires elevated access for powermetrics or run with sudo:

joularcore

Windows:

joularcore.exe

Raspberry Pi and SBC — no sudo needed:

joularcore

Press Ctrl+C to stop.

Default Output

With no arguments, Joular Core displays total power, CPU power, GPU power, and CPU usage — updated every second in place on the same terminal line:

⚡ Total 18.45 W | CPU 15.20 W | GPU 3.25 W | CPU Usage 24.60%

Common Use Cases

Monitor a specific process by PID

joularcore -p 1234

Output includes the power attributed to that process:

⚡ Total 18.45 W | CPU 15.20 W | GPU 3.25 W | CPU Usage 24.60% | PID 1.84 W

Monitor an application by name

joularcore -a firefox

Joular Core finds all processes whose name matches firefox and sums their attributed power:

⚡ Total 18.45 W | CPU 15.20 W | GPU 3.25 W | CPU Usage 24.60% | App 3.12 W (4 PIDs)

Write output to a CSV file

joularcore -f power.csv

A new row is appended every second. See Exporting Power Data for the exact CSV format.

Run silently and only write to CSV

joularcore -s -f power.csv

The -s flag suppresses terminal output while keeping all other outputs (CSV, ring buffer, API) active.

Show only CPU or only GPU power

joularcore -c cpu
joularcore -c gpu

Numeric-only output (for scripting and piping)

joularcore -i

Prints a single float (total power in watts) per line with no labels or formatting. Combined with -c, it prints only the selected component:

joularcore -c cpu -i

Expose power data over HTTP and WebSocket

joularcore --api-port 8080

Starts an HTTP server. GET http://localhost:8080/data returns the latest reading as JSON, and ws://localhost:8080/ws streams a new JSON reading every second.

Expose power data over a shared-memory ring buffer

joularcore -r

Subtract idle CPU baseline for process attribution

# Auto-calibrate: measure idle power over 5 seconds, then start monitoring
joularcore -p 1234 --calibrate-cpu-idle-baseline

# Or supply a known baseline manually
joularcore -p 1234 --cpu-idle-baseline 4.5

Full Options Reference

See the Command Line Options page for a complete list of all flags and their descriptions.

Quick Usage — Graphical User Interface

Starting the GUI

There are two ways to open the GUI:

  • Dedicated binary: launch joularcoregui (Linux/macOS) or joularcoregui.exe (Windows) directly — double-click in a file manager or run from a terminal.
  • CLI flag: add -g or --gui to any joularcore command, for example:
    joularcore --gui
    joularcore -g --api-port 8080
    

On RAPL-based Linux systems, sudo is still required:

sudo joularcoregui
sudo joularcore --gui

On macOS, Joular Core will ask for elevated access so it can run powermetrics.

Interface Overview

The GUI has two screens: Options and Monitor. Switch between them using the buttons at the top of the window.

Options Screen

Before starting a monitoring session, the Options screen lets you configure:

  • Monitor mode: choose between monitoring the whole system, a specific PID, or a named application
  • Application / PID: type a process name or PID number directly; a dropdown lists running processes or applications
  • CSV export: toggle CSV file output on or off, and choose the file path using the native file picker

Once configured, click Start Monitoring to begin.

Monitor Screen

The Monitor screen shows live power readings updated every second:

  • Total power (watts)
  • CPU power (watts)
  • GPU power (watts)
  • CPU usage (%)
  • Process / application power (watts, if a PID or app was selected)

A rolling graph plots each of these values over the last 60 seconds. Click Stop to return to the Options screen.

Compilation

Joular Core is written in Rust and uses Cargo. A stable Rust toolchain is the only requirement.

Default Build

cargo build --release

Produces two binaries in target/release/:

  • joularcore / joularcore.exe — command-line interface
  • joularcoregui / joularcoregui.exe — graphical user interface

The default build includes virtual machine support (vm), the HTTP/WebSocket API (api), and the GUI (gui). SBC support is not included by default.

SBC (Raspberry Pi) Build

cargo build --release --features sbc

Adds single-board computer support on top of the defaults. The sbc feature replaces RAPL-based CPU monitoring with polynomial regression models tuned for each supported SBC. See Supported Platforms for the full list of supported boards.

Feature Selection

Use --no-default-features to start from a minimal build and enable only what you need:

cargo build --release --no-default-features

This produces a CLI-only binary with no VM support, no API, and no GUI — useful for constrained environments where binary size matters.

Available Features

Feature | Default | Description
vm | on | Enables monitoring inside virtual machines. Joular Core reads power from a shared file written by the host.
api | on | Enables the HTTP and WebSocket API server. CSV export and ring buffer output work regardless of this feature.
gui | on | Compiles the GUI binary (joularcoregui).
sbc | off | Enables SBC support. Replaces RAPL-based monitoring with regression models for Raspberry Pi and Asus Tinker Board.

Mix-and-Match Examples

# CLI only — no VM, no API, no GUI
cargo build --release --no-default-features

# CLI with VM support, no GUI or API
cargo build --release --no-default-features --features vm

# CLI with VM and API, no GUI (good for headless servers)
cargo build --release --no-default-features --features vm,api

# SBC with GUI, no VM or API
cargo build --release --no-default-features --features gui,sbc

# SBC with everything
cargo build --release --features sbc

Cross-Compilation

Use cargo-make with the targets defined in Makefile.toml. The targets cover all supported architectures for each OS.

To build for all supported Raspberry Pi architectures (aarch64, arm, armv7) at once:

cargo make build-sbc

With the appropriate Rust cross-compilation targets installed (via rustup target add), you can cross-compile from any host to any supported target.

Release Profile

The release profile in Cargo.toml is configured for maximum optimization and smallest binary size:

  • LTO (link-time optimization) enabled
  • Single codegen unit (full cross-crate optimization)
  • panic = "abort" (no unwinding machinery)
  • Debug symbols stripped

These settings mean release builds can be slow to compile but produce fast, lean binaries.
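
In Cargo.toml terms, a release profile matching those bullet points looks roughly like this (a sketch; the project's actual manifest may differ in detail):

```toml
[profile.release]
lto = true            # link-time optimization across crate boundaries
codegen-units = 1     # single unit: slower compile, fuller optimization
panic = "abort"       # no unwinding machinery in the binary
strip = true          # remove debug symbols
```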

Command-Line Options

Run joularcore --help for the full list of options. This page documents each option in detail.

Synopsis

joularcore [OPTIONS]

Options

Monitoring Target

Option | Description
-p, --pid <PID> | Monitor a specific process by its PID. The output will include the power attributed to that process.
-a, --app <APP> | Monitor an application by name. Joular Core finds all running processes whose name matches <APP> and sums their attributed power.

--pid and --app are mutually exclusive — only one may be used at a time.

Output and Export

Option | Description
-f, --file <FILE> | Write power measurements to a CSV file. A header line is written once, then one row per second is appended.
-o, --overwrite | When used with -f, overwrite the file on each write instead of appending. Only the latest measurement is kept in the file. Useful when another program is polling the file.
-i, --numeric | Print only a bare numeric value (watts, two decimal places) with no labels or ANSI formatting. Useful for piping or scripting. The value printed is total power by default, or the selected component if -c is set.
-s, --silent | Suppress all terminal output. CSV export, ring buffer, and API remain active. Useful when running in the background.
-r, --ringbuffer | Write power data to a shared-memory ring buffer every second. See Exporting Power Data for paths and data layout.
--api-port <PORT> | Start an HTTP and WebSocket API server on the given port. Requires the api feature. See Exporting Power Data for endpoint details.

Component Filter

Option | Description
-c, --component <cpu|gpu> | Restrict output to a single hardware component. Use cpu for CPU-only power, gpu for GPU-only power. When set, only that component’s power is shown on the terminal and written to CSV or numeric output.

CPU Idle Baseline

These options subtract idle CPU power before attributing energy to a process or application. This gives a more accurate picture of how much power the workload itself is consuming, rather than including the base system cost.

Option | Description
--cpu-idle-baseline <WATTS> | Subtract a fixed value (in watts) from CPU power before calculating process or application attribution.
--calibrate-cpu-idle-baseline | Automatically measure idle CPU power. Joular Core collects 5 samples at 1-second intervals before starting the main monitoring loop, averages them, and uses the result as the baseline.

--cpu-idle-baseline and --calibrate-cpu-idle-baseline are mutually exclusive.
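
The baseline logic described above can be sketched as follows. measure_cpu_power is a stand-in for whatever platform reading is available, and the subtraction step is one reading of "subtract idle CPU power before attribution", not the tool's exact code:

```python
import time

def calibrate_idle_baseline(measure_cpu_power, samples=5, interval_s=1.0):
    """Average several idle CPU power samples, as --calibrate-cpu-idle-baseline does."""
    readings = []
    for _ in range(samples):
        readings.append(measure_cpu_power())  # platform-specific reading, stubbed here
        time.sleep(interval_s)
    return sum(readings) / len(readings)

def attributable_cpu_power(cpu_power_w, baseline_w):
    """CPU power left for process/app attribution once the idle cost is removed."""
    return max(cpu_power_w - baseline_w, 0.0)

# With a fixed 4.5 W baseline (as in --cpu-idle-baseline 4.5):
print(attributable_cpu_power(12.5, 4.5))  # 8.0
```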

Application Monitoring Refresh

Option | Default | Description
--app-refresh-interval <SECONDS> | 3 | How often (in seconds) to rescan running processes when monitoring an application by name with -a. Setting this to 0 disables caching and rescans every second. Lower values catch short-lived processes faster at the cost of more frequent process enumeration.

Interface

Option | Description
-g, --gui | Launch the graphical user interface instead of (or alongside) the terminal output.
-h, --help | Print help and exit.
-V, --version | Print the version number and exit.

Environment Variables

These environment variables control behaviour that cannot be set via CLI flags.

Virtual Machines

Variable | Description
VM_CPU_POWER_FILE | Path to a file containing CPU power data written by the host
VM_CPU_POWER_FORMAT | Format of that file: joularcore, powerjoular, or watts (default: watts)
VM_GPU_POWER_FILE | Path to a file containing GPU power data written by the host
VM_GPU_POWER_FORMAT | Format of that file: joularcore, powerjoular, or watts (default: watts)

See Virtual Machines for details.

Single-Board Computers

Variable | Description
SBC_POWER_MODEL_JSON | Path to a JSON file containing a custom SBC power model. If unset, built-in models for supported boards are used. The file format must match the Joular Power Models Database schema.

Constraints and Combinations

  • -p and -a are mutually exclusive.
  • -o only has effect when -f is also set.
  • --cpu-idle-baseline and --calibrate-cpu-idle-baseline are mutually exclusive.
  • --api-port requires that Joular Core was compiled with the api feature (enabled by default).
  • -g / --gui requires the gui feature (enabled by default); alternatively, use the joularcoregui binary.

Examples

# Monitor whole system
joularcore

# Monitor a specific process
joularcore -p 1234

# Monitor application, write to CSV
joularcore -a firefox -f power.csv

# Run silently, write to CSV, overwrite mode
joularcore -s -f /tmp/power.csv -o

# Expose API, suppress terminal output
joularcore -s --api-port 8080

# Write ring buffer + CSV simultaneously
joularcore -r -f power.csv

# CPU-only numeric output (for scripting)
joularcore -c cpu -i

# Monitor process with auto-calibrated idle baseline
joularcore -p 1234 --calibrate-cpu-idle-baseline

# Monitor app, refresh PID list every second
joularcore -a myapp --app-refresh-interval 0

# Launch GUI with API enabled
joularcore -g --api-port 9000

Exporting Power Data

Joular Core can send power measurements to multiple destinations. All export mechanisms work on all supported platforms and operating systems.

Terminal Output (default)

By default, Joular Core writes a live display to the terminal. The line is updated in place every second using ANSI escape codes:

⚡ Total 18.45 W | CPU 15.20 W | GPU 3.25 W | CPU Usage 24.60%

When monitoring a process or application:

⚡ Total 18.45 W | CPU 15.20 W | GPU 3.25 W | CPU Usage 24.60% | PID 1.84 W
⚡ Total 18.45 W | CPU 15.20 W | GPU 3.25 W | CPU Usage 24.60% | App 3.12 W (4 PIDs)

When a component filter (-c cpu or -c gpu) is set, only that component is shown:

CPU 15.20 W
GPU 3.25 W

Use -s / --silent to suppress terminal output while keeping other export channels active.

Numeric-only mode (-i)

Adding -i prints a bare float with no labels or formatting, one value per line:

joularcore -i        # prints total power
joularcore -c cpu -i # prints CPU power only

This mode is useful for stdout redirection or piping into other tools.

CSV Files (-f)

Use -f <FILE> to write measurements to a CSV file. A header line is written once at the start, then one data row is appended every second.

CSV Formats

The columns depend on which options are active:

Default (no process/app/component filter)

Timestamp,Total Power (W),CPU Power (W),GPU Power (W),CPU Usage (%)
1712345678,18.45,15.20,3.25,24.60

Process monitoring (-p)

Timestamp,Total Power (W),CPU Power (W),GPU Power (W),CPU Usage (%),Process Power (W)
1712345678,18.45,15.20,3.25,24.60,1.84

Application monitoring (-a)

Timestamp,Total Power (W),CPU Power (W),GPU Power (W),CPU Usage (%),App Power (W),App PIDs
1712345678,18.45,15.20,3.25,24.60,3.12,4

CPU-only component filter (-c cpu)

Timestamp,CPU Power (W)
1712345678,15.20

GPU-only component filter (-c gpu)

Timestamp,GPU Power (W)
1712345678,3.25

The Timestamp column contains a Unix epoch timestamp in seconds.

Overwrite Mode (-o)

By default, rows are appended to the file. With -o, the file is truncated before each write so it always contains exactly one data row (plus the header). This is useful when another program is polling the file for the latest value.

Shared-Memory Ring Buffer (-r)

Use -r or --ringbuffer to write power data to a shared-memory region. Any process on the same machine can read from this region without file I/O or network overhead.

Ring Buffer Paths

OS | Path
Linux | /dev/shm/joularcorering
macOS | /tmp/joularcorering
Windows | Local\\JoularCoreRing

Ring Buffer Layout

Each entry in the ring buffer holds 5 consecutive f64 (double-precision float) values:

Index | Field | Description
0 | cpu_power | CPU power in watts
1 | gpu_power | GPU power in watts
2 | total_power | Total (CPU + GPU) power in watts
3 | cpu_usage | System CPU usage as a percentage (0–100)
4 | pid_or_app_power | Power attributed to the monitored PID or application in watts; 0.0 if no process/app is selected

The structure is C-compatible (repr(C), packed), so it can be read directly from any language that supports memory-mapped files or shared memory.

Multiple Ring Buffer Entries

The buffer holds 5 slots. The writer advances a head index on each update. Readers should track the head index to detect new data.

HTTP and WebSocket API (--api-port)

When built with the api feature (on by default), Joular Core can expose power data over a local HTTP and WebSocket server. Start it with:

joularcore --api-port 8080

The server binds to 0.0.0.0:<PORT> and has CORS enabled, so it can be reached from browser-based dashboards.

Endpoints

Endpoint | Protocol | Description
/data | HTTP GET | Returns the latest power reading as JSON
/ws | WebSocket | Pushes a new JSON reading every second

JSON Format

Both endpoints use the same JSON schema:

{
  "timestamp": 1712345678,
  "cpu_power": 15.20,
  "gpu_power": 3.25,
  "total_power": 18.45,
  "cpu_usage": 24.60,
  "pid_or_app_power": 1.84
}

Field | Type | Description
timestamp | integer | Unix timestamp in seconds
cpu_power | float | CPU power in watts
gpu_power | float | GPU power in watts
total_power | float | Total power (CPU + GPU) in watts
cpu_usage | float | System CPU usage as a percentage (0–100)
pid_or_app_power | float | Power attributed to the monitored PID or application in watts; 0.0 if none selected

Example: Fetching via curl

curl http://localhost:8080/data

Example: WebSocket with websocat

websocat ws://localhost:8080/ws
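
The same endpoint can be consumed from Python with only the standard library. The function below decodes a reading according to the JSON schema documented above; the port is the example port used on this page:

```python
import json

def parse_reading(payload):
    """Decode one Joular Core API reading using the schema documented above."""
    data = json.loads(payload)
    return data["timestamp"], data["total_power"], data["pid_or_app_power"]

# Works against the sample payload shown on this page:
sample = ('{"timestamp": 1712345678, "cpu_power": 15.20, "gpu_power": 3.25, '
          '"total_power": 18.45, "cpu_usage": 24.60, "pid_or_app_power": 1.84}')
print(parse_reading(sample))  # (1712345678, 18.45, 1.84)

# Live polling sketch (requires a running joularcore --api-port 8080):
# from urllib.request import urlopen
# with urlopen("http://localhost:8080/data") as resp:
#     print(parse_reading(resp.read()))
```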

Combining Export Channels

All export channels can be active at the same time. For example:

# CSV file + ring buffer + API, no terminal output
joularcore -s -f power.csv -r --api-port 8080
# Process monitoring, CSV, and API
joularcore -p 1234 -f power.csv --api-port 8080

Systemd Service (Linux)

Joular Core ships with a ready-to-use systemd unit file located at systemd/joularcore.service. It lets you run Joular Core as a background daemon that starts on boot and restarts automatically if it crashes.

Default Service Configuration

The included unit file looks like this:

[Unit]
Description=Joular Core service

[Service]
Type=simple
Restart=always
User=root
ExecStart=/usr/bin/joularcore -o -f /tmp/joularcore-service.csv

[Install]
WantedBy=multi-user.target

By default it runs Joular Core as root (required for RAPL access on Linux), writes the latest power reading to /tmp/joularcore-service.csv in overwrite mode (-o), and restarts automatically on failure.

Installation

Copy the unit file to the systemd directory and reload the daemon:

sudo cp systemd/joularcore.service /etc/systemd/system/
sudo systemctl daemon-reload

Enable automatic start on boot and then start the service immediately:

sudo systemctl enable joularcore
sudo systemctl start joularcore

Check that it is running:

sudo systemctl status joularcore

Customisation

You can edit the ExecStart line to pass any Joular Core arguments. For example, to also expose an HTTP API and monitor a specific application:

ExecStart=/usr/bin/joularcore -o -f /tmp/joularcore-service.csv --api-port 8080

Or to write to a ring buffer without any file output:

ExecStart=/usr/bin/joularcore -s -r

After editing the file, reload systemd for the change to take effect:

sudo systemctl daemon-reload
sudo systemctl restart joularcore

Reading the Output File

When using the default -o -f /tmp/joularcore-service.csv configuration, the file always contains exactly one data row — the most recent measurement. Any script or tool can poll this file at its own pace:

cat /tmp/joularcore-service.csv

Example output:

Timestamp,Total Power (W),CPU Power (W),GPU Power (W),CPU Usage (%)
1712345678,18.45,15.20,3.25,24.60

Stopping and Disabling

sudo systemctl stop joularcore
sudo systemctl disable joularcore

Integration with Systems and Tools

Joular Core is designed to be embedded in larger workflows. It exposes power data through multiple channels — terminal, CSV files, a shared-memory ring buffer, and an HTTP/WebSocket API — so it can feed into whatever downstream system you are working with.

See Exporting Power Data for the technical details of each channel. This page focuses on integration patterns.

CI/CD Pipelines

Run Joular Core alongside your build or test suite to measure the energy cost of a workload.

Example: measure energy consumed by a test run

# Start Joular Core silently, writing to CSV
joularcore -s -f build_power.csv &
JOULAR_PID=$!

# Run your workload
cargo test

# Stop monitoring
kill $JOULAR_PID

The CSV file will contain one row per second of the test run. Sum the power column and multiply by the sample interval (1 second) to get total energy in joules.

Dashboards and Monitoring Stacks

The HTTP/WebSocket API makes it straightforward to feed power data into any dashboard.

Example: stream to a Prometheus / Grafana stack via a scraper

joularcore --api-port 9001 -s &

A small scraper can poll GET http://localhost:9001/data on each Prometheus scrape interval and expose the fields as gauges.

Example: real-time browser dashboard

Connect a browser-based dashboard directly to the WebSocket endpoint:

const ws = new WebSocket("ws://localhost:8080/ws");
ws.onmessage = (event) => {
  const data = JSON.parse(event.data);
  updateChart(data.timestamp, data.total_power, data.cpu_power, data.gpu_power);
};

IDE and Profiler Integration

Joular Core can run as a background service alongside an IDE or profiler. The ring buffer channel provides the lowest-latency option for IPC:

  1. Start Joular Core with joularcore -r -s
  2. In your IDE plugin or profiler, memory-map the ring buffer path (/dev/shm/joularcorering on Linux, /tmp/joularcorering on macOS, Local\\JoularCoreRing on Windows)
  3. Read the 5 f64 values: CPU power, GPU power, total power, CPU usage, PID/app power

The ring buffer is updated every second and is safe to read from multiple processes simultaneously.
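
The read side of those steps can be sketched in Python with the standard library. Since a live shared region may not exist when you try this, the demo writes a stand-in file with the documented 5-double layout and then maps and decodes it; against a running Joular Core you would map the platform path instead (e.g. /dev/shm/joularcorering on Linux). A real reader would also track the head index to pick the newest slot:

```python
import mmap
import os
import struct
import tempfile

FIELDS = ("cpu_power", "gpu_power", "total_power", "cpu_usage", "pid_or_app_power")
LAYOUT = "<5d"  # five f64 values; "<" assumes a little-endian host

def read_ring_entry(path):
    """Memory-map the ring buffer region and decode the entry at offset 0."""
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            values = struct.unpack_from(LAYOUT, mm, 0)
    return dict(zip(FIELDS, values))

# Stand-in file with one entry, since a live buffer may not be present:
demo = os.path.join(tempfile.mkdtemp(), "joularcorering")
with open(demo, "wb") as f:
    f.write(struct.pack(LAYOUT, 15.20, 3.25, 18.45, 24.60, 1.84))

print(read_ring_entry(demo)["total_power"])  # 18.45
```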

Remote Power Databases and Time-Series Storage

Joular Core does not push to remote databases directly, but the CSV and API outputs make it easy to bridge to any time-series store.

Example: push to InfluxDB with a small shell loop

joularcore -s --api-port 8080 &

while true; do
  DATA=$(curl -s http://localhost:8080/data)
  TS=$(echo "$DATA" | jq '.timestamp')
  TOTAL=$(echo "$DATA" | jq '.total_power')
  curl -s -XPOST "http://influxdb:8086/write?db=energy" \
    --data-binary "power,host=$(hostname) total=${TOTAL} ${TS}000000000"
  sleep 1
done

Running as a Service

On Linux, use the included systemd unit file to have Joular Core run continuously in the background without manual intervention. See Systemd Service.

On Windows and macOS, you can register Joular Core as a service using the OS-native service managers (the Service Control Manager on Windows, or launchd on macOS). The key is to use -s -o -f <path> or --api-port so data is accessible to other programs without a terminal attached.

Virtual Machine Monitoring

For monitoring inside VMs, see the dedicated Virtual Machines page, which covers how to bridge host-level power readings into the guest using the shared-file mechanism.

Virtual Machines

Joular Core works inside virtual machines with the same feature set as on bare metal: system-wide monitoring, per-process monitoring, per-application monitoring, and all export channels.

The challenge specific to virtual machines is that the guest OS has no direct access to RAPL or other hardware power interfaces — those are managed by the hypervisor. Joular Core solves this by reading power data from a file written by a tool running on the host.

How It Works

  1. On the host OS, run a power monitoring tool (Joular Core itself, or any other tool) and have it continuously write the power consumption of the VM process to a file that is shared between the host and the guest.
  2. On the guest OS, set environment variables telling Joular Core where that file is and what format it uses.
  3. Joular Core in the guest reads the file every second and treats the reported value as its total CPU (or GPU) power.
  4. From there, process and application attribution works normally — proportional to CPU utilisation within the guest.

Joular Core is agnostic to what tool runs on the host. Any tool that can write power data in one of the supported formats will work.

Environment Variables

Variable | Description
VM_CPU_POWER_FILE | Path (inside the guest) to the file containing CPU power data
VM_CPU_POWER_FORMAT | Format of that file: joularcore, powerjoular, or watts (default: watts)
VM_GPU_POWER_FILE | Path (inside the guest) to the file containing GPU power data
VM_GPU_POWER_FORMAT | Format of that file: joularcore, powerjoular, or watts (default: watts)

You only need to set the variables for the components you want to monitor. If VM_CPU_POWER_FILE is not set, the VM feature is inactive and Joular Core falls back to whatever hardware interface is available in the guest (which may return 0 if there is none).

Supported File Formats

watts

A plain text file containing a single numeric value in watts, with no header. Updated by the host on every measurement cycle.

18.45

powerjoular

A CSV file with at least three columns. The third column (index 2) contains the power value in watts.

...,18.45,...

joularcore

A CSV file in Joular Core’s standard output format, with a header row. The file should be written in overwrite mode (-o) so it always contains exactly one data row.

Timestamp,Total Power (W),CPU Power (W),GPU Power (W),CPU Usage (%)
1712345678,18.45,15.20,3.25,24.60

When using this format, Joular Core reads the CPU Power (W) column for CPU power and GPU Power (W) for GPU power.

Step-by-Step Example: Joular Core on Both Host and Guest

This example uses Joular Core itself on the host to monitor the VM process, and Joular Core inside the guest to attribute that power to individual processes.

Host Setup

Find the PID of the virtual machine process (for QEMU/KVM this is qemu-system-x86_64, for VirtualBox it is VirtualBoxVM, etc.).

Run Joular Core on the host, monitoring that process and writing to a shared file in overwrite mode:

joularcore -p <VM_PID> -o -f /shared/vm_power.csv -s

The file /shared/vm_power.csv must be accessible from inside the guest — mount the directory using VirtFS, a shared folder, or any other mechanism supported by your hypervisor.

Guest Setup

Inside the guest, set the environment variables so Joular Core knows where to find the power data. Assuming the shared file is mounted at /mnt/host/vm_power.csv inside the guest:

export VM_CPU_POWER_FILE=/mnt/host/vm_power.csv
export VM_CPU_POWER_FORMAT=joularcore

Then run Joular Core as usual. You can monitor any process or application inside the guest:

joularcore -a myapp

Joular Core reads the total VM power from the shared file and distributes it proportionally to processes based on their CPU utilisation within the guest.

Notes

  • The shared file must be on a file system visible to both host and guest. tmpfs (/dev/shm) on Linux is a good choice for low-latency sharing.
  • Write the file in overwrite mode (-o) on the host so Joular Core in the guest always sees the latest value rather than stale data from earlier rows.
  • If the file is temporarily unavailable or empty, Joular Core in the guest treats power as 0 for that sample and continues.
  • The VM_CPU_POWER_FILE and VM_GPU_POWER_FILE paths are interpreted inside the guest. They do not need to match the host path.

How Joular Core Works

This page describes the internal architecture of Joular Core: how it reads hardware power data, how it attributes that power to processes and applications, how it produces output, and how the different subsystems fit together.

High-Level Architecture

At startup, Joular Core:

  1. Parses command-line arguments
  2. Detects the platform and initialises the appropriate power readers and CPU utilisation trackers
  3. Optionally calibrates an idle CPU baseline
  4. Enters a monitoring loop that fires once per second
  5. On each iteration: reads power, reads CPU utilisation, attributes process/app power, and sends the result to all configured output channels
  6. On Ctrl+C: flushes outputs and exits cleanly
┌──────────────────────────────────────────────────────────┐
│                   Argument Parsing                       │
└──────────────────────────┬───────────────────────────────┘
                           │
┌──────────────────────────▼───────────────────────────────┐
│                  Platform Setup                          │
│  Detect OS → create CPU/GPU energy readers               │
│             → create CPU utilisation trackers            │
│             → set up ring buffer (if -r)                 │
│             → set up API server (if --api-port)          │
└──────┬──────────────────────┬──────────────────┬─────────┘
       │                      │                  │
       ▼                      ▼                  ▼
  Monitor Loop          Ring Buffer          API Server
  (1 Hz)                Writer               (async)
       │
       ├── CPU energy reader
       ├── GPU energy reader
       ├── CPU utilisation reader
       ├── Process tracker (optional)
       └── App tracker (optional)
            │
            ▼
       MonitorSample
            │
            ├── Terminal output
            ├── CSV file output
            ├── Ring buffer write
            └── API broadcast

Platform Abstraction

The codebase is built around three traits defined in src/energy.rs:

  • CPUEnergy — returns the current CPU power in watts via get_power()
  • GPUEnergy — returns the current GPU power in watts via get_power()
  • PlatformEnergy — a factory that creates the above readers and the CPU utilisation trackers, and provides the process/app power attribution formula

Each supported OS has a concrete implementation of these traits. The monitoring loop talks only to these trait objects, so it is identical on every platform.
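The shape of this abstraction can be sketched as follows. The trait and method names follow the text above, but the signatures are illustrative; the real definitions in src/energy.rs differ in detail:

```rust
// Illustrative sketch of the trait-based platform abstraction.
#[allow(non_camel_case_types)]
trait CPUEnergy {
    /// Current CPU power in watts.
    fn get_power(&mut self) -> f64;
}

/// A stub reader standing in for a concrete platform implementation
/// (e.g. the Linux RAPL reader or the macOS powermetrics reader).
struct FixedReader(f64);

impl CPUEnergy for FixedReader {
    fn get_power(&mut self) -> f64 {
        self.0
    }
}

/// The monitoring loop only sees trait objects, so the same loop code
/// runs unchanged on every platform.
fn sample(cpu: &mut dyn CPUEnergy) -> f64 {
    cpu.get_power()
}

fn main() {
    let mut reader = FixedReader(15.2);
    println!("{}", sample(&mut reader)); // 15.2
}
```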

Linux

CPU power is read from the Intel RAPL (Running Average Power Limit) sysfs interface at /sys/class/powercap/intel-rapl/. RAPL is supported on Intel processors since Sandy Bridge (2011) and on AMD processors since Ryzen. The interface exposes cumulative energy counters in microjoules; Joular Core reads two consecutive values a fixed time apart and converts the delta to watts.
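The delta-to-watts conversion can be sketched like this, assuming at most one counter wrap between samples (`max_energy_range_uj` is the sysfs attribute of the same name that gives the counter's modulus):

```rust
// Sketch: convert two cumulative RAPL energy readings (microjoules)
// into average power over the sampling interval.
fn rapl_power_watts(prev_uj: u64, curr_uj: u64, interval_s: f64, max_energy_range_uj: u64) -> f64 {
    let delta_uj = if curr_uj >= prev_uj {
        curr_uj - prev_uj
    } else {
        // The cumulative counter wrapped around since the last sample.
        max_energy_range_uj - prev_uj + curr_uj
    };
    (delta_uj as f64 / 1_000_000.0) / interval_s
}

fn main() {
    // 15_200_000 µJ consumed over 1 s → 15.2 W
    println!("{}", rapl_power_watts(1_000_000_000, 1_015_200_000, 1.0, u64::MAX));
}
```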

CPU utilisation is computed from /proc/stat. Joular Core reads the user, nice, system, idle, iowait, irq, softirq, and steal tick counters on each sample, computes the delta from the previous sample, and calculates utilisation as:

utilisation = 1 − (Δidle / Δtotal)

Per-process utilisation is computed from /proc/<pid>/stat, which exposes cumulative user and kernel CPU time (in clock ticks) for each process. Joular Core computes the delta from the previous sample normalised by the total CPU time delta from /proc/stat.
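The system-wide calculation can be sketched as follows, given the eight tick counters from two consecutive reads of the first line of /proc/stat:

```rust
// Sketch of utilisation = 1 − (Δidle / Δtotal), computed from tick deltas.
// `ticks` order: [user, nice, system, idle, iowait, irq, softirq, steal].
fn cpu_utilisation(prev: &[u64; 8], curr: &[u64; 8]) -> f64 {
    let total: u64 = (0..8).map(|i| curr[i] - prev[i]).sum();
    if total == 0 {
        return 0.0; // no ticks elapsed between samples
    }
    let idle = curr[3] - prev[3]; // index 3 = idle ticks
    100.0 * (1.0 - idle as f64 / total as f64)
}

fn main() {
    let prev = [100, 0, 50, 800, 10, 5, 5, 0];
    let curr = [160, 0, 70, 860, 10, 5, 5, 0];
    // Δtotal = 140, Δidle = 60 → 1 − 60/140 ≈ 57.1% utilised
    println!("{:.1}", cpu_utilisation(&prev, &curr));
}
```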

GPU power on Linux is read by:

  • Nvidia: calling nvidia-smi --query-gpu=power.draw --format=csv,noheader,nounits and parsing the output. Power values from all GPUs are summed.
  • AMD: calling amd-smi or rocm-smi with JSON output and extracting the power fields.

Windows

CPU power is read from Hubblo’s RAPL Windows kernel driver via DeviceIoControl. The driver exposes RAPL data through a Windows device interface, returning power in watts directly.

CPU utilisation is read from the Win32 API via GetSystemTimes(), which returns idle, kernel, and user times as FILETIME structures. Utilisation is computed from the deltas between two successive calls.

Per-process CPU time is read via GetProcessTimes(). Application monitoring enumerates processes using the Windows Toolhelp32 API (CreateToolhelp32Snapshot, Process32First, Process32Next).

GPU power on Windows follows the same approach as Linux: nvidia-smi for Nvidia, amd-smi for AMD.

macOS

CPU, GPU, and overall system power are all read from Apple’s powermetrics tool, which ships with macOS. Joular Core spawns powermetrics as a subprocess with JSON output format and parses the result. On Apple Silicon, powermetrics reports CPU and GPU power separately; on Intel Macs it reports CPU power only.

Because powermetrics requires elevated privileges to access hardware counters, Joular Core must be run with elevated access on macOS.

CPU utilisation on macOS is read via the Mach kernel API (host_statistics64 with HOST_CPU_LOAD_INFO), which returns per-CPU user, system, and idle tick counters.

Process and application monitoring on macOS uses Mach task info APIs to read per-process CPU times.

Single-Board Computers (SBC)

SBC platforms (Raspberry Pi, Asus Tinker Board) do not have a hardware power interface accessible to software. Instead, Joular Core calculates CPU power from CPU utilisation using polynomial regression models:

power = c₀ + c₁·u + c₂·u² + … + cₙ·uⁿ

where u is the current CPU utilisation (0–100) and c₀…cₙ are model coefficients measured empirically for each specific board model and revision.
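Evaluating such a model is a one-liner with Horner's method. The coefficients below are illustrative only, not a real board model from the Joular Power Models Database:

```rust
// Sketch: evaluate power = c₀ + c₁·u + c₂·u² + … + cₙ·uⁿ via Horner's method.
fn sbc_power(coeffs: &[f64], utilisation: f64) -> f64 {
    coeffs.iter().rev().fold(0.0, |acc, &c| acc * utilisation + c)
}

fn main() {
    let coeffs = [2.0, 0.05, 0.0003]; // c₀, c₁, c₂ (illustrative values)
    // At 50% utilisation: 2.0 + 0.05·50 + 0.0003·2500 = 5.25 W
    println!("{}", sbc_power(&coeffs, 50.0));
}
```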

The built-in models cover all supported Raspberry Pi models. A custom model can be supplied via the SBC_POWER_MODEL_JSON environment variable; the file format must match the Joular Power Models Database schema.

GPU power is always 0 on SBC platforms.

Virtual Machines

In a VM, there is no direct hardware interface. Joular Core reads power from a shared file written by the host. The file is read on every sampling cycle. If the file is absent or empty, the power value for that sample is 0. The supported file formats are described in Virtual Machines.

The Monitoring Loop

The monitoring loop is in src/monitor.rs (JoularCoreMonitor::poll). It runs once per second and produces a MonitorSample:

pub struct MonitorSample {
    pub timestamp: u64,       // Unix epoch seconds
    pub cpu_power: f64,       // Watts
    pub gpu_power: f64,       // Watts
    pub total_power: f64,     // cpu_power + gpu_power
    pub cpu_usage: f64,       // Percentage (0–100)
    pub process_power: Option<f64>,       // Watts, if -p is set
    pub app_power: Option<(f64, usize)>,  // (Watts, PID count), if -a is set
}

Each field:

  • timestamp: wall-clock time at the moment of reading (SystemTime::now())
  • cpu_power / gpu_power: raw readings from the hardware interface
  • total_power: sum of CPU and GPU power
  • cpu_usage: system-wide CPU utilisation percentage
  • process_power / app_power: attributed power, computed as described below

Before the main loop starts, loop_init() takes one throwaway reading to warm up the energy counters. RAPL counts cumulative energy since boot; the first real delta needs a prior reference value.

Process and Application Power Attribution

Joular Core attributes CPU power to a process (or application) using a proportional model:

process_power = 100 × (process_cpu_utilisation × attributed_cpu_power) / system_cpu_utilisation

where:

  • process_cpu_utilisation is the fraction of total CPU time used by the process (or the sum of all PIDs for an application) in the last second
  • attributed_cpu_power is max(0, cpu_power − idle_baseline) — the raw CPU power minus the idle baseline (zero if no baseline is configured)
  • system_cpu_utilisation is the overall CPU utilisation percentage

If system_cpu_utilisation is zero (the CPU is completely idle), the attributed power is zero to avoid a division by zero.

This model assumes that a process’s share of CPU power is proportional to its share of CPU time. This is an approximation — it does not account for frequency scaling within a core, NUMA topology, or work done in kernel threads on behalf of the process — but it is a practical and widely used approach for software-level power attribution.
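The formula, including the baseline subtraction and the division-by-zero guard, can be sketched as a single function. Inputs follow the conventions in the text: the process fraction is 0–1, system utilisation is in percent:

```rust
// Sketch of the proportional attribution model:
// process_power = 100 × (process_frac × max(0, cpu_power − baseline)) / system_pct
fn process_power(process_frac: f64, cpu_power: f64, idle_baseline: f64, system_pct: f64) -> f64 {
    if system_pct == 0.0 {
        return 0.0; // CPU fully idle: avoid division by zero
    }
    let attributed = (cpu_power - idle_baseline).max(0.0);
    100.0 * process_frac * attributed / system_pct
}

fn main() {
    // A process using 10% of CPU time, with the CPU drawing 15.2 W above
    // a 3 W idle baseline and the system 40% utilised, gets a quarter
    // of the attributable 12.2 W.
    println!("{}", process_power(0.10, 15.2, 3.0, 40.0));
}
```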

CPU Idle Baseline

When the --cpu-idle-baseline or --calibrate-cpu-idle-baseline option is set, the baseline is subtracted from the raw CPU power before attribution:

attributed_cpu_power = max(0, cpu_power − baseline)

This removes the base power the CPU consumes just running the operating system at rest, so that the attributed power more accurately reflects the energy consumed by the workload itself.

Auto-calibration (--calibrate-cpu-idle-baseline) collects 5 power samples at 1-second intervals before starting the main loop and uses their average as the baseline.
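The calibration step amounts to averaging a handful of readings. A minimal sketch (the real code sleeps one second between samples; `read_power` stands in for the actual energy reader):

```rust
// Sketch of auto-calibration: average five power samples taken before
// the main monitoring loop starts.
fn calibrate_baseline(mut read_power: impl FnMut() -> f64) -> f64 {
    let samples: Vec<f64> = (0..5).map(|_| read_power()).collect();
    samples.iter().sum::<f64>() / samples.len() as f64
}

fn main() {
    let mut readings = [4.8, 5.1, 5.0, 4.9, 5.2].into_iter();
    let baseline = calibrate_baseline(|| readings.next().unwrap());
    println!("{}", baseline); // ≈ 5.0
}
```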

Application Monitoring and PID Refresh

When monitoring a named application (-a), Joular Core maintains a list of all PIDs whose process name matches the supplied string. This list is refreshed periodically (every --app-refresh-interval seconds, default 3) to pick up new processes spawned after monitoring started and drop ones that have exited. Setting the interval to 0 rescans every second.

Output Pipeline

After each poll() call, the MonitorSample is sent to an OutputBundle, which dispatches it to all active output channels:

  1. Terminal (OutputWriter in Terminal mode): formats the reading with ANSI colour codes and writes it to stdout, overwriting the previous line using carriage return and ANSI erase-line sequences. In numeric mode (-i), a bare float is printed instead.

  2. CSV file (OutputWriter in CsvFile mode): appends one row to the CSV file. In overwrite mode (-o), the file is truncated to zero before each write so only the latest row is kept.

  3. Ring buffer (RingBufferWriter): writes 5 f64 values to a shared-memory region. On Linux and macOS this is a memory-mapped file (memmap2); on Windows it uses Win32 file mapping (CreateFileMapping / MapViewOfFile).

  4. API (when the api feature is enabled): broadcasts an ApiData struct to all connected HTTP and WebSocket clients via a Tokio broadcast channel. The HTTP handler (GET /data) reads the latest broadcast value; the WebSocket handler (/ws) streams each new broadcast as a JSON message.
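The fixed-layout ring buffer write in step 3 could be sketched as follows. The field order and native-endian packing shown here are assumptions for illustration, not the documented buffer layout:

```rust
// Sketch: pack the five f64 sample values into a fixed 40-byte layout,
// as a shared-memory writer would before copying into the mapped region.
fn pack_sample(values: [f64; 5]) -> [u8; 40] {
    let mut buf = [0u8; 40];
    for (i, v) in values.iter().enumerate() {
        buf[i * 8..(i + 1) * 8].copy_from_slice(&v.to_ne_bytes());
    }
    buf
}

/// The consumer side: decode the same layout back into five f64 values.
fn unpack_sample(buf: &[u8; 40]) -> [f64; 5] {
    let mut out = [0.0; 5];
    for (i, v) in out.iter_mut().enumerate() {
        let mut bytes = [0u8; 8];
        bytes.copy_from_slice(&buf[i * 8..(i + 1) * 8]);
        *v = f64::from_ne_bytes(bytes);
    }
    out
}

fn main() {
    let sample = [1712345678.0, 18.45, 15.20, 3.25, 24.60];
    let buf = pack_sample(sample);
    assert_eq!(unpack_sample(&buf), sample);
}
```

Native-endian encoding is fine here because producer and consumer share the same machine; a format crossing machine boundaries would need a fixed byte order instead.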

API Server

The API server (src/api.rs) is built with the Axum web framework running on a Tokio async runtime. It runs in a separate async task alongside the synchronous monitoring loop. Communication between the two is done via a tokio::sync::broadcast::Sender<ApiData>.

  • GET /data: the handler subscribes to the broadcast channel and immediately returns the last received value as JSON.
  • /ws: the WebSocket handler subscribes to the broadcast channel and pushes each new value as a JSON message to the connected client.
  • CORS is enabled via tower-http’s CorsLayer, so the API can be consumed directly from browser-based dashboards.

Graceful Shutdown

Joular Core installs a Ctrl+C handler via the ctrlc crate. When the signal is received:

  1. A shared Arc<AtomicBool> flag is set to false, causing the monitoring loop to exit after the current iteration.
  2. The terminal cursor (hidden at startup via an ANSI escape sequence) is made visible again.
  3. All buffered output is flushed and file handles are dropped cleanly.

Code Layout

| Path | Purpose |
|------|---------|
| src/main.rs | CLI entry point: argument parsing, monitoring loop, output setup |
| src/maingui.rs | GUI binary entry point |
| src/lib.rs | Library root: re-exports all public modules |
| src/args.rs | Clap-derived argument struct |
| src/common.rs | JoularContext: platform detection and component initialisation |
| src/monitor.rs | JoularCoreMonitor: the sampling loop and MonitorSample |
| src/energy.rs | CPUEnergy, GPUEnergy, PlatformEnergy traits |
| src/cpu.rs | CPUUtilization, ProcessCPUUtilization, AppCPUUtilization traits and Linux implementations |
| src/output.rs | OutputWriter, OutputBundle, OutputSink traits |
| src/ringbuffer.rs | RingBufferWriter: shared-memory ring buffer for Linux/macOS/Windows |
| src/api.rs | Axum-based HTTP and WebSocket API server |
| src/logging.rs | tracing-based structured logging helpers |
| src/platform/linux.rs | Linux-specific power readers and process trackers |
| src/platform/windows.rs | Windows-specific power readers and process trackers |
| src/platform/macos.rs | macOS-specific powermetrics integration and process trackers |
| src/platform/sbc.rs | SBC regression model power calculation |
| src/platform/nvidia.rs | Nvidia GPU power via nvidia-smi |
| src/platform/amdgpu.rs | AMD GPU power via amd-smi / rocm-smi |
| src/vm.rs | Virtual machine shared-file power reader |
| src/gui/ | egui-based GUI: model, views, history, theme |

Key Dependencies

| Crate | Version | Purpose |
|-------|---------|---------|
| clap | 4.x | Command-line argument parsing |
| egui / eframe | 0.34 | Cross-platform GUI framework |
| axum | 0.8 | HTTP and WebSocket API server |
| tokio | 1.x | Async runtime for the API server |
| tower-http | 0.6 | CORS middleware |
| serde / serde_json | 1.x | JSON serialisation for the API |
| sysinfo | 0.38 | System and process information (used for process enumeration) |
| memmap2 | 0.9 | Memory-mapped I/O for the Unix ring buffer |
| windows | 0.62 | Win32 API bindings |
| mach2 | 0.6 | Mach kernel API bindings (macOS) |
| libc | 0.2 | C library bindings (macOS) |
| ctrlc | 3.x | Cross-platform Ctrl+C handler |
| tracing | 0.1 | Structured logging |
| rfd | 0.17 | Native file dialog (GUI file picker) |