šŸ Tjoskar's Blog

I write about stuff, mostly for myself

Use an E‑Ink display to show information about your house



I’m documenting how I set this up mostly for myself so I can look back at what I did, but hopefully you can take some inspiration for your own project. And I know, I could cut the passepartout better. More on that below.

What I’ve used for this project:

  - Raspberry Pi Zero 2 W
  - Waveshare 7.5″ e-Paper HAT (800×480)
  - An IKEA picture frame
  - Two push buttons and an LED

I got this idea pretty soon after we moved into our house and I set up Home Assistant. I loved the data HA gives you, but I disliked needing my phone every time I wanted to glance at energy use or toggle something. I wanted ambient, glanceable info in the kitchen. I also like the feeling of a physical button instead of tapping on a screen. I didn’t want a bright LCD, but rather E‑Ink. At first I was thinking about a color E‑Ink panel, but I couldn’t find one with a fast refresh (everything I looked at took ~40 s to redraw the screen).

When I started this project I first googled around and found many home dashboards that used an E‑Ink display, but all of them rendered a web page: spin up a browser (often headless Chrome), take a screenshot, then pipe that bitmap to the E‑Ink controller. Clever, but heavy. It looks like this:

  1. Event: a light turns on
  2. Headless Chrome process spins up
  3. Chrome loads a page
  4. Screenshot captured
  5. Script forwards image to controller

The latency can be up to a minute due to startup overhead, and the error handling becomes complex.

I opted to render directly with Pillow (PIL) on the Pi Zero 2 W. It takes ~100 ms to compose the image, and then the display’s own refresh time dominates (~1 s). Simpler, faster, fewer components to babysit.
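
Stripped down, the direct-render path is just: compose a 1-bit Pillow image in memory and hand it to the display driver. A sketch (the dashboard content here is made up, and `epd` stands in for the Waveshare driver object described later):

```python
from PIL import Image, ImageDraw, ImageFont

def render_dashboard(width=800, height=480):
    """Compose the whole dashboard as a 1-bit image, entirely in memory."""
    image = Image.new("1", (width, height), 255)  # 255 = white background
    draw = ImageDraw.Draw(image)
    font = ImageFont.load_default()
    draw.text((20, 20), "Energy: 2.4 kWh", font=font, fill=0)  # 0 = black
    draw.line((0, 60, width, 60), fill=0)  # section divider
    return image

# The finished image goes straight to the display driver,
# e.g. epd.display(epd.getbuffer(render_dashboard())).
image = render_dashboard()
```

No browser, no screenshot step: an event arrives, the image is re-composed, and the only real wait is the panel's own refresh.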

I briefly considered a Pi Pico, but 264 KB RAM felt tight for image buffers and font rendering. So: bigger sibling it is.

Pi Zero setup

Using the Raspberry Pi Imager app I flashed Raspberry Pi OS (32‑bit) and entered Wi‑Fi credentials so I could SSH in immediately after first boot.

Important note! I first took the “recommended” image and only later realized it included a full GUI desktop. Also: 32‑bit is recommended over 64‑bit (see https://github.com/raspberrypi/linux/issues/4876 or various forum threads). I spent several hours chasing unstable SSH before plugging in a monitor and discovering a window manager happily eating resources in the background 🤦

After installation you should be able to SSH into it (find the IP via your router UI or nmap).

Run sudo apt update && sudo apt upgrade -y to pull the latest packages.

Hardware setup

I’m working with the Waveshare 7.5″ e-Paper HAT, an 800×480 resolution E‑Ink display that comes with a HAT for the Raspberry Pi.

Display Config Switch: Set to A (3Ω)

The HAT includes a small DIP switch labeled “Display config”, which toggles between 3Ω (A) and 0.47Ω (B) resistance settings. This setting adjusts the driving voltage for the E-Ink panel, and must match the specific model of display you’re using.

In my case, the back of the display is labeled V2, which according to Waveshare’s official documentation means:

Set “Display config” to A (3Ω)

Setting it incorrectly may result in poor image quality or a non-working display, but it’s unlikely to permanently damage anything.

Interface Config Switch: Set to 0 (4-line SPI)

Another switch is labeled “Interface config”, which selects between 0 (4-line SPI) and 1 (3-line SPI).

For Raspberry Pi setups, 4-line SPI is the standard and recommended setting:

Set “Interface config” to 0

This configuration works out-of-the-box with Waveshare’s official Raspberry Pi libraries. If you accidentally set it to 1 (3-line SPI), the display likely won’t function at all unless your software explicitly supports that mode.

Connecting the FPC Cable Correctly

The display connects to the HAT using a flat flexible cable (FPC/FFC). FPC stands for Flexible Printed Circuit, and it consists of fine copper traces laminated in plastic, commonly used for displays and cameras.

Insert the cable with the exposed copper traces facing down, toward the PCB.

I first inserted it upside‑down, and of course nothing showed up; I just got some weird error messages that were hard to trace back to the cable.

Connect the Pi to the display

e-Paper   BCM         Pi Zero Pin
VCC       VCC         1
GND       GND         6
DIN       10 (MOSI)   19
CLK       11 (SCLK)   23
CS        8 (CE0)     24
DC        25          22
RST       17          11
BUSY      24          18
PWR       18          12

Software update

Open a terminal on the Raspberry Pi over SSH and launch the configuration tool:

# Choose Interfacing Options -> SPI -> Yes to enable the SPI interface
sudo raspi-config

Then reboot your Raspberry Pi:

sudo reboot

Check that dtparam=spi=on was written to /boot/firmware/config.txt

grep dtparam=spi=on /boot/firmware/config.txt

To make sure SPI is not occupied by another driver, run ls /dev/spi*. If the terminal outputs /dev/spidev0.0 and /dev/spidev0.1, SPI is free.

I have no idea what to do if SPI is occupied

Create a hello world project

First, install some system packages and clone waveshare/e-Paper to get drivers for the display.

sudo apt update
sudo apt install git python3-pip python3-pil python3-numpy python3-spidev

mkdir ~/hello-world
cd ~/hello-world

git clone https://github.com/waveshare/e-Paper.git # this can take some time
mv e-Paper/RaspberryPi_JetsonNano/python/lib ./
touch lib/__init__.py
touch lib/waveshare_epd/__init__.py

Then create a file called hello-world.py with the following content (for example with vim: vi hello-world.py).

The file contains a few debug logs to see how far execution gets.

from lib.waveshare_epd.epd7in5_V2 import EPD
from PIL import Image, ImageDraw, ImageFont

print(1)
epd = EPD()
print(2)
epd.init()
print(3)
epd.Clear()
print(4)

image = Image.new('1', (epd.width, epd.height), 255)
draw = ImageDraw.Draw(image)
font = ImageFont.load_default()
draw.text((100, 100), "Hello World", font=font, fill=0)

print(5)

epd.display(epd.getbuffer(image))

print(6)

epd.sleep()

print(7)

Now, try it out:

python3 hello-world.py

Set up the frame

I used a picture frame from IKEA and cut the mat a bit so it would fit. Unfortunately I was a bit eager and cut it freehand, so it ended up a little crooked. I’m planning to 3D print a custom passepartout based on this model.

Then I drilled three holes: two for the buttons and one for an LED. I soldered everything together and then connected it all.

BCM   Pi Zero Pin   Comment
21    40            Button 1
20    38            Button 2
23                  LED





Set up the application

https://github.com/tjoskar/eink-control-panel

The Python code for this project lives here. It’s very specific to my use cases (tightly coupled to my electricity provider). You’re welcome to open PRs, but forking and tailoring might be faster. If you do something cool, let me know!

Much of the code was AI‑generated. I set up the foundation, refined the edges, and let the middle be delightfully machine‑written. Some parts are a bit messy but it works.

I use an MQTT client to receive and send events to Home Assistant. When a device I care about (engine heater, bike charger, washing machine, dryer) turns on or off, an event is published that the program listens for. On receipt I do a “fast render” (~1 s refresh including display time). I tried partial updates targeting ~0.4 s, but the panel got smeary/ghosty. Probably I’m missing something, but ~1 s is fine for me.
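
A sketch of that listener, assuming paho-mqtt; the topic layout, hostname, and device names are my placeholders here, not Home Assistant's actual scheme:

```python
# Devices whose state changes should trigger a re-render of the display.
WATCHED_DEVICES = {"engine_heater", "bike_charger", "washing_machine", "dryer"}

def should_rerender(topic):
    """Only re-render for state changes of devices shown on the display."""
    parts = topic.split("/")  # e.g. "home/state/engine_heater"
    return len(parts) == 3 and parts[2] in WATCHED_DEVICES

def run(render):
    """Connect and re-render on every relevant event (not invoked here)."""
    import paho.mqtt.client as mqtt
    client = mqtt.Client()
    client.on_message = lambda c, u, msg: should_rerender(msg.topic) and render()
    client.connect("homeassistant.local")  # broker hostname is an assumption
    client.subscribe("home/state/#")
    client.loop_forever()
```

Filtering on the topic keeps the display from refreshing on every event the broker sees, which matters when a full refresh costs ~1 s.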

One button sends an MQTT event to HA to start the engine heater. That triggers a small dialog (“Starting the engine heater”) which auto‑dismisses after 5 s. When the heater is running (regardless of how it was started) the LED lights up. From across the room you can instantly tell. In summer I may repurpose it to show if the bike charger is active, or invent a new excuse for a tiny glowing light.

I fetch price and consumption data directly from my electricity provider (I love that they have a public GraphQL API for this).

Weather data comes from https://api.openweathermap.org
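
For reference, a request against OpenWeatherMap's current-weather endpoint is just a URL with coordinates and an API key; a tiny helper might look like this (the coordinates and key in the usage note are placeholders):

```python
from urllib.parse import urlencode

def weather_url(lat, lon, api_key):
    """Build the current-weather request URL (parameters per OpenWeatherMap's docs)."""
    query = urlencode({"lat": lat, "lon": lon, "appid": api_key, "units": "metric"})
    return f"https://api.openweathermap.org/data/2.5/weather?{query}"
```

Fetching it is then a single GET, e.g. `urllib.request.urlopen(weather_url(59.3, 18.1, "YOUR_KEY"))`, and the JSON response has temperature and conditions ready to draw.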

I also show upcoming garbage collection dates (hardcoded dates).
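
The "next pickup" logic is just a scan over that hardcoded list; roughly this (the dates here are illustrative, not my actual schedule):

```python
from datetime import date

# Hardcoded pickup schedule (illustrative dates).
PICKUP_DATES = [date(2025, 1, 7), date(2025, 1, 21), date(2025, 2, 4)]

def next_pickup(today):
    """First scheduled date on or after today, or None when the list runs out."""
    return next((d for d in PICKUP_DATES if d >= today), None)
```

When the list runs out I just have to remember to paste in the next year's dates. Not elegant, but the schedule only changes once a year.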

We plan meals in the iPhone Notes app. I eventually built a Shortcut that POSTs the list to a tiny Deno Deploy service backed by KV storage. The script is intentionally small: parse, clean, store. First iteration below (since then I’ve polished it a tad):

const kv = await Deno.openKv();

Deno.serve(async (req) => {
  if (req.method === "GET") {
    const entry = await kv.get(["menu"]);
    return new Response(JSON.stringify(entry.value || []), {
      headers: {
        "Content-Type": "application/json",
      },
    });
  }
  const body = await req.json();
  const content = (body.content as string).replace("Matsedel\n\n", "");
  const firstSection = content.split("\n\n")[0];
  const lines = firstSection.split("\n");
  const list = lines
    .values()
    .filter((line) => !line.includes("- [x] "))
    .map((line) => line.replaceAll("- [ ] ", "").trim())
    .take(10)
    .toArray();

  const result = await kv.set(["menu"], list);

  return new Response(result.ok ? "OK" : "Error");
});



What I did spend time on was making it possible to extract the data. The only workable way I found was to create a Shortcut that I run manually. It took a lot of trial and error to get it working, and I don’t have an easy way to share it without sharing my credentials, so ping me if you get stuck and I might be able to help.

Alternatives (a different notes app with an API) would mean convincing my partner to switch and that’s a harder engineering problem. I’ll probably try that at some point.



If you have any questions or comments, feel free to reach out: hello@tjoskar.dev
