May 23, 2024

Introducing the GNOME Foundation’s Five-Year Strategic Plan Draft

We are thrilled to share the GNOME Foundation’s Draft Five-Year Strategic Plan, a roadmap that sets the stage for our collective journey towards a brighter, more sustainable future. This comprehensive plan encompasses the goals, priorities, and strategies aimed at propelling the GNOME ecosystem forward in an exciting new direction. The draft was created over a six-month period through a process that involved research, individual interviews, and group discussions with staff, the Board, and members. It has been reviewed by the Board and is now ready to share with the greater community.

We invite all members of our community to review this strategic plan, which outlines our vision for the next five years. Your insights, perspectives, and expertise are crucial as we move forward together. Your feedback will play a pivotal role in shaping the future of GNOME, ensuring that our work continues to empower our users worldwide and drive open-source innovation.

Please take the time to review the strategic plan and share your thoughts with us. Whether you’re a developer, designer, user, or advocate, your voice matters. Your input will be collected ahead of GUADEC, where we will provide additional opportunities to ask questions, give feedback, and offer ideas. Together we can create a stronger open source ecosystem that meets the diverse needs of our global community.

2024-05-23 Thursday

  • Mail chew, read profiles, technical planning call. COOL community meeting and encouraging testing call. E. home on the bus.
  • Poked at profiles, and dug away at some nonsense tile description re-parsing to further accelerate 24.04; really fun to see the ever changing and improving flame-graphs go past.
  • We released Collabora Online 24.04 - ready for Enterprises to start upgrading - with lots of lovely new features and improvements both from Collabora, and the wider LibreOffice technology community.
    COOL 24.04 sample picture
    Really pleased to have a great marketing team that know about social platforms so I can stick with emacs + a python RSS generator: Mastodon, LinkedIn, X, Facebook and Instagram.

Crosswords 0.3.13: Side Quests

It’s time for another Crosswords release.

I’ll keep this update short and sweet. I had grand plans last cycle to work on the word data and I did work a little on it — just not in the way I intended. Instead, a number of new contributors showed up which sent me in a different direction. I’m always happy to get new contributors and wanted to make sure they had a good experience. It ended up being a fun set of side quests before returning to the main set of features in the editor.

Cursor behavior

New contributor Adam filed a series of absolutely fantastic bug reports about the cursor behavior in the game. Adam fixed a couple bugs himself, and then pointed out that undo/redo behavior is uniquely weird with crosswords. Unlike text boxes, cursors have a natural direction associated with them that matters for the flow of editing.

In a nutshell, when you undo a word you want the cursor to be restored to the same position and orientation it had at the start of the guess. On the other hand, when redoing a guess, you want the cursor to advance normally, which might leave it in a different place or orientation. It’s subtle, and it’s the kind of user touch you would normally never notice; without the fix, editing just feels “clunky”. With all these changes, the cursor behavior feels a lot more natural.

Can you spot the difference?
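To make the asymmetry concrete, here is a minimal sketch in Python (with hypothetical names; Crosswords itself is written in C and does not use this code) of an undo stack whose entries remember the cursor state at the start of each guess:

from dataclasses import dataclass

@dataclass
class GuessEntry:
    """One undoable guess, remembering where the cursor started."""
    cell: tuple          # (row, column) of the square that changed
    orientation: str     # "across" or "down" at the start of the guess
    old_char: str
    new_char: str

class PuzzleGrid:
    def __init__(self):
        self.cells = {}
        self.cursor = ((0, 0), "across")
        self.undo_stack = []
        self.redo_stack = []

    def guess(self, char):
        cell, orientation = self.cursor
        entry = GuessEntry(cell, orientation, self.cells.get(cell, ""), char)
        self.cells[cell] = char
        self.undo_stack.append(entry)
        self.redo_stack.clear()
        self._advance()

    def _advance(self):
        # Move one square in the direction of the current orientation.
        (row, col), orientation = self.cursor
        step = (row, col + 1) if orientation == "across" else (row + 1, col)
        self.cursor = (step, orientation)

    def undo(self):
        entry = self.undo_stack.pop()
        self.redo_stack.append(entry)
        self.cells[entry.cell] = entry.old_char
        # Undo restores the cursor exactly where the guess began,
        # including its orientation.
        self.cursor = (entry.cell, entry.orientation)

    def redo(self):
        entry = self.redo_stack.pop()
        self.undo_stack.append(entry)
        self.cells[entry.cell] = entry.new_char
        # Redo re-applies the guess and then advances normally, which
        # may leave the cursor in a different place or orientation.
        self.cursor = (entry.cell, entry.orientation)
        self._advance()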

Selections and Intersections

Another side quest was to change the Autofill dialog to operate in-place. I foolishly thought that this would be a relatively quick change, but it ended up being a lot more work than expected. I’ll spare you the details, but along the way I also had to add three more features as dependencies.

First, I’ve wanted a way to leave pencil markings for a long time. These are transient markings that show possibilities for a square without committing to an answer. We use them to show the results of the in-place autofill operation.

Autofilling a section of the in-place selection. Potential grids are written in pencil.

Second, I fixed an old bug that I’ve wanted to fix for a long time. Previously, the word list showed all possible words for each direction independently. Now it only shows words that work in both directions. As an example, in the grid below we don’t show “ACID — (80)” in the Down list, as that final “D” would mean the Across clue would have “WD” as its prefix.

The acid test. WD-40 isn’t in our dictionary

This required writing code to efficiently calculate the intersection of two lists. It sounds easy enough, but the key word here is “efficient”. While I’m sure the implementation could be improved, it’s fast enough for now to be used synchronously.
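As a rough illustration of the technique (a Python sketch under my own assumptions, not the actual Crosswords code, which is written in C): if both candidate lists are kept sorted, their intersection falls out of a single linear merge-style pass, which is what makes a synchronous call affordable:

def intersect_sorted(a, b):
    """Linear-time intersection of two sorted word lists."""
    out = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i])
            i += 1
            j += 1
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1
    return out

# Example with made-up lists: only words present in both survive.
print(intersect_sorted(["ACED", "ACES", "ACID"], ["ACES", "ACID", "AXLE"]))
# ['ACES', 'ACID']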

Finally, I was able to use the intersection function to optimize the autofill algorithm itself. It’s significantly faster and more correct than the previous implementation, which means that the resulting boards will be more usable. It still can’t do a full 15×15 grid in a reasonable time, but it can solve about 1/3 of a grid.
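To give a flavour of why an intersection primitive helps autofill (again a hypothetical Python sketch, not the real algorithm): at a crossing square you can intersect the letters permitted by the Across candidates with those permitted by the Down candidates, and discard every word that needs a letter outside the common set:

def allowed_letters(words, pos):
    """Letters that the candidate words permit at index pos."""
    return {w[pos] for w in words}

def prune(words, pos, letters):
    """Keep only words whose letter at pos is still allowed."""
    return [w for w in words if w[pos] in letters]

across = ["ACID", "ACES", "AIDE"]   # 4th letter is the crossing square
down = ["DOSE", "TOGA", "EDGE"]     # 1st letter is the crossing square
common = allowed_letters(across, 3) & allowed_letters(down, 0)
print(prune(across, 3, common))  # ['ACID', 'AIDE']
print(prune(down, 0, common))    # ['DOSE', 'EDGE']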

Other

  • Federico and I are working with Pranjal as a GSoC student for the summer. He’s going to work on porting libipuz to Rust, and we spent a good amount of time planning the approach for that as well as prepping the library.
  • Tanmay has continued to work on the acrostic generator as part of last summer’s GSoC project. I’m so proud of his continued efforts in this space. Check out his recent post!
  • Gwyneth showed up with support for UTF-8 embedding in puz files as well as support for loading .xd crossword files.
  • I updated our use of libadwaita widgets to the latest release, and enabled style settings per-cell in the editor.

Until next time!

Introducing the WebKit Container SDK

Developing WebKitGTK and WPE has always had challenges, such as the number of dependencies and a fairly complex C++ codebase which not all compiler versions handle well. To help with this we’ve made a new SDK to make development easier.

Current Solutions

There have always been multiple ways to build WebKit and its dependencies on your host, but this was never a great developer experience. Only very specific hosts could be “supported”, you often had to build a large number of dependencies, and the end result wasn’t very reproducible for others.

The current solution used by default is a Flatpak-based one. This was a big improvement for ease of use and excellent for reproducibility, but it introduced many challenges for development work. As it has a strict sandbox and provides read-only runtimes, it was difficult to use complex tooling/IDEs or develop third-party libraries in it.

The new SDK tries to take a middle ground between those two alternatives, isolating itself from the host to be somewhat reproducible, yet being a mutable environment flexible enough for a wide range of tools and workflows.

The WebKit Container SDK

At the core it is an Ubuntu OCI image with all of the dependencies and tooling needed to work on WebKit. On top of this we added some scripts to run and manage these containers with podman and to aid in developing inside of the container. Its intention is to be as simple as possible and not change traditional development workflows.

You can find the SDK and follow the quickstart guide on our GitHub: https://github.com/Igalia/webkit-container-sdk

The main requirement is that this only works on Linux with podman 4.0+ installed, for example Ubuntu 23.10+.

In the most simple case, once you clone https://github.com/Igalia/webkit-container-sdk.git, using the SDK can be a few commands:

source /your/path/to/webkit-container-sdk/register-sdk-on-host.sh
wkdev-create --create-home
wkdev-enter

From there you can use WebKit’s build scripts (./Tools/Scripts/build-webkit --gtk) or CMake. As mentioned before, it is an Ubuntu installation, so you can easily install your favorite tools directly, like VSCode. We even provide a wkdev-setup-vscode script to automate that.

Advanced Usage

Disposability

A workflow that some developers may not be familiar with is making use of entirely disposable development environments. Since these are isolated containers, you can easily make two. This lets you do work in parallel that would otherwise interfere, without worrying about it, and makes it easy to get back to a known-good state:

wkdev-create --name=playground1
wkdev-create --name=playground2

podman rm playground1 # You would stop first if running.
wkdev-enter --name=playground2

Working on Dependencies

An important part of WebKit development is working on the dependencies of WebKit rather than on WebKit itself, either for debugging or for new features. This can be difficult or error-prone with previous solutions. To make this easier we use a project called JHBuild, which isn’t new but works well with containers and is a simple solution for working on our core dependencies.

Here is an example workflow working on GLib:

wkdev-create --name=glib
wkdev-enter --name=glib

# This will clone glib main, build, and install it for us. 
jhbuild build glib

# At this point you could simply test if a bug was fixed in a different version of glib.
# We can also modify and debug glib directly. All of the projects are cloned into ~/checkout.
cd ~/checkout/glib

# Modify the source however you wish then install your new version.
jhbuild make

Remember that containers are isolated from each other, so you can even have two terminals open with different builds of glib. This can also be used to test projects like Epiphany against your build of WebKit if you install it into the JHBUILD_PREFIX.

To Be Continued

In the next blog post I’ll document how to use VSCode inside of the SDK for debugging and development.

May 22, 2024

2024-05-22 Wednesday

  • Up super-early as J. took H. to the airport; worked through task backlog; dropped E. to school, helped by Mitch. Partner call, picked up E. after her exam.
  • Worked on release roadmap planning with Andras, Anna & Pedro. All Hands call, poked at marketing, weekly sales call, sync with Lily, contract bits for new staff.
  • All Saints band practice in the evening.

growing a bootie

Following on last week’s egregious discussion of the Hoot Scheme-to-WebAssembly compiler bootie, today I would like to examine another axis of boot, which is a kind of rebased branch of history: not the hack as it happened, but the logic inside the hack, the structure of the built thing, the history as it might have been. Instead of describing the layers of shims and props that we used while discovering what we were building, let’s look at how we would build Hoot again, if we had to.

I think many readers of this blog will have seen Growing a Language, a talk / performance art piece in which Guy L. Steele (I once mentioned to him that Guy L. was one of the back-justifications for the name Guile; he did not take it well) takes the set of monosyllabic words as primitives and builds up a tower of terms on top, bootstrapping a language as he goes. I just watched it again and I think it holds up, probably well enough to forgive the superfluous presence of the gender binary in the intro; ideas were different in the 1900s.

It is in the sense of that talk that I would like to look at growing a Hoot: how Hoot defines nouns and verbs in terms of smaller, more primitive terms: terms in terms of terms.

[Inline SVG: a dependency graph of the modules shipped with Hoot. (hoot primitives) sits at the root; above it are the other (hoot ...) modules, the (scheme ...) R7RS libraries, (srfi srfi-9), (ice-9 match), and at the top (guile) and the (fibers) libraries.]

If you are reading this on the web, you should see above a graph of dependencies among the 50 or so libraries that are shipped as part of Hoot. (Somehow I doubt that a feed reader will plumb through the inline SVG, but who knows.) It’s a bit of a mess, but still I think it’s a useful illustration of a number of properties of how the Hoot language is grown from small to large. Click on any box to visit the source code for that module.

the root of the boot

Firstly, let us note that the graph is not a forest: it is a single tree. There is no module that does not depend (possibly indirectly) on (hoot primitives). This is because there are no capabilities that Hoot libraries can access without importing them, and the only way into the Hootosphere from outside is via the definitions in the primitives module.

So what are these definitions, you might ask? Well, these are the “well-known” bindings, for example + for which the compiler might have some special understanding, the sort of binding that gets translated to a primitive operation at the compiler IR level. They are used in careful ways by the modules that use (hoot primitives) to ensure that their uses are all open-coded by the compiler. (“Open coding” is inlining. But inlining to me implies that the whole implementation is inlined, with no slow-path callouts, whereas open coding implies to me that it’s the compiler that knows what the op does and may or may not inline the actual asm.)

But, (hoot primitives) also exposes some other definitions, for example define and let and lambda and all that. Scheme doesn’t have keywords in the sense that Python has def and with and such: there is no privileged way to associate a name with its meaning. It is in this sense that it is impossible to avoid (hoot primitives): the most simple (define x 42) depends on the lexical meaning of define, which is provided by the primitives module.

Syntax definitions are an expander construct; they are not present at run-time. Using a syntax definition causes the expander to invoke code, and the expander runs on the host system, which is Guile and not WebAssembly. So, syntax definitions belong to the host. This goes also for some first-order definitions such as syntax->datum and so on, which are only used in syntax expanders; these definitions are plumbed through (hoot primitives), but can only ever be used by macro definitions, which run on the meta-level.

(Is this too heavy? Allow me to lighten the mood: when I was 22 or so and working in Namibia, I somehow got an advance copy of Notes from the Metalevel. I was working on algorithmic music synthesis, and my chief strategy was knocking hubris together with itself, as one does. I sent the author a bunch of uninvited corrections to his book. I think it was completely unwelcome! Anyway, moral of the story, at 22 you get a free pass to do whatever you want, and come to think of it, now that I am 44 I think I should get some kind of hubris loyalty award or something.)

powerful primitives

So, there are expand-time primitives and run-time primitives. The expander knows about expand-time primitives and the compiler knows about run-time primitives. One particularly powerful primitive is %inline-wasm, which takes an inline snippet of WebAssembly as an s-expression and applies it to a number of arguments passed at run-time. Consider make-bytevector:

(define* (make-bytevector len #:optional (init 0))
  (%inline-wasm
   '(func (param $len i32) (param $init i32)
      (result (ref eq))
      (struct.new
       $mutable-bytevector
       (i32.const 0)
       (array.new $raw-bytevector
                  (local.get $init)
                  (local.get $len))))
   len init))

We have an inline snippet of wasm that makes a $mutable-bytevector. It passes 0 as the hash field, meaning that the hashq of this value will be lazily initialized, and the contents are a new array of a given size and initial value. Inputs will be unboxed to the appropriate type (two i32s in this case), and likewise with outputs; here we produce the universal (ref eq) representation.

The nice thing about %inline-wasm is that the compiler didn’t have to be taught about make-bytevector: this definition suffices, because %inline-wasm can access a number of lower-level capabilities.

dual denotations

But as we learned in my notes on whole-program compilation, any run-time definition is available at compile-time, if it is reachable from a syntax transformer. So this definition above isn’t quite sufficient; we can’t call make-bytevector as part of a procedural macro, which we might want to do. What we need instead is to provide one definition when residualizing wasm at run-time, and another when loading a module at expand-time.

In Hoot we do this with cond-expand, where we expand to %inline-wasm when targeting Hoot, and... what, precisely, at expand-time? Really we need to make a Guile bytevector, so in this sort of case we end up having to include a run-time make-bytevector definition in the (hoot primitives) module. This happens wherever we end up using %inline-wasm.

building to guile

Returning to our graph, we see that there is a red-colored block for Hoot modules, a teal-colored layer on top for those modules that are defined by R7RS, a few oddballs, and then (guile) and Fibers built on top. The (guile) module provides a shim that implements Guile’s own default set of bindings, allowing Guile modules to be loaded on a Hoot system. (guile) is layered on top of the low-level Hoot libraries and, out of convenience, on top of the various R7RS libraries as well, because it was easier to remember what was where in R7RS than in our ad-hoc nest of Hoot internal libraries.

Having (guile) lets Guile hackers build on Hoot. It’s still incomplete but I think eventually it will be capital-G Good. Even for a library that needed more porting, like Fibers (Hoot has no threads, so much of the parallel Concurrent ML implementation can be simplified, and we use an event loop from the Wasm run-time instead of an epoll-based scheduler), it was still pleasant to be able to use define-module and keyword arguments and all of that.

next layers

I mentioned that this tower of terms is incomplete, and so that is one of the next work items for Hoot: complete support for Guile’s run-time library. At that point we’d probably want to merge it into Guile, but that is another topic.

But let’s leave that for another day; until then, happy hacking!

May 21, 2024

Python 3.13 Beta 1

Python 3.13 beta 1 is out, and I've been working on the openSUSE Tumbleweed package to get it ready for the release.

Installing python 3.13 beta 1 in Tumbleweed

If you are adventurous enough to want to test Python 3.13 and you are using openSUSE Tumbleweed, you can give it a try and install the current devel package:

# zypper addrepo -p 1000 https://download.opensuse.org/repositories/devel:languages:python:Factory/openSUSE_Tumbleweed/devel:languages:python:Factory.repo
# zypper refresh
# zypper install python313

What's new in Python 3.13

The Python interpreter is pretty stable nowadays and doesn’t change too much between versions, so if you are writing modern Python, your code should continue working with this new version. But it is actively developed, and new versions bring cool new functionality.

  1. New and improved interactive interpreter, colorized prompts, multiline editing with history preservation, interactive help with F1, history browsing with F2, paste mode with F3.
  2. A set of performance improvements.
  3. Removal of many deprecated modules: aifc, audioop, chunk, cgi, cgitb, crypt, imghdr, mailcap, msilib, nis, nntplib, ossaudiodev, pipes, sndhdr, spwd, sunau, telnetlib, uu, xdrlib, lib2to3.

Enabling Experimental JIT Compiler

Python 3.13 will arrive with experimental functionality to improve performance. We’re building with --enable-experimental-jit=yes-off, so the JIT is disabled by default, but it can be enabled with an environment variable before launching:

$ PYTHON_JIT=1 python3.13

Free-threaded CPython

Python 3.13 has another build option to disable the Global Interpreter Lock (--disable-gil), but we’re not enabling it because in that case it’s not possible to keep the same behavior. Building with --disable-gil will break compatibility.
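If you want to check which kind of build you are running, here is a quick snippet (note that sys._is_gil_enabled() is an underscore-prefixed, semi-private API added in 3.13):

import sys
import sysconfig

# 1 when the interpreter was built with --disable-gil,
# 0 (or None on versions before 3.13) otherwise.
print(sysconfig.get_config_var("Py_GIL_DISABLED"))

# Runtime check on 3.13+: a free-threaded build can still run with the
# GIL turned on, so this reports the live state rather than the build flag.
if hasattr(sys, "_is_gil_enabled"):
    print(sys._is_gil_enabled())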

In any case, it may be interesting to provide another version of the interpreter with the GIL disabled, for specific cases where performance is critical, but that’s something to evaluate.

We can think about having a python313-nogil package, but it’s not trivial to have python313 and python313-nogil installed at the same time on the same system, so I’m not planning to work on that for now.

Black Python Devs Join the GNOME Foundation Nonprofit Umbrella

The GNOME Foundation and Black Python Devs are proud to announce that our organizations have entered into a fiscal sponsorship agreement for the mutual benefit of our communities and the greater open source world. We are thrilled to share that the GNOME Foundation will now serve as the nonprofit umbrella for Black Python Devs (BPDs). The GNOME Foundation will hold BPDs’ assets, accept and process donations, and perform administrative functions on behalf of BPDs, in exchange for a fee that supports the GNOME Foundation.

The GNOME Foundation, a 501(c)(3) nonprofit organization, envisions a world where everyone is empowered by technology they can trust. Since its inception as a project in August 1997 and its establishment as a foundation in August 2000, GNOME has been dedicated to creating a diverse and sustainable free software personal computing ecosystem. Our open source software guarantees certain freedoms for end users, ensuring they have control over their computing environments. With two annual releases, the GNOME desktop is the default environment for many major Linux distributions.

Black Python Devs (BPDs) is a global community hoping to increase the participation of Black and Colo(u)red Pythonistas in the greater Python developer community. Our goal is to become the largest community of Black Python developers in the world and establish our community as a source for diverse leaders in local, regional, and global Python communities. The organization works to establish guidance, mentorship, and career support for Black Pythonistas around the world, and it also creates opportunities for the Python community to invest in local communities of Black Python Devs members. The organization aims to increase the participation of Black Python Devs members in existing Python community programs, events, and initiatives, and it also continues the development and growth of Black Python Devs members by establishing open-source programs.



This new partnership will support Black Python Devs in their fundraising efforts, membership growth, and program development while also providing fiscal support to the GNOME Foundation. By joining forces, we aim to foster a more inclusive tech community and empower more individuals through open source software.

Portrait of Holly Million

“I saw a post on the FOSS Foundation email list sharing that Black Python Devs was seeking a fiscal sponsor for their important work. I immediately reached out to Jay Miller to offer the GNOME Foundation as a place where BPDs could find a welcoming home. I was impressed with Jay’s leadership and his vision for BPDs. One of my key goals for the GNOME Foundation is to create more channels to nurture and include diverse groups in the GNOME community and in the OS world, in general, to create a more inclusive, more representative, more empowered community for our shared work. I am very enthusiastic about this fiscal sponsorship and look forward to seeing BPDs continue to grow and have a positive impact,” said Holly Million, executive director of the GNOME Foundation.

“We were pleasantly shocked when the GNOME Foundation reached out to us!” said Jay Miller, Founder of Black Python Devs. “Our community leaders were excited and strongly supported our partnership plans. It’s important that we push beyond our comfort in order to regularly make an impact. The guidance we’ve received in this process has already better prepared Black Python Devs for the journey ahead.”

Portrait of Jay Miller

This partnership allows Black Python Devs to accept donations as a US nonprofit. Those who want to help financially support the BPDs can now do so at https://blackpythondevs.com. For more information about Black Python Devs, contact leadership@blackpythondevs.com.

May 20, 2024

Fedora 40 Release Party in Prague

Last Friday I organized a Fedora 40 release party in Prague. A month ago I got a message from Karel Ziegler of Etnetera Core asking if we could do a Fedora release party in Prague again. Etnetera Core is a mid-sized company that does custom software development and uses Red Hat technologies. They have a really cool office in Prague which we used as a venue for release parties several times in pre-covid times.

We got a bit unlucky with the date this time. The weather in Prague on Friday was really terrible; it was pouring outside. Moreover, the Ice Hockey World Championship is taking place in Prague right now, and the Czech team played against Austria at the time of the release party. These two things contributed to the lower than expected attendance, but in the end roughly 15 people showed up.

A round table with Fedora swag.
Fedora swag for party attendees.

The talk part was really interesting. In the end it took almost 4 hours because there was a lot of discussion. The first talk was mine, traditionally on Fedora Workstation, which turned into a long discussion about Distrobox vs Toolbx. As a result of that, Luboš Kocman of SUSE got interested in Ptyxis, saying that it’s something they may actually adopt, too.

Lukáš Kotek on stage.
Lukáš Kotek talking about his legacy computing machine.

The second talk was delivered by Lukáš Kotek who talked about building a retro-gaming machine based on Fedora and Atomic Pi. Several people asked us to provide his slides online, so here they are:

The third talk was delivered by Karel Ziegler, who spoke on the new release of his favorite desktop environment – Plasma 6. The last talk was supposed to be delivered by Ondřej Kolín, but at the beginning of the party we were not sure if he’d make it because he was travelling from Berlin and was stuck in Friday traffic. The first three talks took so long due to interesting discussions that Ondřej arrived just in time for his talk.

He spoke about his experience building a simple app and distributing it on Flathub. This again started an interesting discussion about new and traditional models of Linux app distribution.

In the middle of the party we were joined by Andre Klapper, a long-time GNOME contributor living in Prague, and Keywan Tonekaboni, a German open source journalist who is currently on his holidays travelling on trains around the Czech Republic. We found out that we were taking the same train to Brno next day, so on Saturday we had another two hours for Linux software topics. 🙂

I’d like to thank the Fedora Project for sponsoring my travel to Prague to organize the event, and also big thanks to Etnetera Core for providing a perfect venue for the party and for sponsoring the refreshments (they even had a beer tap!) and the party cake.

Fedora 40 cake
Fedora 40 cake.

May 19, 2024

Analysis of GNOME Foundation’s public economy: concerns and thoughts

Apart from software development, I also have an interest in governance and finances. Therefore, last July I was quite happy to attend my first Annual General Meeting (AGM), taking place at GUADEC in Riga. I was a bit surprised by the format, as I was expecting something closer to an assembly than to a presentation with a Q&A at the end. It was still interesting to witness, but I was even more shocked by the huge negative cash flow (difference between revenue and expenditures). With the numbers presented, the foundation had lost approximately 650 000 USD in the 2021 exercise, and 300 000 USD in the 2022 exercise. And nobody seemed worried about it. I would have expected such a difference to be the consequence of a great investment aimed at improving the situation of the foundation long-term. However, nothing like that was part of the AGM. This left me thinking, and a bit worried about what was going on with the finances and organization of the foundation. After asking a member of the Board in private, and getting no satisfactory response, I started doing some investigation.

Public information research

The GNOME Foundation (legally GNOME Foundation Inc) has 501(c)(3) status, which means it is tax exempt. As part of such status, the tax payments, economic status and dealings of the GNOME Foundation Inc are public. So I had a look at the tax filings of the last years. These contain detailed information about income and expenses, net assets (e.g. money in bank accounts), remuneration of the Board, Executive Director, and key employees, the amount of money spent on fulfilling the goals of the foundation, and lots of other things. Despite their broad scope, the tax filings are not very hard to read, and it’s easy to learn how much money the foundation made or spent. Looking at the details, I found several worrying things, like the fact that revenue and expenses in the Annual Report presented at the AGM did not match those in the tax reports, that most expenses were aggregated in sections that require no explanation, or that some required explanations for expenses were missing. So I moved on to open a confidential issue with the Board team in GitLab expressing my concerns.

The answer mostly covered an explanation of the big deficits in the previous years (which would have been great to have in the Annual Report), but was otherwise generally disappointing. Most of my concerns (all of which are detailed below) were answered with nicely-written variations of “that’s a small problem, we are aware of it and working on it”, or “this is not common practice and you can find unrelated information in X place”. It has been 6 months, a new tax statement and annual report are available, but the problems persist. So I am sharing my concerns publicly, with several goals:

  • Make these concerns available to the general GNOME community. Even though everything I am presenting comes from public sources, it is burdensome to research, and requires some level of experience with bureaucracy.
  • Show my interest in the topic, as I plan to stand for the Board of Directors in the next elections. My goal is to become part of the Finance Committee to help improve the transparency and efficiency of the accounting.
  • Make the Board aware of my concerns (and hopefully show that others also share them), so things can be improved regardless of whether I get elected to the Board.

Analysis of data and concerns

The first analysis I did some months ago was not very detailed, and quite manual. This time, I gathered information in more detail and compiled it in a spreadsheet that I made publicly available. All the numbers are taken from GNOME’s Annual Reports and from the tax declarations available on ProPublica. I am very happy to get those values reviewed, as there could always be mistakes. I am still fairly certain that small errors won’t change my concerns, since those are based on patterns and not on one-time problems. So, to my concerns:

  • Non-matching values between reports and taxes: in the last 3 years, for revenue and income, only the revenue presented for Fiscal Year 2021/2022 matches what is actually declared. For the rest, differences vary, but go up to close to 9%. I was told that some difference is expected (as these numbers are crafted a bit earlier than the taxes), that the Board had worked on it, and that the last year (the only one with at least revenue matching) is certainly better. But there is still something like 18 000 USD of mismatch in expenses. For me, this is a clear sign that something is going wrong with the accounting of the foundation, even if it improved in the last year.
  • Non-matching values between reports from different years: each Annual Report contains not only the results for that year, but also those of the previous one. However, the numbers only match half of the time. This is still the case for the latest report in 2023, where suddenly 10 000 USD disappeared from 2022’s expenses, growing the difference from what was declared that year to 27 000 USD. This again shows accounting issues, as previous years’ numbers should certainly not diverge even more from the tax declarations than the initial numbers.
  • Impossibility to match tax declarations and Annual Reports: the way the annual reports are presented makes it impossible to get a more detailed picture of how expenses and revenue are split. For example, more than 99% of the revenue in 2023 is grouped under a single tax category, while the previous year at least 3 were used. However, the split in the Annual Reports remains roughly the same. So either the accounting is wrong in one of those years, or the split of expenses for the Annual Report was crafted from different data sources. Another example is how “Staff” makes up the greatest expense until it ceases to exist in the latest report. However, staff-related expenses in the taxes do not make up for the “Staff” expense in the reports. The chances are that part of that is due to subcontracting, and thus counted under “Fees for services, Other” in the taxes. Unfortunately, that category has its own issues.
  • Missing information in the tax declaration: most remarkably, in the tax filings of fiscal years 2020/2021 and 2021/2022, the category “Fees for services, Other” represents more than 10% of the expenses, which the form clearly states should be explained in a later part of the filing. However, it is not. I was told 6 months ago that this might have to do with some problem with ProPublica not getting the data, and that they would try to fix it. But I was not provided with the information, and 6 months later the public tax filings still have not been amended.
  • Lack of transparency on expenses:
    • First, in the last 2 tax filings, more than 50% of expenses fall under “Other salaries and wages” and “Fees for services, Other”. These fields do not provide enough transparency (maybe they would if the previous point was addressed), which means most of the expenses go effectively unexplained.
    • Second, in the Annual Reports. For the previous 2 years, the biggest expense was by far “Staff”. There exists a website with the staff and their roles, but there is no clear explanation of which money goes to whom or why. This can be a great problem if some part of the community does not feel supported in its affairs by the foundation. Compare this, for example, with Mastodon’s Annual Report, where everybody on a payslip or freelancing is listed, together with how much they earn. This is made worse since the current year’s Annual Report has completely removed that category in favor of others. Tax filings (once available) will, however, provide more context if proper explanations regarding “Fees for services, Other” are finally available.
  • Different categories and reporting formats: the reporting format changed completely in 2021/2022 compared to previous years, and changed completely again this year. This is a severe issue for transparency, since continuously changing formats make it hard to compare between years (which, as noted above, is useful!). One can of course understand that things need to be updated and improved, but such drastic changes do not help with transparency.

There are certainly other small things that caught my attention. However, I hope these examples are enough to get my point across, and there is no need to make this blog post even longer!

Conclusions

My main conclusion from the analysis is that the foundation’s accounting and decision-making regarding expenses have been sub-par in the last years. It is also a big issue that there is a huge lack of transparency regarding the economic status and decision-making of the foundation. I learned more about the economic status of the foundation by reading tax filings than by reading Annual Reports. Unfortunately, opening an issue with the Board six months ago to share these concerns has not made things better. It could be that things are much better than they look from outside, but the lack of transparency makes it not appear so. I hope that I can join the Finance Committee, and help address these issues in the short term!

Status update, 19/05/2024 – GNOME OS and more

Seems this is another of those months where I did enough stuff to merit two posts. (See Thursday’s post on async Rust). Sometimes you just can’t get out of doing work, no matter how you try. So here is part 2.

A few weeks ago I went to the USA for a week to meet a client team who I’ve been working with since late 2022. This was actually the first time I’d left Europe since 2016*. It’s wild how a euro is now pretty much equal in value to a US dollar, but everything costs about double compared to Europe. It was fun though, and good practice for another long trip to the Denver GUADEC in July.

* The UK is still part of Europe, it hasn’t physically moved, has it?

GNOME OS stuff

The GNOME OS project has at least 3 active maintainers and a busy Matrix room, which makes it fairly healthy as GNOME modules go. There’s no ongoing funding for maintenance though and everyone who contributes is doing so mostly as a volunteer — at least, as far as I’m aware. So there are plenty of plans and ideas for how it could develop, but many of them are incomplete and nobody has the free time to push them to completion.

We recently announced some exciting collaboration between Codethink, GNOME and the Sovereign Tech Fund. This stint of full time work will help complete several in-progress tasks. Particularly interesting to me is finishing the migration to systemd-sysupdate (issue 832), and creating a convenient developer workflow and supporting tooling (issue 819) so we can finally kill jhbuild. Plus, of course, making the openQA tests great again.

Getting to a point where the team could start work took a lot of effort, most of which isn’t visible to the outside world. Discussions go back at least to November 2023. Several people worked over months on scoping, estimates, contracts and resourcing the engineering team before any of the coding work started: Sonny Piers working to represent GNOME, and on the Codethink side, Jude Onyenegecha and Weyman Lo, along with Abderrahim Kitouni and Javier Jardón (who are really playing for both teams ;-).

I’m not working directly on the project, but I’m helping out where I can on the communications side. We have at least 3 IRC + Matrix channels where communication happens every day, each with a different subset of people, and documentation is scattered all over the place. Some of the Codethink team are seasoned GNOME contributors, others are not, and the collaborative nature of the GNOME OS project – there is no “BDFL” figure who takes all the decisions – means it’s hard to get clear answers around how things should be implemented. Hopefully my efforts will mean we make the most of the time available.

You can read more about the current work here on the Codethink blog: GNOME OS and systemd-sysupdate, the team will hopefully be posting regular progress updates to This Week In GNOME, and Martín Abente Lahaye (who very recently joined the team on the Codethink side \o/) is opening public discussions around the next generation developer experience for GNOME modules – see the discussion here.

Tiny SPARQL, Twinql, Sqlite-SPARQL, etc.

We’re excited to welcome Demigod and Rachel to the GNOME community, working on a SPARQL web IDE as part of Google Summer of Code 2024.

Since this will hopefully shine a new light on the SPARQL database project, it seems like a good opportunity to start referring to it by a better name than “Tracker SPARQL”, even though we aren’t going to actually rename the whole API and release 4.0 any time soon.

There are a few name ideas already, the front runners being Tiny SPARQL or Twinql, and I still can’t quite decide which I prefer. The former is unique but rather utilitarian, while the latter is a nicer name but is already used by a few other (mostly abandoned) projects. Which do you prefer? Let me know in the comments.


Minilogues and Minifreaks

I picked up a couple of hardware synthesizers, the Minilogue XD and the Minifreak. I was happy for years with my OP-1 synth, but after 6 years of use it has so many faults that it’s unplayable, and replacing it would cost more than a second-hand car; plus, it’s a little too tiny for on-stage use.

The Minilogue XD is one of the only mainstream synths to have an open SDK for custom oscillators and effects; full respect to Korg for their forward thinking here … although their Linux tooling is a closed-source binary with a critical bug that they won’t fix, so there is still some way to go before they get 10/10 for openness.

The Minifreak, by contrast, has a terrible Windows-only firmware update system, which works so poorly that I already had to return the synth to Arturia once after a firmware update caused it to brick itself. There’s a stark lesson here about having open protocols, which hopefully Arturia can pick up on. This synth has absolutely incredible sound design capabilities though, so I decided to keep it and just avoid ever updating the firmware.

Here’s a shot of the Minifreak next to another mini freak:

May 17, 2024

GNOME maintainers: here’s how to keep your issue tracker in good shape

One of the goals of the new GNOME project handbook is to provide effective guidelines for contributors. Most of the guidelines are based on recommendations that GNOME already had, which were then improved and updated. These improvements were based on input from others in the project, as well as by drawing on recommendations from elsewhere.

The best example of this effort was around issue management. Before the handbook, GNOME’s issue management guidelines were seriously out of date, and were incomplete in a number of areas. Now we have shiny new issue management guidelines which are full of good advice and wisdom!

The state of our issue trackers matters. An issue tracker with thousands of open issues is intimidating to a new contributor. Likewise, lots of issues without a clear status or resolution make it difficult for potential contributors to know what to do. My hope is that, with effective issue management guidelines, GNOME can improve the overall state of its issue trackers.

So what magic sauce does the handbook recommend to turn an out-of-control and burdensome issue tracker into a source of calm and delight, I hear you ask? The formula is fairly simple:

  • Review all incoming issues, and regularly conduct reviews of old issues, in order to weed out reports which are ambiguous, obsolete, duplicates, and so on
  • Close issues which haven’t seen activity in over a year
  • Apply the “needs design” and “needs info” labels as needed
  • Close issues that have been labelled “needs info” for 6 weeks
  • Issues labelled “needs design” get closed after 1 year of inactivity, like any other
  • Recruit contributors to help with issue management

To some readers this is probably controversial advice, and likely conflicts with their existing practice. However, there’s nothing new about these issue management procedures. The current incarnation has been in place since 2009, and some aspects of them are even older. Also, personally speaking, I’m of the view that effective issue management requires taking a strong line (being strong doesn’t mean being impolite, I should add – quite the opposite). From a project perspective, it is more important to keep the issue tracker focused than it is to maintain a database of every single tiny flaw in its software.

The guidelines definitely need some more work. There will undoubtedly be some cases where an issue needs to be kept open despite it being untouched for a year, for example, and we should figure out how to reflect that in the guidelines. I also feel that the existing guidelines could be simplified, to make them easier to read and consume.

I’d be really interested to hear what changes people think are necessary. It is important for the guidelines to be something that maintainers feel that they can realistically implement. The guidelines are not set in stone.

That said, it would also be awesome if more maintainers were to put the current issue management guidelines into practice in their modules. I do think that they represent a good way to get control of an issue tracker, and this could be a really powerful way for us to make GNOME more approachable to new contributors.

May 16, 2024

on hoot, on boot

I realized recently that I haven’t been writing much about the Hoot Scheme-to-WebAssembly compiler. Upon reflection, I have been too conscious of its limitations to give it verbal tribute, preferring to spend each marginal hour fixing bugs and filling in features rather than publicising progress.

In the last month or so, though, Hoot has gotten to a point that pleases me. Not to the point where I would say “accept no substitutes” by any means, but good already for some things, and worth writing about.

So let’s start today by talking about bootie. Boot, I mean! The boot, the boot, the boot of Hoot.

hoot boot: temporal tunnel

The first axis of boot is time. In the beginning, there was nary a toot, and now, through boot, there is Hoot.

The first boot of Hoot was on paper. Christine Lemmer-Webber had asked me, ages ago, what I thought Guile should do about the web. After thinking a bit, I concluded that it would be best to avoid compromises when building an in-browser Guile: if you have to pollute Guile to match what JavaScript offers, you might as well program in JavaScript. JS is cute of course, but Guile is a bit different in some interesting ways, the most important of which is control: delimited continuations, multiple values, tail calls, dynamic binding, threads, and all that. If Guile’s web bootie doesn’t pack all the funk in its trunk, probably it’s just junk.

So I wrote up a plan, something to which I attributed the name tailification. In retrospect, this is simply a specific flavor of a continuation-passing-style (CPS) transmutation, late in the compiler pipeline. I’ll elocute more in a future dispatch. I did end up writing the tailification pass back then; I could have continued to target JS, but it was sufficiently annoying and I didn’t prosecute. It sat around unused for a few years, until Christine’s irresistible charisma managed to conjure some resources for Hoot.

In the meantime, the GC extension for WebAssembly shipped (woot woot!), and to boot Hoot, I filled in the missing piece: a backend for Guile’s compiler that tailified and then translated primitive operations to snippets of WebAssembly.

It was, well, hirsute, but cute and it did compute, so we continued to boot. From this root we grew a small run-time library, written in raw WebAssembly, used for slow-paths for the various primitive operations that are part of Guile’s compiler back-end. We filled out Guile primcalls, in minute commits, growing the WebAssembly runtime library and toolchain as we went.

Eventually we started constituting facilities defined in terms of those primitives, via a Scheme prelude that was prepended to all programs, within a nested lexical environment. It was never our intention though to drown the user’s programs in a sea of predefined bindings, as if the ultimate program were but a vestigial inhabitant of the lexical lake—don’t dilute the newt!, we would often say [ed: we did not]— so eventually when the prelude became unmanageable, we finally figured out how to do whole-program compilation of a set of modules.

Then followed a long month in which I would uproot the loot from the boot: take each binding from the prelude and reattribute it into an appropriate module. User code could import all the modules that suit, as long as they were known to Hoot, but no others; it was not until we added the ability for users to programmatically constitute an environment from their modules that Hoot became a language implementation of any repute.

Which brings us to the work of the last month, about which I cannot be mute. When you have existing Guile code that you want to distribute via the web, Hoot required you to transmute its module definitions into the more precise R6RS syntax. Precise, meaning that R6RS modules are static, in a way that Guile modules, at least in absolute terms, are not: Guile programs can use first-class accessors on the module systems to pull out bindings. This is yet another example of what I impute as the original sin of 1990s language development, that modules are just mutable hash maps. You see it in Python, for example: because you don’t know for sure to what values global names are bound, it is easy for any discussion of what a particular piece of code means to end in dispute.

The question is, though, are the semantics of name binding in a language fixed and absolute? Once your language is booted, are its aspects definitively attributed? I think some perfection, in the sense of becoming more perfect or more like the thing you should be, is something to salute. Anyway, in Guile it would be coherent with Scheme’s lexical binding heritage to restitute some certainty as to the meanings of names, at least in a default compilation mode. Lexical binding is, after all, the foundation of the Macro Writer’s Statute of Rights. Of course if you are making a build for development purposes, not to distribute, then you might prefer a build that marks all bindings as dynamic. Otherwise I think it’s reasonable to require the user to explicitly indicate which definitions are denotations, and which constitute locations.

Hoot therefore now includes an implementation of the static semantics of Guile’s define-module: it can load Guile modules directly, and as a tribute, it also has an implementation of the ambient (guile) module that constitutes the lexical soup of modules that aren’t #:pure. (I agree, it would be better if all modules were explicit about the language they are written in—their imported bindings and so on—but there is an existing corpus to accommodate; the point is moot.)

The astute reader (whom I salute!) will note that we have a full boot: Hoot is a Guile. Not an implementation to substitute the original, but more of an alternate route to the same destination. So, probably we should scoot the two implementations together, to knock their boots, so to speak, merging the offshoot Hoot into Guile itself.

But do I circumlocute: I can only plead a case of acute Hoot. Tomorrow, we elocute on a second axis of boot. Until then, happy compute!

Status update, 16/05/2024 – Learning Async Rust

This is another month where too many different things happened to stick them all in one post together. So here’s a ramble on Rust, and there’s more to come in a follow up post.

I first started learning Rust in late 2020. It took 3 attempts before I could start to make functional commandline apps, and the current outcome of this is the ssam_openqa tool, which I work on partly to develop my Rust skills. This month I worked on some intrusive changes to finally start using async Rust in the program.

How it started

Out of all the available modern languages I might have picked to learn, I picked Rust partly for the size and health of its community: every community has its issues, but Rust has no “BDFL” figure and no one corporation that employs all the core developers, both signs of a project that can last a long time. Look at GNOME, which is turning 27 this year.

Apart from the community, learning Rust improved the way I code in all languages, by forcing more risks and edge cases to the surface and making me deal with them explicitly in the design. The ecosystem of crates has most of what you could want (although there is quite a lot of experimentation and therefore “churn”, compared to older languages). It’s kind of addictive to know that when you’ve resolved all your compile time errors, you’ll have a program that reliably does what you want.

There are still some blockers to me adopting Rust everywhere I work (besides legacy codebases). The “cycle time” of the edit+compile+test workflow has a big effect on my happiness as a developer. The fastest incremental build of my simple CLI tool is 9 seconds, which is workable, and when there are compile errors (i.e. most of the time) it’s usually even faster. However, a release build might take 2 minutes. This is 3000 lines of code with 18 dependencies. I am wary of embarking on a larger project in Rust where the cycle time could be problematically slow.

Binary size is another thing, although I’ve learned several tricks to keep ssam_openqa at “only” 1.8MB. Use a minimal arg parser library instead of clap. Use minreq for HTTP. Follow the min-size-rust guidelines. It’s easy to pull in one convenient dependency that brings in a tree of 100 more things, unless you are careful. (This is a problem for C programmers too, but dependency handling in C is traditionally so horrible that we are already conditioned to avoid too many external helper libraries.)

The third thing I’ve been unsure about until now is async Rust. I never immediately liked the model used by Rust and Python, with a complex event loop hidden in the background and a magic async keyword that completely changes how a function is executed and requires all other functions to be async, such that you effectively have two *different* languages: the async variant and the sync variant. When writing library code you might need to provide two completely different APIs to do the same thing, one async and one sync.

That said, I don’t have a better idea for how to do async.

Complicating matters in Rust are the error messages, which can be mystifying if you hit an edge case (see below for where this bit me). So until now I learned to just use thread::spawn for background tasks, with a std::sync::mpsc channel to pass messages back to the main thread, and use blocking IO everywhere. I see other projects doing the same.
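For illustration, here is a minimal sketch of that pattern; the names are made up for this post rather than taken from ssam_openqa:

use std::sync::mpsc;
use std::thread;

fn main() {
    // Channel for passing results from the worker back to the main thread.
    let (tx, rx) = mpsc::channel();

    // Background task on a plain thread, free to use blocking IO.
    thread::spawn(move || {
        // ... do blocking work here ...
        tx.send("work finished").expect("receiver still alive");
    });

    // The main thread blocks until the worker reports back.
    println!("{}", rx.recv().unwrap());
}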

How it’s going

My blissful ignorance came to an end due to changes in a dependency. I was using the websocket crate in ssam_openqa, which embeds its own async runtime so that callers can use a blocking interface in a thread. I guess this is seen as a failed experiment, as the library is now “sluggishly” maintained, the dependencies are old, and the developers recommend tungstenite instead.

Tungstenite seems unusable from sync code for anything more than toy examples; you need an async wrapper such as async-tungstenite (shout out to slomo for this excellent library, by the way). So, I thought, I will need to port my *whole codebase* to use an async runtime and an async main loop.

I tried, and spent a few days lost in a forest of compile errors, but it’s never the correct approach to try and port code “in one shot” and without a plan. To make matters worse, websocket-rs embeds an *old* version of Rust’s futures library. Nobody told me, but there is “futures 0.1” and “futures 0.3”. Only the latter works with the await keyword; if you await a future from futures 0.1, you’ll get an error about not implementing the expected trait. The docs don’t give any clues about this; eventually I discovered the Compat01As03 wrapper, which lets you convert types from futures 0.1 to futures 0.3. Hopefully you never have to deal with this, as you’ll only see futures 0.1 on libraries with outdated dependencies, but now you know.

Even better, I then realized I could keep the threads and blocking IO around, and just start an async runtime in the websocket processing thread. So I did that in its own MR, gaining an integration test and squashing a few bugs in the process.

The key piece is here:

use tokio::runtime;
use std::thread;

...

    thread::spawn(move || {
        let runtime = runtime::Builder::new_current_thread()
            .enable_io()
            .build()
            .unwrap();

        runtime.block_on(async move {
            // Websocket event loop goes here
        });
    });

This code uses the tokio new_current_thread() function to create an async main loop out of the current thread, which can then use block_on() to run an async block and wait for it to exit. It’s a nice way to bring async “piece by piece” into a codebase that otherwise uses blocking IO, without having to rewrite everything up front.

I have some more work in progress to use async for the two main loops in ssam_openqa: these currently have manual polling loops that periodically check various message queues for events and then call thread::sleep for 250 ms. This works fine in practice for processing low-frequency control and status events, but it’s neither the slickest nor the most efficient way to write a main loop. The classy way to do it is using the tokio::select! macro.
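As a rough sketch of where this is heading (the channel names here are invented for illustration, not taken from the actual code), a select!-based loop waits on all the event sources at once instead of polling each queue:

use tokio::sync::mpsc;
use tokio::time::{interval, Duration};

async fn event_loop(
    mut control_rx: mpsc::Receiver<String>,
    mut status_rx: mpsc::Receiver<String>,
) {
    // A periodic tick replaces the manual sleep-and-poll loop.
    let mut tick = interval(Duration::from_millis(250));
    loop {
        tokio::select! {
            Some(event) = control_rx.recv() => println!("control: {event}"),
            Some(event) = status_rx.recv() => println!("status: {event}"),
            _ = tick.tick() => { /* periodic housekeeping, if any */ }
        }
    }
}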

When should you use async Rust?

I was hoping for a simple answer to this question, so I asked my colleagues at Codethink where we have a number of Rust experts.

The problem is, cooperative task scheduling is a very complicated topic. If I convert my main loop to async, but I use the std library blocking IO primitives to read from stdin rather than tokio’s async IO, can Rust detect that and tell me I did something wrong? Well no, it can’t – you’ll just find that event processing stops while you’re waiting for input. Which may or may not even matter.

There’s no way to automatically detect “syscall which might wait for user input” vs “syscall which might take a lot of CPU time to do something” vs “user-space code which might not defer to the main loop for 10 minutes”; and each of these has the same effect of causing your event loop to freeze.
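To make the failure mode concrete, here is a contrived example of my own (not from ssam_openqa): on a current-thread runtime, the blocking read_line prevents the ticker task from ever running until input arrives, and neither the compiler nor the runtime warns you about it.

use std::io::BufRead;
use tokio::time::{sleep, Duration};

#[tokio::main(flavor = "current_thread")]
async fn main() {
    // A background task that should tick twice a second.
    tokio::spawn(async {
        loop {
            sleep(Duration::from_millis(500)).await;
            println!("tick");
        }
    });

    // Blocking IO inside async code: the runtime cannot schedule the
    // ticker task while this call waits for user input.
    let mut line = String::new();
    std::io::stdin().lock().read_line(&mut line).unwrap();
    println!("got: {line}");
}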

The best advice I got was to use tokio console to monitor the event loop and see if any tasks are running longer than they should. This looks like a really helpful debugging tool and I’m definitely going to try it out.

Screenshot of tokio-console

So I emerge from the month a bit wiser about async Rust, no longer afraid to use it in practice, and best of all, wise enough to know that it’s not an “all or nothing” switch – it’s perfectly valid to mix sync and async in different places, depending on what performance characteristics you’re looking for.

May 15, 2024

Ptyxis on Flathub

You can get Ptyxis on Flathub now if you would like to run the stable version rather than Nightly. Unless you’re interested in helping QA Ptyxis or contributing, that is probably the Flatpak you want to have installed.

Nightly builds of Ptyxis use the GNOME Nightly SDK, meaning GTK from main (or close to it). Living on “trunk” can be a bit painful when it goes through major transitions, like the one happening now with the move to Vulkan-by-default.

Enjoy!

May 14, 2024

Generative non-AI

In last week's episode of the Game Scoop podcast an idea was floated that modern computer game names are uninspiring and that better ones could be made by picking random words from existing NES titles. This felt like a fun programming challenge so I went and implemented it. Code and examples can be found in this GH repo.

Most of the game names created in this way are word-salad gobbledygook or literally translated obscure anime titles (Prince Turtles Blaster Family). Running it a few times does give results that are actually quite interesting. They range from games that really should exist (Operation Metroid) to the surprisingly reasonable (Gumshoe Foreman's Marble Stadium), to ones that actually made me laugh out loud (Punch-Out! Kids). Here's a list of some of my favourites, followed by a quick sketch of the approach:

  • Ice Space Piano
  • Castelian Devil Rainbow Bros.
  • The Lost Dinosaur Icarus
  • Mighty Hoops, Mighty Rivals
  • Rad Yoshi G
  • Snake Hammerin'
  • MD Totally Heavy
  • Disney's Die! Connors
  • Monopoly Ransom Manta Caper!
  • Revenge Marble
  • Kung-Fu Hogan's F-15
  • Sinister P.O.W.
  • Duck Combat Baseball

I emailed my findings back to the podcast host and they actually discussed it in this week's show (video here starting at approximately 35 minutes). All in all this was an interesting exercise. However pretty quickly after finishing the project I realized that doing things yourself is no longer what the cool kids are doing. Instead this is the sort of thing that is seemingly tailor-made for AI. All you have to do is to type in a prompt like "create 10 new titles for video games by only taking words from existing NES games" and post that to tiktokstagram.

I tried that and the results were absolute garbage. Since the prompt has to have the words "video game" and "NES", and LLMs work solely on the basis of "what is the most common thing (i.e. popular)", the output consists almost entirely of the most well known NES titles with maybe some words swapped. I tried to guide it by telling it to use "more random" words. The end result was a list of ten games of which eight were alliterative. So much for randomness.

But more importantly every single one of the recommendations the LLM created was boring. Uninspired. Bland. Waste of electricity, basically.

Thus we find that creating a list of game names with an LLM is easy, but the end result is worthless and unusable. Doing the same task by hand did take a bit more effort, but the end result was miles better, because it found new and interesting combinations that a "popularity first" estimator seems unable to match. This matches the preconception I had about LLMs from prior tests and from seeing how other people have used them.

May 11, 2024

GSoC Introductory Post

My journey as a GNOME user started in 2020, when I first set up Ubuntu on my computer, dual-booting it with Windows. Although I wasn't aware of GNOME back then, what I found fascinating was that despite Ubuntu being open source, its performance and UI were comparable to Windows. I became a regular user of Ubuntu and loved the way the GNOME Desktop Environment seamlessly performed different tasks. I could run multiple instances of various applications at the same time without it lagging or crashing, which was often a problem on Windows.

A beginning in open source

The first time I came across the term "open source" was while installing the MinGW GCC Compiler for C++ from SourceForge. I had a rough idea of what the term meant, but being a complete noob at the time, I didn't decide whether to start contributing. When I felt I had enough skills to contribute, I was introduced to p5.js, which is a JavaScript library for creative coding. With my familiarity with JavaScript, the codebase of p5.js was easy to understand, and thus began my journey as an open source contributor. Opening my first PR in p5.js gave me a feeling of accomplishment that reminded me of the time I compiled my first C++ program. I started contributing more, began to learn about the GNOME environment, and wanted to contribute to the desktop environment I had been a user of.

Contributing to GNOME



I learnt about the libraries GLib and GTK that empower programmers to build apps using modern programming techniques. I scrambled through documentation and watched some introductory videos about GLib, GObject, and GObject Introspection, and diving deeper into this repository of knowledge I found myself wanting to learn more about how GNOME apps are built. The GNOME Preparatory Bootcamp for GSoC & Outreachy conducted by GNOME Africa prepared me to become a better contributor. Thanks to Pedro Sader Azevedo and Olosunde Ayooluwa for teaching us about setting up the development environment and getting started with the contribution process. It was around this time that I found out about the programming language Vala and a prospective GSoC project that piqued my interest. I was always fascinated by the low-level implementation details of compilers and how compilers work, and this project was related to the Vala compiler.

Learning the Vala language



Vala is an object-oriented programming language built on top of the GObject type system. It contains many high-level abstractions which the native C ABI does not provide, making it an ideal language to build GNOME applications. Vala is not widely used, so there are few online resources for learning it; however, the Vala tutorial provides robust documentation and is a good starting point for beginners. The best way to learn something is by doing, so I decided to learn Vala by building apps using GTK and Libadwaita. However, being completely new to the GNOME environment, this approach had limited success. I haven't yet learnt GTK or Libadwaita, but I did manage to understand Vala language constructs by reading through the source code of some Vala applications. I worked on some issues in the Vala repository, and this gave me a sneak peek into the workings of the Vala compiler. I got to learn how it builds the Vala AST and compiles Vala code into GObject C, although I still have a lot to learn to understand how it is all put together.

My GSoC Project

As part of my GSoC project, we have to add support for the latest GIR attributes to the Vala compiler and the Vala API generator. We can do this by including these attributes in the parsing and generation of GIR files, and linking them with Vala language constructs if needed. This also involves adding test cases for these attributes to the test suite, to make sure that the .gir and .vapi files are generated correctly. Once this is done, we need to work on Valadoc. Valadoc parses documentation in the Gtkdoc format, but this project involves making it parse documentation in the GI-Docgen format too. Adding this support will require creating some new files and modifying the documentation parser in Valadoc. After implementing this support, the plan is to modernise the appearance of valadoc.org. The website was clearly built a while ago and needs a redesign to make it more interactive and user-friendly. This will require changing some CSS styles and JavaScript code on the website. With the completion of this project, the look of the website will be on par with the online documentation of any other programming language.


Thanks to my mentor Lorenz Wildberg, I now have a coherent idea about what needs to be done in the project and we have a workable plan to achieve it. I'm very optimistic about the project, and I'm sure that we will be able to meet all the project goals within the stipulated timeline. In the coming few days I plan to read the Vala documentation and understand the codebase so that I can get started with achieving project objectives in the coding period.






May Maps

 

It's about time for the spring update of goings on in Maps!

There's been some changes going on since the release of 46.


Vector Map by Default

The vector map is now used by default, and with it Maps supports dark mode. The old raster tiles have also been retired, though there still exists the hidden feature of running with a local tile directory (which was never really intended for general use, but more as a way to experiment with offline map support; the plan is to eventually support proper offline maps, with a way to download areas in a more user-friendly and organized way than providing a raw path…).

Dark Improvements

Following the introduction of dark map support, the default rendering of public transit routes and lines has been improved in dark mode to give better contrast (something that was trickier before, when the map view was always light even when the rest of the UI, such as the sidebar itinerary, was shown in dark mode).



More Transit Mode Icons

Jakub Steiner and Sam Hewitt have been working on designing icons for some additional modes of transit, such as trolley buses, taxis, and monorail.

Trolley bus routes

This screenshot is something I “mocked” by changing the icon for regular buses to temporarily use the newly designed trolley bus icon, as we don't currently have any supported transit route provider in Maps that exposes trolley bus routes. I originally made this for an excursion with a vintage trolley bus I was going to attend, but it was cancelled at the last minute because of technical issues.

Showing a taxi station

And above we have the new taxi icon (this could be used both for showing on-demand communal taxi transit and for taxi stations on the map).

These icons have not yet been merged into Maps, as there's still some work going on to finalize their design. But I thought I still wanted to show them here…

Brand Logos

For a long time we have shown a title image from Wikidata or Wikipedia for places when available. Now we show a logo image (using the Wikidata reference for the brand of a venue) when one is available and the place has no dedicated article.

Explaining Place Types

Sometimes it can be a bit hard to determine the exact place type from the icons shown on the map, especially for the more generic types, such as shops, where we have dedicated icons for some and a generic icon for the rest. We now show the type in the place bubble as well (using the translations extracted from the iD OSM editor).


Places with a name show the icon and type description below the name, dimmed.


For unnamed places we show the icon and type instead of the name, in the same bold style as the name would normally use.

Additional Things

Another detail worth mentioning is that you can now clear the currently shown route from the context menu, so you won't have to open the sidebar again and manually erase the filled-in destinations.

 

Another improvement is that if you have already entered a starting point with “Route from Here”, or entered an address in the sidebar, and then use the “Directions” button from a place bubble, that starting point will now be used instead of the current location.

Besides this, also some old commented-out code was removed… but there's no screenshots of that, I'm afraid ☺

Acrostic Generator: Part one

It’s been a while since my last blog post, which was about my Google Summer of Code project. Even though it has been months since I completed GSoC, I have continued working on the project, increasing acrostic support in Crosswords.

We’ve added support for loading Acrostic Puzzles in Crosswords, but now it’s time to create some acrostics.

Now that Crosswords has acrostic support, I can use screenshots to help explain what an acrostic is and how the puzzle works.

Let’s load an Acrostic in Crosswords first.

Acrostic Puzzle loaded in Crosswords

The main grid here represents the QUOTE: “CARNEGIE VISITED PRINCETON…”, and if we read out the first letter of each clue answer (displayed on the right), it forms the SOURCE. For example, in the image above, the name of the author is “DAVID ….”.
Now, the interesting part is that the answers to the clues are built entirely from the letters of the QUOTE.

Let’s consider another small example:
QUOTE: “To be yourself in a world that is constantly trying to make you something else is the greatest accomplishment.”
AUTHOR: “Ralph Waldo Emerson”

If you look closely, the letters of the SOURCE are all part of the QUOTE. One possible set of answers to the clues could be:

Solutions generated using AcrosticGenerator. Read the first letter of each Answer from top to bottom. It forms ‘Ralph Waldo Emerson’.

Coding the Acrostic Generator

As seen above, to create an acrostic, we need two things: the QUOTE and the SOURCE string. These will be our inputs from the user.

Additionally, we need to set some constraints on the generated word size. By default, MIN_WORD_SIZE is set to 3 and MAX_WORD_SIZE to 20, but users are allowed to change these settings.

Step 1: Check if we can create an acrostic from given input

You must have already guessed it. We check if the characters in the SOURCE are available in the QUOTE string. To do this, we utilize the IPuzCharset data structure.
Without going into much detail, it simply stores characters and their frequencies.
For example, for the string “MAX MIN”, its charset looks like [{‘M’: 2}, {‘A’: 1}, {‘X’: 1}, {‘I’: 1}, {’N’: 1}].

First, we build a charset of the source string and then iterate through it. For the source string to be valid, the count of every character in the source charset should be less than or equal to the count of that character in the quote charset.

for (iter = ipuz_charset_iter_first (source_charset);
     iter;
     iter = ipuz_charset_iter_next (iter))
  {
    IPuzCharsetIterValue value;
    value = ipuz_charset_iter_get_value (iter);

    if (value.count > ipuz_charset_get_char_count (quote_charset, value.c))
      {
        // Source characters are missing in the provided quote
        return FALSE;
      }
  }

return TRUE;

Since we now have a word size constraint, we need to add one more check.
Let's understand it through an example.

QUOTE: LIFE IS TOO SHORT
SOURCE: TOLSTOI
MIN_WORD_SIZE: 3
MAX_WORD_SIZE: 20

Since MIN_WORD_SIZE is set to 3, each generated answer should have at least three letters.

Possible solutions considering every solution
has a length equal to the minimum word size:
T _ _
O _ _
L _ _
S _ _
T _ _
O _ _
O _ _

If we take the sum of the number of letters in the above solutions, it's 21, which is greater than the number of letters in the QUOTE string (14). So we can't create an acrostic from the above input.

if ((n_source_characters * min_word_size) > n_quote_characters)
  {
    // Quote text is too short to accommodate the specified minimum word size for each clue
    return FALSE;
  }

While writing this blog post, I found out we called this error “SOURCE_TOO_SHORT”. It should be “QUOTE_TOO_SHORT” / “SOURCE_TOO_LARGE”.

Stay tuned for further implementation in the next post!

Being a beginner open source contributor

In October 2023, with quite a bit of experience in web development and familiarity with programming concepts in general, I was in search of avenues where I could put this experience to good use. But where could a beginner programmer get the opportunity to work with experienced developers? And that too, in a real-world project with a user-base in millions...

Few people would hire a beginner! We all know the paradox of companies intent on hiring experienced people for entry-level roles. That's where it gets tricky, because we can't really gain experience without being hired. Well... maybe we can :)

What is open source software?




Open source software is source code made available to the public, allowing anyone to view, modify, and distribute the software. It is free and the people who work to improve it are most often not paid. The source code of open source software provides a good opportunity for a beginner to understand, work on, and modify a project to improve its usability.


Why contribute to open source?


Open source contribution has many benefits. It is especially beneficial for a beginner: working on open source projects hones one's skills as a developer and provides a good foundation for a future career in the software industry. Here are some of the major benefits that open source contribution provides:


  • Experience of working on large projects
     Open source projects often have large and complex codebases. Working on such a project requires one to understand the ins and outs of the codebase, how things are put together and how they work to result in a fully functioning software.

  • Read code written by others
     The source code of open source software can be read and modified by anyone. This allows hundreds of people to make code contributions, ranging from fixing bugs and adding new features to just updating the documentation. To do any of this, we need to read and understand code written by others and implement our own changes. This allows the contributor to learn good programming practices like writing readable and well-documented code, using version control tools like Git and GitHub correctly, etc.

  • Ability to work in a team
     An open source project is a joint endeavour and requires collaboration. Any code written in the project must be readable and understandable by every other contributor, and this team effort results in efficient and correctly functioning software. Often, when enhancing the software by adding a new feature or updating legacy code, people need to reach a consensus on what features need to be implemented, how they should be implemented, and what platforms/libraries should be used. This requires discussion with contributors and users, and hones one's ability to work in a team, which is an invaluable skill to have in software development.

  • Opportunity to work with experienced developers
     Since many projects are quite old and have been around for a long time, they have many maintainers with tens of years of experience who have been writing and fixing the codebase for years. This is a good opportunity for a beginner to learn the best programming practices from people with more experience. It helps them become employable and gain the "experience" that companies demand from prospective employees.

  • Using programming skills to benefit end users
     Large projects often have millions of dedicated users who use the software on a daily basis. Software like VLC Media Player or Chromium is quite popular and has a loyal fanbase. If anyone contributes to making the software better, it improves the user experience for millions of people. This contribution might be a small optimization that makes the software load faster, or a new feature that users have been requesting - in any case, it ends up improving the experience for its day-to-day users and makes a meaningful impact on the community.

  • A chance to network with others
     Contributing to open source is a fun and pleasant experience. It allows us to meet people from different backgrounds with different levels of experience. Contributors are often geographically distributed but have the same goal: to ensure the success of the project by benefiting its end users. This common goal allows us to connect and interact with people from diverse backgrounds and with different opinions. It ends up being an enriching learning journey that broadens our perspectives and makes us better developers.
Interested in contributing to open source? This article provides a step-by-step guide on how you can get started with open source contribution. In case of any doubts, please feel free to contact me via email at sudhanshu.98t@gmail.com or connect with me on LinkedIn!

 





May 10, 2024

#147 Secure Keys

Update on what happened across the GNOME project in the week from May 03 to May 10.

Sovereign Tech Fund

Sonny says

As part of the GNOME STF (Sovereign Tech Fund) initiative, a number of community members are working on infrastructure related projects.

Here are the highlights for the past week:

Andy is making progress on URL handling for apps. We are planning on advancing and using the freedesktop intent-apps proposal which Andy implemented in xdg-desktop-portal.

Felix completed the work to add keyring collections support to Key Rack.

Adrien worked on replacing the deprecated and inaccessible GtkTreeView in Disk Usage Analyzer (Baobab).

Adrien worked on replacing the deprecated and inaccessible GtkEntryCompletion in Files (Nautilus).

Dhanuka finalized the Keyring implementation in oo7.

Dhanuka landed rekeying support in oo7.

Hubert made good progress on the USB portal, which is now able to display a permission dialog.

Julian added notifications spec v2 support to GLib GNotification.

Julian created a draft merge request for new notification specs against xdg-desktop-portal-gtk.

Antonio finished preparing Nautilus components for reuse in a new FileChooser window; ready for review.

GNOME Core Apps and Libraries

GLib

The low-level core library that forms the basis for projects such as GTK and GNOME.

Philip Withnall announces

A series of fixes for a GDBus security issue with processes accepting spoofed signal senders has landed. Big thanks to Simon McVittie for putting together the fix (and an impressive set of regression tests), to Alicia Boya García for reporting the issue, and Ray Strode for reviews. https://discourse.gnome.org/t/security-fixes-for-signal-handling-in-gdbus-in-glib/20882

GNOME Circle Apps and Libraries

Ear Tag

Edit audio file tags.

knuxify says

Ear Tag 0.6.1 has been released, bringing a few minor quality-of-life improvements and a switch to the new AdwDialog widgets. You can get the latest release from Flathub.

(Sidenote - I am looking for contributors who would be willing to help with Ear Tag’s testing, bug-fixing and further development, with the goal of potentially finding co-maintainers - if you’re interested, see issue #132 for more details.)

Letterpress

Create beautiful ASCII art

Gregor Niehl says

A new minor release of Letterpress is out! No big UI changes in this 2.1 release, mostly small touches here and there:

  • Images can now be pasted from the clipboard
  • Zoom is now more consistent between different factors
  • The Drag-n-Drop overlay was redesigned (stolen from Loupe)
  • The GNOME runtime was updated to version 46, which means the Tips Dialog now uses the new Adw.Dialog, and the About Dialog is now truly an About Dialog

The app has also been translated to Simplified Chinese!

I’m happy to announce that, in the meantime, @FineFindus has joined the project as maintainer, so it’s no longer maintained by a single person.

Third Party Projects

Alain announces

Planify 4.7.2 is here!

We’re excited to announce the release of Planify version 4.7.2, with exciting new features and improvements to help you manage your tasks and projects even more efficiently!

1. Inbox as Independent Project: We’ve completely rebuilt the functionality of Inbox. Now, it’s an independent project with the ability to move your tasks between different synchronized services. The Inbox is the default place to add new tasks, allowing you to quickly get your ideas out of your head and then plan them when you’re ready.

2. Enhanced Task Duplication: When you duplicate a task now, all subtasks and labels are automatically duplicated, saving you time and effort in managing your projects.

3. Duplication of Sections and Projects: You can now easily duplicate entire sections and projects, making it easier to create new projects based on existing structures.

4. Improvements in Quick Add: We’ve improved the usability of Quick Add. Now, the “Create More” option is directly displayed in the user interface, making it easier to visualize and configure your new tasks.

5. Improvements on Small Screens: For those working on devices with small screens, we’ve enhanced the user experience to ensure smooth and efficient navigation.

6. Project Expiry Date: Your project’s expiry date now clearly shows the remaining days, helping you keep track of your deadlines more effectively.

7. Enhanced Tag Panel: The tag panel now shows the number of tasks associated with each tag, rather than just the number of tags, giving you a clearer view of your tagged tasks.

8. Archiving of Projects and Sections: You can now archive entire projects and sections! This feature helps you keep your workspace organized and clutter-free.

9. New Task Preferences View and Task Details Sidebar: Introducing a new task preferences view! You can now customize your task preferences with ease. Additionally, we’ve enabled the option to view task details using the new sidebar view, providing quick access to all the information you need.

10. Translation Updates: We thank @Scrambled777 for the Hindi translation update and @twlvnn for the Bulgarian translation update.

These are just some of the new features and improvements you’ll find in Planify version 4.7.2. We hope you enjoy using these new tools to make your task and project management even more efficient and productive.

Download the update today and take your productivity to the next level with Planify!

Thank you for being part of the Planify community!

Giant Pink Robots! says

Varia download and torrent manager got a pretty big update.

  • Powerful download scheduling feature that allows the user to specify what times in a week they would like to start or stop downloading, with an unlimited number of custom timespans.
  • The ability to import a cookies.txt file exported from a browser to support downloads from restricted areas like many cloud storage services.
  • Support for remote timestamps if the user wants the downloaded file to have the original timestamp metadata.
  • Two new filtering options on the sidebar for seeding torrents and failed downloads.
  • An option to automatically quit the application when all downloads are completed.
  • An option to start the app in background mode whenever it’s started.
  • Support for Spanish, Persian and Hindi languages.

Mateus R. Costa announces

bign-handheld-thumbnailer (a Nintendo DS and 3DS files thumbnailer) version 0.9.0 was intended to appear in the previous week’s TWIG, but due to performance issues it had to be postponed.

Version 0.9.0 is notable because it finally introduced CXI and CCI support (which was deemed too hard to implement for the original 0.1.0 code), plus a few misc improvements. However, it was pointed out that the thumbnailer was loading full games into memory (official 3DS games can weigh up to almost 4 GB), and even though there were some suggestions on how to improve that, I initially failed to make them work. At that point a COPR repo also became available to help distribute the compiled RPM.

Version 1.0.0 was intended to fix the performance issue once and for all; for more details I recommend reading the blog post about this release on my personal blog: https://www.mateusrodcosta.dev/blog/bign-handheld-thumbnailer-what-i-learned-linux-thumbnailer-rust/

Gir.Core

Gir.Core is a project which aims to provide C# bindings for different GObject based libraries.

badcel reports

Gir.Core 0.5.0 was released. This is one of the biggest releases since the initial release:

  • A lot of new APIs are supported (especially records)
  • Bugs got squashed
  • The library versions were updated to GNOME SDK 46 and target .NET 8 in addition to .NET 6 and 7
  • New samples were added
  • The homepage got updated

Anyone interested in bringing C# / F# back into the Linux ecosystem is welcome to come by and try out the new version.

Miscellaneous

Dan Yeaw says

Gvsbuild, the GTK stack for Windows, version 2024.5.0 is out. Along with the latest GTK version 4.14.4, we also released for the first time a pre-built version of the binaries. To set up a development environment for a GTK app on Windows, you can unzip the package, set a couple of environment variables, and start coding.

That’s all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

May 09, 2024

libwacom and Huion/Gaomon devices

TLDR: Thanks to José Exposito, libwacom 2.12 will support all [1] Huion and Gaomon devices when running on a 6.10 kernel.

libwacom, now almost 13 years old, is a C library that provides a bunch of static information about graphics tablets that is not otherwise available by looking at the kernel device. Basically, it's a set of APIs in the form of libwacom_get_num_buttons and so on. This is used by various components to be more precise about initializing devices, even though libwacom itself has no effect on whether the device works. It's only a library for historical reasons [2]; if I were to rewrite it today, I'd probably ship libwacom as a set of static JSON or XML files with a specific schema.

Here are a few examples of how this information is used: libinput uses libwacom to query information about tablet tools. The kernel event node always supports tilt, but the individual tool that is currently in proximity may not. libinput can get the tool ID from the kernel, query libwacom, and then initialize the tool struct correctly so the compositor and Wayland clients will get the right information. GNOME Settings uses libwacom's information to e.g. detect if a tablet is built-in or an external display (to show you the "Map to Monitor" button or not, if built-in), and GNOME's mutter uses the SVGs provided by libwacom to show you an OSD where you can assign keystrokes to the buttons. All these features require that the tablet is supported by libwacom.

Huion and Gaomon devices [3] were not well supported by libwacom because they re-use USB IDs, i.e. different tablets from seemingly different manufacturers have the same vendor and product ID. This is understandable: the 16-bit product ID only allows for 65535 different devices, and if you're a company that thinks about more than just the current quarterly earnings, you realise that if you release a few devices every year (let's say 5-7), you may run out of product IDs in about 10000 years. Need to think ahead! So between the 140 Huion and Gaomon devices we now have in libwacom, I counted only 4 different USB IDs. Nine years ago we added name matching too, to work around this (i.e. the vid/pid/name combo must match) but, lo and behold, we may run out of unique strings before the heat death of the universe, so device names are re-used too! [4] Since we had no other information available to userspace, this meant that if you plugged in e.g. a Gaomon M106, it may have been detected as an S620 and given wrong button numbers, a wrong SVG, etc.

A while ago José got himself a tablet and started contributing to DIGIMEND (and upstreaming a bunch of things). At some point we realised that the kernel actually had the information we needed: the firmware version string from the tablet which conveniently gave us the tablet model too. With this kernel patch scheduled for 6.10 this is now exported as the uniq property (HID_UNIQ in the uevent) and that means it's available to userspace. After a bit of rework in libwacom we can now match on the trifecta of vid/pid/uniq or the quadrella of vid/pid/name/uniq. So hooray, for the first time we can actually detect Huion and Gaomon devices correctly.

The second thing José did was to extract all model names from the .deb packages Huion and Gaomon provide and auto-generate libwacom descriptions for all supported devices. Which meant that in one pull request we added around 130 devices. Nice!

As said above, this requires the future kernel 6.10 but you can apply the patches to your current kernel if you want. If you do have one of the newly added devices, please verify the .tablet file for your device and let us know so we can remove the "this is autogenerated" warnings and fix any issues with the file. Some of the new files may now take precedence over the old hand-added ones so over time we'll likely have to merge them. But meanwhile, for a brief moment in time, things may actually work.

[1] fsvo of all, but it should be all current and past ones, provided they were supported by Huion's driver
[2] anecdote: in 2011 Jason Gerecke from Wacom and I sat down and decided on a generic tablet handling library independent of the xf86-input-wacom driver. libwacom was supposed to be that library, but it never turned into more than a static description library; libinput is now what our original libwacom idea was.
[3] and XP Pen and UCLogic but we don't yet have a fix for those at the time of writing
[4] names like "HUION PenTablet Pen"...

May 07, 2024

System Extensions from Flatpak

I write about Sysprof here quite often. Mostly in hopes of encouraging readers to use it to improve Linux as a whole.

An impediment to that is the intrusiveness to test out new features as they are developed. If only we had a Flatpak which you could install to test things right away.

One major hurdle is how much information a profiler needs to be useful. The first obvious “impossible to sandbox” API you run into is the perf subsystem. It provides information about all processes running on the system and their memory mappings which would make snooping on other processes trivial. Both perf and ptrace are disabled in the Flatpak sandbox.

After that, you still need unredacted access to the kernel symbols and their address mappings (kallsyms). You also need to be in a PID namespace that allows you to see all the processes running on the system and their associated memory mappings, which essentially means CAP_SYS_ADMIN.

Portable Services

Years ago, portable services were introduced into systemd through portablectl. I had high hopes for this because it meant that I could perhaps ship a squashfs and inject it as a transient service on the host.

However, Sysprof needs more integration than portable services could provide, because they are still rather isolated from the host. We need to own a D-Bus name and integrate with policy-kit actions, in addition to the systemd service.

Even if that were all possible with portable services it wouldn’t get us access to some of the host information we need to properly decode symbols.

System Extensions

Then came along systemd-sysext. It provides a way to “layer” extensions on top of the host system’s /usr installation rather than in an isolated mount namespace.

This sounds much more promising because it would allow us to install .policy for policy-kit, .service files for Systemd and D-Bus, or even udev rules.

Though, with great power comes excruciating pain, or something like that.

Because extensions overlay the host's /usr, anything you ship must run with only what the host provides. So if you need to provide binaries that run on the host, you need to either statically link (Rust, Go, Zig perhaps?) or use something you can reasonably expect to be there (Python?).

In the Sysprof case, everything is C, so it can statically link almost everything by being clever about how it builds against glibc. Though this still requires glibc, and quite frankly I'm fine with that. Potentially, one could use musl or uClibc if they had a high enough pain threshold for build tooling.

Bridging Flatpak and System Extensions

The next step would be to find a way to bridge system extensions and Flatpak.

In the wip/chergert/sysext branch of Sysprof I’ve made it build a number of things statically so that I can provide a system extension directory tree at /app/lib/extensions. We can of course choose a different path for this but that seemed similar to /var/lib/extensions.

Here we see the directory tree laid out. To do this right for systemd-sysext we also need to install an extension point file but I’ll save that for another day.

The Directory Tree

$ find /app/lib/extensions -type f
/app/lib/extensions/usr/lib/systemd/system/sysprof3.service
/app/lib/extensions/usr/share/polkit-1/actions/org.gnome.sysprof3.policy
/app/lib/extensions/usr/share/dbus-1/system-services/org.gnome.Sysprof3.service
/app/lib/extensions/usr/share/dbus-1/system.d/org.gnome.Sysprof3.conf
/app/lib/extensions/usr/libexec/sysprofd

Registering the Service

First we need to symlink our system extension into the appropriate place for systemd-sysext to pick it up. Typically /var/lib/extensions is used for transient services so if this were being automated we might use another directory for this.

# mkdir -p /var/lib/extensions
# ln -s /var/lib/flatpak/org.gnome.Sysprof.Devel/current/active/files/lib/extensions/ /var/lib/extensions/org.gnome.Sysprof.Devel

Now we need to merge the extension so it overlays into /usr. We must use --force because we didn’t yet provide an appropriate extension point file for systemd.

# systemd-sysext merge --force
Using extensions 'org.gnome.Sysprof.Devel'.
Merged extensions into '/usr'.

And now make sure our service was installed to the appropriate location.

# ls -l /usr/lib/systemd/system/sysprof3.service
-rw-r--r-- 2 root root 115 Dec 31 1969 /usr/lib/systemd/system/sysprof3.service

Next we need to reload the systemd daemon, but newer versions of systemd do this automatically.

# systemctl daemon-reload

Here is where things get a bit tricky, because they are somewhat specific to the system. I think we should make this better in the appropriate upstream projects to avoid it altogether, but it could also be easily handled with a Flatpak installation trigger.

First make sure that policy-kit reloads our installed policy file.

# systemctl restart polkit.service

With dbus-broker, we also need to reload configuration to pick up our new service file. I'm not sure if dbus-daemon would require this; I haven't tested that. Though I wouldn't be surprised if this is related to inotify file monitors and introducing a merged /usr.

# gdbus call -y -d org.freedesktop.DBus \
-o /org/freedesktop/DBus \
-m org.freedesktop.DBus.ReloadConfig

At this point, the service should be both systemd- and D-Bus-activatable. We can verify that with another quick gdbus call.

# gdbus call -y -d org.gnome.Sysprof3 \
-o /org/gnome/Sysprof3 \
-m org.freedesktop.DBus.Peer.Ping
()

Now I can run the Flatpak as normal and it should be able to use the system extension to get profiling and system data from the host as if it were package installed.

$ flatpak run org.gnome.Sysprof.Devel

The following screenshots come from GNOME OS using yesterday's build with what I've described in this post. However, it also works on Fedora Rawhide (and probably Fedora 40) if you boot with selinux=0. More on that in the FAQ below.

Flatpak Integration

So obviously nobody would want to do all the work above just to make their Flatpak work. The user-facing goal here would be for the appropriate triggers to be provided by Flatpak to handle this automatically.

Making this happen in an automated fashion from flatpak installation triggers on the --system installation does not seem terribly out-of-scope. It’s possible that we might want to do it from within the flatpak binary itself but I don’t think that is necessary yet.

FAQ

What about non-system installations?

It would be expected that system extensions require installing to a system installation.

It does not make sense to allow for a --user installation, controllable by an unprivileged user or application, to be merged onto the host.

Does SELinux affect this?

In fact it does.

While all of this works out-of-the-box on GNOME OS, systems like Fedora will need work to ensure their SELinux policy does not prevent system extensions from functioning. Of course you can boot with selinux=0, but that is not viable/advised on end-user installations.

In the Sysprof case, AVC denials would occur when trying to exec /usr/libexec/sysprofd.

Does /usr become read-only?

If you have systemd <= 255 then systemd-sysext will most definitely leave /usr read-only. This is problematic if you want to modify your system after merging, but makes sense because sysext was designed for immutable systems.

For example, say you wanted to sudo dnf install a-package on Fedora. That would fail because /usr becomes read-only after systemd-sysext merge.

In systemd >= 256 there is effort underway to make /usr writable by redirecting writes to the top-most writable layer. Though my early testing of Fedora Rawhide with systemd 256~rc1 still shows this is not yet working.

So why not a Portal?

One could write a portal for profilers alone, but that portal would essentially be sysprofd and would likely be extremely application-specific.

Can I use this for udev rules?

You could.

Though you might be better served by using the new Device and/or USB portals which will both save you code and systems integration hassle.

Can I have different binaries per OS?

Yes.

The systemd-sysext subsystem has a directory layout which allows for matching on some specific information in /etc/os-release. You could, for example, have a different system extension for specific Debian or CentOS Stream versions.

Can they be used at boot?

If we choose to symlink into a persistent systemd-sysext location (perhaps /etc/extensions) then they would be available at boot.

Can services run independent of user app?

Yes.

It would be possible to have a system service that could run independently of the user facing application.

May 05, 2024

This website now has GDPR friendly statistics

Libre Counter logotype

This website now uses a simple statistics system which is GDPR compatible and privacy friendly. It uses Libre Counter, which needs neither user registration nor any configuration beyond adding some code like this:

<a href="https://librecounter.org/referer/show" target="_blank">
    <img src="https://librecounter.org/unique.svg" alt="GDPR friendly statistics" width="14" style="filter: grayscale(1);" title="GDPR friendly statistics" referrerpolicy="unsafe-url"/>
</a>

No cookies either.

Thanks Pinchito!

May 03, 2024

#146 Editing Markdown

Update on what happened across the GNOME project in the week from April 26 to May 03.

Sovereign Tech Fund

Tobias Bernard announces

As part of the GNOME STF (Sovereign Tech Fund) initiative, a number of community members are working on infrastructure related projects.

Here are the highlights for the past two weeks:

Dorota created a standalone dialog in GNOME Control Center to let users choose/approve/reject when an app requests Global Shortcuts.

Dhanuka landed Add rekeying support for oo7::portal::Keyring in oo7.

Hub implemented the in-progress USB portal in ashpd, to demo and test it.

Sophie landed libglycin: Add C/glib/gir API for glycin crate. This will let language bindings use Glycin. A first version of the C API for glycin is available at https://sophie-h.pages.gitlab.gnome.org/glycin/c-api/. Via GObject introspection (https://developer.gnome.org/documentation/guidelines/programming/introspection.html) it is also usable with GJS, Python, and Vala.

Antonio is making great progress on using Nautilus as a file picker.

Julian finalized the notification portal specs, reviews welcome!

Jonas landed a fix for a long-standing touch bug

Jonas opened a merge request to improve GNOME Shell layout on smaller displays

Adrien opened an MR to replace deprecated and inaccessible GtkEntryCompletion in Nautilus.

The recording of Matt’s talk from Open Source Summit North America is now available; you can watch it on YouTube.

GNOME Circle Apps and Libraries

Apostrophe

A distraction free Markdown editor.

Manu says

After two years of development, I’m glad to announce that Apostrophe 3.0 is here! Almost every aspect of the application has seen improvements, from the obvious ones like the port to GTK4 and the refined interface, to several improvements under the hood. Among the new features are:

  • A new toolbar so you don’t have to remember Markdown syntax
  • A more secure approach when opening and rendering files
  • Autoindentation and autocompletion for lists and braces
  • An improved Hemingway mode
  • The document stats will also show stats for the selected text

You can download it on Flathub.

Workbench

A sandbox to learn and prototype with GNOME technologies.

Sonny says

Workbench 46.1 is out!

See what’s new and details at https://blog.sonny.re/workbench-46-1

Railway

Travel with all your train information in one place.

schmiddi announces

Railway version 2.5.0 was released. It contains updates to the GNOME 46 runtime, as well as the addition of the PKP provider (and removal of the INSA provider, due to its API failing to search for locations). It furthermore now tries to query the remarks of journeys in the system language, fixes a crash in the Spanish translation of the app, and provides a fix for the RMV provider throwing an error.

GNOME Core Apps and Libraries

Vala

An object-oriented programming language with a self-hosting compiler that generates C code and uses the GObject system.

lwildberg reports

Last month Reuben Thomas completed his port of Enchant to Vala! Enchant is a spellchecking library also used in GNOME. Read the blog post about it and also his experience on porting another project (Zile) to Vala here.

Tracker

A filesystem indexer, metadata storage system and search tool.

Sam Thursfield announces

The Tracker SPARQL developers are very happy to welcome rachle08 and Demigod, who will be joining the team as part of Google Summer of Code, working on a project to add a web-based query editor and generally improve the developer experience.

Sam Thursfield reports

In Tracker SPARQL, Carlos Garnacho worked around a non-backwards-compatible change released in SQLite 3.45.3. This change causes errors that look like “ambiguous column name: ROWID (0)”. The fix will be in the next stable release - see Discourse for more details.

Software

Lets you install and update applications and system extensions.

Philip Withnall reports

Automeris naranja has made headway in porting gnome-software to the shiny new AdwDialog (and other related new libadwaita APIs).

Third Party Projects

José reports

I’ve released my first app on Flathub! Mingle is a simple app to play with Google’s Emoji Kitchen and copy the combined emoji to your clipboard. It is written in Vala and has been my little pet project these past few months as a learning exercise.

slomo reports

The GStreamer GTK4 video sink got support for directly importing video frames as dmabufs on Linux when using GStreamer 1.24 / GTK 4.14, in addition to the already existing support for importing OpenGL textures or video frames in normal system memory. This new feature is available in version 0.12.4 of the sink plugin.

This is especially useful for video players using hardware decoders, or applications that display a video stream from a camera (via v4l2 or pipewire). Under certain conditions it allows side-stepping the GL/Vulkan rendering of the video frames inside GTK, letting composition be done by the Wayland compositor, or even passing the dmabufs directly to the GPU kernel driver. Doing so reduces GPU utilization, which frees resources for other tasks and reduces power consumption. See Matthias' blog post on the GTK blog for more details about the dmabuf and graphics offloading support in GTK.

Alain reports

Planify 4.7 is here! We’re thrilled to announce this version, which brings a host of exciting enhancements and new features that will make your task and project management experience smoother and more efficient than ever. Let’s take a look at what we’ve added:

Advanced Filtering Function: Now in Planify, you can filter your tasks within a project based on priority, due date, and assigned tags. Take control of your tasks like never before!

Custom Sorting in Today View: Personalize the Today view by sorting your tasks the way you prefer. Make your day more productive by organizing tasks your way!

Instant Task Details: With our new task detail view in the sidebar, you can quickly access all relevant task information while in the Board view. Keep your workflow uninterrupted!

Efficient Management of Completed Tasks: Now, deleting completed tasks is easier than ever. Keep your workspace clean and organized with just a few clicks!

Attach Files to Your Tasks: Never lose track of important files related to your tasks. With the file attachment feature, keep all relevant information in one place.

Celebrate Achievements with Sound: Want a fun way to celebrate your achievements? Now you can play a sound when completing a task. Make every accomplishment even more satisfying!

Bug Fixes and Performance Improvements: We’ve addressed a number of errors, from project duplication to issues with animation when adding subtasks. Additionally, we’ve updated translations in various languages, including Hindi, Bulgarian, Brazilian Portuguese, and Spanish.

Download the latest version of Planify on Flathub now and take your task management to the next level! For any feedback, suggestions or bug reports, please file an issue at the Github issue tracker.

Link Dupont says

Version 0.2.2 of Damask is here. This release has been in the works for a while, so it contains a lot of bug fixes and UI improvements.

  • Wallhaven: correctly set aspect ratio in search query
  • Reset the refresh timer after a manual refresh
  • Wallhaven: refresh wallpaper when preferences change
  • Refresh wallpaper preview only when a preview is available
  • Set active source by selecting the row in the source list
  • NASA: rename row title to “NASA Astronomy”
  • Sort source list alphabetically
  • Add a setting to disable automatic refresh
  • Improve support for a default “no source” application state
  • Fix preview image dimensions
  • Remove the “manual” source (disable automatic refresh instead)
  • NASA: replace user-defined API key with a value supplied at compilation
  • Unsplash: replace user-defined API key with a value supplied at compilation
  • Wallhaven: add explanatory text for the API key field
  • EarthView: update photo source
  • Slideshow: allow any image type when filtering files

Download it on Flathub today!

Martín Abente Lahaye reports

Gameeky 0.6.4 is now available on Flathub. This new release brings minor fixes for running Gameeky on other platforms, and it’s now fully available in Brazilian Portuguese 🇧🇷, thanks to Rafael Fontenelle. As a result of Rafael’s efforts, the offline documentation can now be translated using regular gettext-based tools, making it much easier to do so.

Turtle

Manage git repositories in Nautilus.

Philipp reports

Turtle 0.8 has been released.

Retrieving the log commits and calculating the graph is now much faster. Opening the log for, e.g., the gnome-shell repo now takes only a few seconds; if “Show all branches” is checked it takes roughly 15 seconds. Before, it took roughly 1 minute 40 seconds, depending on your hardware of course.

There is now also a merge dialog available to merge a branch or commit into the current head. It is also possible to start a merge directly from the log context menu.

For easier usage, help output has been added to both the turtle_cli and the turtlevcs Python package, a bash completion file has been added, and an emblem dialog has been added to the settings dialog.

And there are many more minor fixes and tweaks, see the full changelog.

Mahjongg

A solitaire version of the classic Eastern tile game.

Mat announces

Mahjongg has received a whole slew of improvements in the last few weeks:

  • Complete dark/light mode support with separate backgrounds for each tileset
  • Faster loading times (almost instant, compared to the previous ~5 seconds for some tile layouts)
  • Moved tile layout switcher to the main menu, for easier access
  • Ported to newer GTK/libadwaita widgets, such as Gtk.ColumnView and Adw.Dialog
  • All known bugs addressed (issue tracker is empty!)

These changes are not released yet, but are available for testing in the nightly Flatpak package:
flatpak install gnome-nightly org.gnome.Mahjongg.Devel

Fractal

Matrix messaging app for GNOME written in Rust.

Kévin Commaille reports

Here comes the bride Fractal 7, with extended encryption support and improved accessibility. Server-side key backup and account recovery have been added, bringing greater security. Third-party verification has received some bug fixes and improvements. Amongst the many accessibility improvements, navigability has increased, especially in the room history. But that’s not all we’ve been up to in the past three months:

  • Messages that failed to send can now be retried or discarded.
  • Messages can be reported to server admins for moderation.
  • Room details are now considered complete, with the addition of room address management, permissions, and room upgrade.
  • A new member menu appears when clicking on an avatar in the room history. It offers a quick way to do many actions related to that person, including opening a direct chat with them and moderating them.
  • Pills are clickable and let you go directly to a room or member profile.

As usual, this release includes other improvements, fixes and new translations thanks to all our contributors, and our upstream projects.

We want to extend special thanks to the translators who worked on this version. We know this is a huge undertaking and have a deep appreciation for what you’ve done. If you want to help with this effort, head over to Damned Lies.

It is available right now on Flathub.

We are already hard at work for our next release, so if you want to give us a hand you can start by looking at our Newcomer issues or just come say hello in our Matrix room.

Miscellaneous

Sophie (she/her) announces

Glycin is gaining support for other programming languages. Glycin is a library that features sandboxed and extendable image loading and is used by Image Viewer. It is written in Rust and so far only provided a Rust API. As part of my work for GNOME STF, it has now gained initial support for being used with other languages. The basis for this is the C API. Via GObject introspection it is now also usable with GJS, Python, and Vala (untested).

The advantages of Glycin over the well-proven GdkPixbuf are improved security, more reliable and dynamically adjusted memory usage limits, and reliable termination of loading processes. Currently, the drawbacks include a slightly increased overhead and missing support for anything but Linux.

Google Summer of Code

Pedro Sader Azevedo announces

We are happy to announce that GNOME was assigned eight slots for Google Summer of Code (GSoC) projects this year!

GSoC is a program focused on bringing new contributors into open source software development. A number of long term GNOME developers are former GSoC interns, making the program a very valuable entry point for new members in our community.

In 2024, we will be mentoring the following projects:

GNOME Foundation

Caroline Henriksen announces

The GNOME Asia 2024 Call for Locations is open! If you are interested in hosting this year’s conference in your city make sure to submit an intent to bid by May 15, and a final proposal by June 6. More details about how to submit a proposal can be found here: https://foundation.gnome.org/2024/04/30/call-for-gnome-asia-2024-location-proposals/

The GUADEC 2025 Call for Locations is also open! For next year’s conference, we’re accepting bids from anywhere in the world. If you would like to bring GUADEC to your city make sure to submit an intent to bid today (May 3) and your full proposal by May 31. More details can be found here: https://foundation.gnome.org/2024/04/18/call-for-guadec-2025-location-proposals/

Registration is open for GUADEC 2024. This year’s conference takes place on July 19-24 in Denver, Colorado, USA. Let us know if you’ll be attending, remotely or in person, by registering on guadec.org. For anyone attending in person, we’ve organized a social outing to a Colorado Rockies baseball game! You can learn more and register to attend here: https://events.gnome.org/event/209/page/331-colorado-rockies-baseball-game

That’s all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

May 02, 2024

Outreachy May 2024: A letter to Fedora applicants

The post Outreachy May 2024: A letter to Fedora applicants appeared first on /home/jwf/.

/home/jwf/ - Free & Open Source, technology, travel, and life reflections

To all Outreachy May 2024 applicants to the Fedora Project,

Today is May 2nd, 2024. The Outreachy May 2024 round results will be published in a few short hours. This year, the participation in Fedora for Outreachy May 2024 was record-breaking. Fedora will fund three internships this year. During the application and contribution phase, over 150 new contributors appeared in our Mentored Project contribution channels. For the project I am mentoring specifically, 38 applicants recorded contributions and 33 applicants submitted final applications. This is my third time mentoring, but this Outreachy May 2024 round has been a record-breaker for all the projects I have mentored until now.

But breaking records is not what this letter is about.

This day can be either enormously exciting or enormously disappointing. It is a tough day for me. There are so many Outreachy applicants who are continuing to contribute after the final applications were due. I see several applicants from my project who are contributing across the Fedora community, and actually leveling up to even bigger contributions than during the application period. It is exciting to see people grow in their confidence and capabilities in an Open Source community like Fedora. Mentoring is a rewarding task for me, and I feel immensely proud of the applicants we have had in the Fedora community this round.

But the truth is difficult. Fedora has funding for three interns, hard and simple. Hard decisions have to be made. If I had unlimited funding, I would have hired so many of our applicants. But funding is not unlimited. Three people will receive great news today, and most people will receive sad news. Throughout this entire experience in the application phase, Joseph Gayoso and I wanted to design our project so that even folks who were not selected would have an enriching experience. We wanted to put something real in the hands of our applicants at the end. We also wanted to boost their confidence in showing up in a community, and guide them on how to roll up their sleeves and get started. Looking at the portfolios that applicants to our project submitted, I admire how far our applicants came since the day that projects were announced. Most applicants had never participated in an open source community before. And for some, you would never have known that either!

So, if you receive the disappointing news today, remember that it does not reflect badly on you. The Outreachy May 2024 round was incredibly competitive. Literally, record-breaking. We have to say no to many people who have proved that they have what it takes to be a capable Fedora Outreachy intern. I hope you can look at all the things you learned and built over these past few months, and use this as a step-up to the next opportunity awaiting you. Maybe it is an Outreachy internship in a future round, or maybe it is something else. If there is anything I have learned, it is that life takes us on the most unexpected journeys sometimes. And whatever is meant to happen, will happen. I believe that there is a reason for everything, but we may not realize what that reason is until much later in the future.

Thank you to all of the Fedora applicants who put in immense effort over the last several months. I understand if you choose to stop contributing to Fedora. I hope that you will not be discouraged from open source generally though, and that you will keep trying. If you do choose to continue contributing to Fedora, I promise we will find a place for you to continue on. Regardless of your choice, keep shining and be persistent. Don’t give up easily, and remember that what you learned in these past few months can give you a leading edge on that next opportunity waiting around the corner for you.

Freedom, Friends, Features, First!

— Justin

GNOME will be mentoring 8 new contributors for Google Summer of Code 2024

We are happy to announce that GNOME was assigned eight slots for Google Summer of Code projects this year!

GSoC is a program focused on bringing new contributors into open source software development. A number of long term GNOME developers are former GSoC interns, making the program a very valuable entry point for new members in our project.

In 2024 we will be mentoring the following projects:

  • “Add TypeScript Support to Workbench” by Angelo Verlain Shema, mentored by Sonny Piers
  • “Port Workbench demos to Vala, build a new Workbench Library, and replace the current code search” by Bharat Tyagi, mentored by Sonny Piers
  • “Improve Tracker SPARQL developer experience by creating a ‘web IDE’ for developing queries” by Demigod, mentored by Carlos Garnacho
  • “Papers’ small screen and touch support for mobile and tablet” by Markus Göllnitz, mentored by Pablo Correa Gomez
  • “More durable synching for FlatSync” by Mattia Formichetti, mentored by Rasmus Thomsen
  • “Port libipuz to Rust” by pranjal_, mentored by Jonathan Blandford
  • “Improve Tracker SPARQL developer experience by creating ‘web IDE’ for developing queries” by rachle08, mentored by Carlos Garnacho
  • “Add support for the latest GIR attributes and gi-docgen formatting to Valadoc” by sudhanshuv1, mentored by Lorenz Wildberg

As part of the contributor’s acceptance into GSoC they are expected to actively participate in the Community Bonding period (May 1 – 26). The Community Bonding period is intended to help prepare contributors to start contributing at full speed starting May 27.

The new contributors will soon get their blogs added to Planet GNOME making it easy for the GNOME community to get to know them and the projects that they will be working on.

We would like to also thank our mentors for supporting GSoC and helping new contributors enter our project.

If you have any questions, feel free to reply to this Discourse topic or message us privately at soc-admins@gnome.org

 

May 01, 2024

Workbench 46.1

International Workers' Day marks the release of Workbench 46.1

Download on Flathub

This new release comes with

Save/restore window state and dimensions for each session/project. I couldn't use the method defined in Saving and restoring state into GSettings because resizing manually with the mouse triggers a lot of blocking disk writes and causes the user action to appear sluggish. So I debounce the events and write to GSettings manually.

Find text in the current editor. The keyboard shortcut is Ctrl+F. This is a feature started by Sriyansh Shivam during their GSoC 2023 internship and finished by UrtsiSantsi. Thanks!

SVG Library entry. GTK (via gdk-pixbuf) draws SVGs at their intrinsic sizes (what is defined in the SVG document). If you use different dimensions, for example via GtkImage.pixel-size, then it is the pixmap that gets upscaled, resulting in pixelated / blurry images. This new Library entry showcases how to render an SVG at arbitrary dimensions using librsvg. We may need a better primitive in the future; if you need an SVG Paintable, GTK Demo contains an example. See also this conversation.
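The Library entry goes through librsvg’s API; as a rough command-line analogue (assuming the rsvg-convert tool that ships with librsvg), rendering at explicit dimensions instead of upscaling the resulting pixmap looks like this:

$ rsvg-convert --width 512 --height 512 drawing.svg -o drawing.png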


Reveal In Files. This is a new option available in the menu that will reveal the session/project in Files. You can use it to add assets to your project and load them using workbench.resolve or as icons.

Import icons into your projects. Using the “Reveal In Files” option you can now add custom icons to your projects. It's just a matter of dropping the files in the right folder. See the “Using Icons” Library entry.


Workbench included an Icon Library and icon-development-kit for a while, but in an effort to simplify Workbench, and since we already have an Icon Library app, I decided to remove both in favor of per-project icons.

I'm quite happy with the developer experience there and I hope we can provide something similar in the future to move beyond GtkIconTheme. We probably need to keep icon-name as suggested in Updates from inside GTK, but I'd be very interested in supporting relative paths in GTK Builder. Thankfully, we already have GResource to allow for something like

Gtk.Image {
  src: "./smile-symbolic.svg";
}

7 new Library entries ported to Vala

  • Radio Buttons
  • Switch
  • Revealer
  • Styling with CSS
  • Separator
  • Level Bars
  • Link Button

Also

  • “Animation” Library entry ported to Python
  • Split “List View Widget” Library entry into “List View” and “Grid View”
  • Fix Vala and Rust extensions detection on “Run”
  • List editor shortcuts in Shortcuts

Thank you contributors and GSoC applicants.

April 29, 2024

Moving GPU drivers out of the initramfs

The firmware which drm/kms drivers need is becoming bigger and bigger, and there is a push to move to generating a generic initramfs on distros' builders and signing the initramfs with the distro's keys for security reasons. When targeting desktops/laptops (as opposed to VMs) this means including firmware for all possible GPUs, which leads to a very big initramfs.

This has made me think about dropping the GPU drivers from the initramfs and instead making plymouth work well/better with simpledrm (on top of efifb). A while ago I discussed making this change for Fedora with the Red Hat graphics team. Spoiler: for now nothing is going to change.

Let me repeat that: For now there are no plans to implement this idea so if you believe you would be impacted by such a change: Nothing is going to change.

Still this is something worthwhile to explore further.

Advantages:

1. Smaller initramfs size:

* E.g. a host specific initramfs with amdgpu goes down from 40MB to 20MB
* No longer need to worry about Nvidia GSP firmware size in initrd
* This should also significantly shrink the initrd used in liveimages

2. Faster boot times:

* Loading + unpacking the initrd can take a surprising amount of time. E.g. on my old AMD64 embedded PC (with BobCat cores) the reduction of 40MB -> 20MB in initrd size shaves approx. 3 seconds off the initrd load time + 0.6 seconds from the time it takes to unpack the initrd
* Probing drm connectors can be slow and plymouth blocks the initrd -> rootfs transition while it is busy probing

3. Earlier showing of splash. By using simpledrm for the splash the splash can be shown earlier, avoiding the impression the machine is hanging during boot. An extreme example of this is my old AMD64 embedded PC, where the time to show the first frame of the splash goes down from 47 to 9 seconds.

4. One less thing to worry about when trying to create a uniform desktop pre-generated and signed initramfs (these would still need support for nvme + ahci and commonly used rootfs + lvm + luks).
 
Disadvantages:

Doing this will lead to user visible changes in the boot process:

1. Secondary monitors not lit up by the efifb will stay black during full-disk encryption password entry, since the GPU drivers will now only load after switching to the encrypted root. This includes any monitors connected to the non boot GPU in dual GPU setups.

Generally speaking this is not really an issue; the secondary monitors will light up pretty quickly after the switch to the real rootfs. However, when booting a docked laptop with the lid closed, where the only visible monitor(s) are connected to the non-boot GPU, the full-disk encryption password dialog will simply not be visible at all.

This is the main deal-breaker for not implementing this change.

Note that because of the strict version lock between the kernel driver and userspace with the nvidia binary drivers, the nvidia binary drivers are usually already not part of the initramfs, so this problem already exists, and moving the GPU drivers out of the initramfs does not really make it worse.

2. With simpledrm, plymouth does not get the physical size of the monitor, so it will need to switch to heuristics based on the resolution, instead of DPI info, to decide whether or not to use hidpi (e.g. 2x size) rendering. Even when switching to the real GPU driver, plymouth needs to stay with its initial heuristics-based decision, to avoid the scaling changing when switching to the real driver, which would lead to a big visual glitch / change halfway through the boot.

This may result in a different scaling factor for some setups, but I do not expect this really to be an issue.

3. On some (older) systems the efifb will not come up in native mode, but rather in 800x600 or 1024x768.

This will lead to a pretty significant discontinuity in the boot experience when switching from say 800x600 to 1920x1080 while plymouth was already showing the spinner at 800x600.

One possible workaround here is to add 'video=efifb:auto' to the kernel commandline, which will make the efistub switch to the highest available resolution before starting the kernel. But it seems that the native modes are simply not there on systems which come up at 800x600 / 1024x768, so this does not really help.

This does not actually break anything but it does look a bit ugly. So we will just need to document this as an unfortunate side-effect of the change and then we (and our users) will have to live with this (on affected hardware).

4. On systems where a full modeset is done, the monitor going briefly black from the modeset will move from just before plymouth starts to the switch from simpledrm to the real driver. So that is slightly worse. IMHO the answer here is to try and get fast modesets working on more systems.

5. On systems where the efifb comes up in the panel's native mode and a fast modeset can be done, the spinner will freeze for a (noticeable) fraction of a second as the switch to the real driver happens.

Preview:

To get an impression what this will look / feel like on your own systems, you can implement this right now on Fedora 40 with some manual configuration changes:

1. Create /etc/dracut.conf.d/omit-gpu-drivers.conf with:

omit_drivers+=" amdgpu radeon nouveau i915 "

And then run "sudo dracut -f" to regenerate your current initrd.

2. Add to kernel commandline: "plymouth.use-simpledrm"

3. Edit /etc/selinux/config and set SELINUX=permissive. This is necessary because at the moment plymouth has issues with accessing drm devices after the chroot from the initrd to the rootfs.
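After regenerating the initrd in step 1, you can verify the drivers were actually dropped; lsinitrd ships with dracut and inspects the running kernel's initramfs by default. No output means the omission worked:

$ lsinitrd | grep -E 'amdgpu|radeon|nouveau|i915'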

Note this all assumes EFI booting with efifb used to show the plymouth boot splash. For classic BIOS booting it is probably best to stick with having the GPU drivers inside the initramfs.


April 26, 2024

Update from the GNOME board

It’s been around 6 months since the GNOME Foundation was joined by our new Executive Director, Holly Million, and the board and I wanted to update members on the Foundation’s current status and some exciting upcoming changes.

Finances

As you may be aware, the GNOME Foundation has operated at a deficit (nonprofit speak for a loss – i.e. spending more than we’ve been raising each year) for over three years, essentially running the Foundation on reserves from some substantial donations received 4-5 years ago. The Foundation has a reserves policy which specifies a minimum amount of money we have to keep in our accounts. This is so that if there is a significant interruption to our usual income, we can preserve our core operations while we work on new funding sources. We’ve now “hit the buffers” of this reserves policy, meaning the Board can’t approve any more deficit budgets – to keep spending at the same level we must increase our income.

One of the board’s top priorities in hiring Holly was therefore her experience in communications and fundraising, and building broader and more diverse support for our mission and work. Her goals since joining – as well as building her familiarity with the community and project – have been to set up better financial controls and reporting, develop a strategic plan, and start fundraising. You may have noticed the Foundation being more cautious with spending this year, because Holly prepared a break-even budget for the Board to approve in October, so that we can steady the ship while we prepare and launch our new fundraising initiatives.

Strategy & Fundraising

The biggest prerequisite for fundraising is a clear strategy – we need to explain what we’re doing and why it’s important, and use that to convince people to support our plans. I’m very pleased to report that Holly has been working hard on this and meeting with many stakeholders across the community, and has prepared a detailed and insightful five year strategic plan. The plan defines the areas where the Foundation will prioritise, develop and fund initiatives to support and grow the GNOME project and community. The board has approved a draft version of this plan, and over the coming weeks Holly and the Foundation team will be sharing this plan and running a consultation process to gather feedback and input from GNOME Foundation and community members.

In parallel, Holly has been working on a fundraising plan to stabilise the Foundation, growing our revenue and ability to deliver on these plans. We will be launching a variety of fundraising activities over the coming months, including a development fund for people to directly support GNOME development, working with professional grant writers and managers to apply for government and private foundation funding opportunities, and building better communications to explain the importance of our work to corporate and individual donors.

Board Development

Another observation that Holly had since joining was that we had, by general nonprofit standards, a very small board of just 7 directors. While we do have some committees which have (very much appreciated!) volunteers from outside the board, our officers are usually appointed from within the board, and many board members end up serving on multiple committees and wearing several hats. It also means the number of perspectives on the board is limited and less representative of the diverse contributors and users that make up the GNOME community.

Holly has been working with the board and the governance committee to reduce how much we ask from individual board members, and improve representation from the community within the Foundation’s governance. Firstly, the board has decided to increase its size from 7 to 9 members, effective from the upcoming elections this May & June, allowing more voices to be heard within the board discussions. After that, we’re going to be working on opening up the board to more participants, creating non-voting officer seats to represent certain regions or interests from across the community, and take part in committees and board meetings. These new non-voting roles are likely to be appointed with some kind of application process, and we’ll share details about these roles and how to be considered for them as we refine our plans over the coming year.

Elections

We’re really excited to develop and share these plans and increase the ways that people can get involved in shaping the Foundation’s strategy and how we raise and spend money to support and grow the GNOME community. This brings me to my final point: we’re in the run-up to the annual board elections, which take place ahead of GUADEC. Because of the expansion of the board, and four directors coming to the end of their terms, we’ll be electing 6 seats this election. It’s really important to Holly and the board that we use this opportunity to bring some new voices to the table, leading by example in growing and better representing our community.

Allan wrote in the past about what the board does and what’s expected from directors. As you can see, we’re working hard on reducing what we ask from each individual board member by increasing the number of directors and bringing additional members into committees and non-voting roles. If you’re interested in seeing more diverse backgrounds and perspectives represented on the board, I would strongly encourage you to consider standing for election, and to reach out to a board member to discuss their experience.

Thanks for reading! Until next time.

Best Wishes,
Rob
President, GNOME Foundation

Update 2024-04-27: It was suggested in the Discourse thread that I clarify the interaction between the break-even budget and the 1M EUR committed by the STF project. This money is received in the form of a contract for services rather than a grant to the Foundation, and must be spent on the development areas agreed during the planning and application process. It’s included within this year’s budget (October 23 – September 24) and is all expected to be spent during this fiscal year, so it doesn’t have an impact on the Foundation’s reserves position. The Foundation retains a small % fee to support its costs in connection with the project, including the new requirement to have our accounts externally audited at the end of the financial year. We are putting this money towards recruitment of an administrative assistant to improve financial and other operational support for the Foundation and community, including the STF project and future development initiatives.

(also posted to GNOME Discourse, please head there if you have any questions or comments)

April 25, 2024

Small GLAM Slam Pilot 1 project update

Wikibase logotype

This is a project update for the SGS Pilot 1 project. This is a WMF-funded project (ID: 22444585).

Two relevant places have been created:

You can observe that we are now using the coined term «Very Small GLAM». This will be the activity scope from now on, as we consider it short and precise. The Small GLAM Slam denomination will be kept for the ID: 22444585 project.

What does the Very Small GLAM term refer to? It identifies GLAM entities of very small size. How small? We got the inspiration from the concept of VSE (very small entities) coined for the ISO/IEC 29110 series for software development entities of up to 25 members. To set a focus we, a bit arbitrarily, chose the number 5, as in «up to 5» members of an institution or team working in GLAM. Very Small GLAM is not circumscribed to Open GLAM, but Open GLAM would probably be the best approach or complement for these teams without many resources.

Wikimedia LEADS logotype

Wikimedia LEADS spin-off

An unexpected spin-off has been the conceptualization of a new initiative, Wikimedia LEADS (Learning Ecosystems and Ameliorating Data Space), created when attending the EU’s Next Generation Internet (NGI) funding call. The goal is to develop an advanced learning free/open data space and software ecosystem for the Wikimedia Movement.

Wikimedia LEADS’ first goal is to address the GLAM Wiki learning needs. GLAM Wiki also shares a lot of commonalities with the Europeana community.

By extension, all practices, tools, and many of the specific contents will be applicable to all other areas of human knowledge.

Project’s activity areas

The SGS Pilot 1 is now structured in these work-packages:

  • WP1—IT system developed with the NAS killer concept with a free software stack for GLAM;
  • WP2—configuration for a locally installed Wikibase suite;
  • WP3—GLAM ontologies and vocabularies;
  • WP4—GLAM practices;
  • WP5—data import.

WP1—IT system, update

The selected operating system is UNRAID. The technical justifications are:

  • it’s a Linux distribution;
  • it features the ZFS file system, probably the best alternative for data preservation;
  • it has a graphical administration interface;
  • and it runs on any PC-compatible hardware.

At this point the most important update about UNRAID is that, since the grant approval, they have changed their licensing model and fees.

Current project hardware details are:

  • we received a donated HP Z400 system, which happily includes an SSD disk;
  • we procured 24 GB of ECC RAM (PC3-10600E), the maximum supported by the board.

In the next days we’ll buy the three HGST hard drives and other minor components for the first tests.

For the system configuration development phase, we’ll use another, borrowed, computer as a test server.

Software systems

This is a proposal of software architecture for a local installation of Wikibase in a GLAM context:

There are no advances to report about software.

WP2—Wikibase suite configuration, update

As we are still not familiar with the Wikibase ecosystem, we are practicing by setting up some instances on Wikibase.cloud. Also, we are starting to identify relevant Wikibase features. We are using wikibase.world as an inventory of:

WP3—GLAM ontologies and vocabularies, update

Here we have the juiciest results for the moment. After researching papers, we identified the CIDOC Conceptual Reference Model (CIDOC-CRM) as an international reference model for museums. It becomes more relevant when you find it is being used as a reference for mapping or extending to other domains like CRMdig (digitization) and CRMsoc (social phenomena and constructs), which are relevant for the «Memorias del Cine» archive. Very relevant is the availability of a CRM OWL ontology (non-official, but apparently up to date) and some minor Wikidata mapping (https://w.wiki/9r$s, 24 items). Also, we identified the Records in Contexts–Conceptual Model (RiC-CM), whose ontology is also published in OWL format. RiC-CM is a reference for archival work, and we found initial works on a CRM <-> RiC-CM mapping. The current mapping with Wikidata is anecdotal (https://w.wiki/9sAh, 4 items).

In the context of models of practices, we are learning about SEMAT Essence. The formal specifications are expressed in text and in a UML metamodel file (.xmi). The concept of a metamodel is practically equivalent to a LOD ontology. It took a while, but now we know more about how to manage these XMI formats using Magic Draw. The plan is to import the ontology into Wikibase using the same tools as for CRM and related models. We found the Essence «Package Competency» model is relevant for populating a map of competences/abilities for the Movement, as Wikimedia LEADS proposes.

For managing this information we are getting familiar with tools like Protégé, Fuseki, Magic Draw and some others.

A very happy discovery has been a couple of sets of tools for mapping and importing ontologies based on CIDOC-CRM into Wikibase. They are outputs of the SAF-Lux and GeoKB projects. We expect to make intensive use of them or their derivatives.

An open question is: do we need to create a new Wikidata property for a CRM identifier? Probably yes. I’m keeping some notes about ontologies for archival work in my Wikidata user page.

WP4—GLAM practices, update

There is not much real work on this side yet, since we are not ready to start modeling with Essence. But we are collecting some relevant bibliography for the project scope:

  • C. Matos, Manual práctico para la digitalización de colecciones para difusión digital, 2022.
  • A. Salvador Benítez, Ed., Patrimonio fotográfico: de la visibilidad a la gestión. en Biblioteconomía y administración cultural, no. 280. Gijón: Trea, 2015.
  • J. M. Sánchez Vigil, A. Salvador Benítez, y M. Olivera Zaldua, Colecciones y fondos fotográficos: criterios metodológicos, estrategias y protocolos de actuación, Primera edición. en Museología y patrimonio cultural. Gijón: Ediciones Trea, 2022.
  • Collections Trust, Spectrum 5.1: UK Collections Management Standard, 2022.
  • Centro de Fotografía de Montevideo, Guía del archivo fotográfico, 2017.
  • L. Bountouri, Archives in the digital age: standards, policies and tools. en Chandos Information Professional Series. Cambridge, MA: Chandos Publishing, an imprint of Elsevier, 2017.

WP5—data import, update

There has been no activity. We’ll start the data import when we have a first server operating.

Dissemination

Strictly focused on the SGS Pilot 1:

Related to Wikimedia LEADS:

What’s next

In the next days we’ll procure the pending hardware components to set up the server prototype. Then we’ll define the configuration and procedure to set up a UNRAID server instance ready for data preservation tasks. Then we’ll migrate the multimedia archive of «Memorias del Cine» to the server. The fun part, cataloging the archive in Wikibase, will start as soon as we have a stable ontology model for digital archives.

Also, in May I’ll be attending the Wikimedia Hackathon in Tallinn and the AI Sauna in Helsinki. Reach me there in person if you are interested in our work.

PS: Adding references to WP5 (20240430).

April 24, 2024

23 Apr 2024

Embeddable Game Engine

Many years ago, when working at Xamarin, where we were building cross-platform libraries for mobile developers, we wanted to offer both 2D and 3D gaming capabilities for our users in the form of adding 2D or 3D content to their mobile applications.

For 2D, we contributed and developed assorted Cocos2D-inspired libraries.

For 3D, the situation was more complex. We funded a few engines and contributed to others over the years, but nothing panned out (the history of this is worth a dedicated post).

Around 2013, we looked around, and there were two contenders at the time: one was Urho, an embeddable engine with many cute features but not great UI support, and the other was Godot, which had a great IDE but did not support being embedded.

I reached out to Juan at the time to discuss whether Godot could be turned into such an engine. While I tend to take copious notes of all my meetings, those notes sadly were lost as part of the Microsoft acquisition, but from what I can remember Juan told me "Godot is not what you are looking for" on two counts: there were no immediate plans to turn it into an embeddable library, and it was not as advanced as Urho. So he recommended that I go with Urho.

We invested heavily in binding Urho and created UrhoSharp, which went on to become a great 3D library for our C# users. It worked not only on every desktop and mobile platform; we also did a ton of work to make it great for AR and VR headsets. Sadly, Microsoft's management left UrhoSharp to die.

Then, the maintainer of Urho stepped down, and Godot became one of the most popular open-source projects in the world.

Last year, @Faolan-Rad contributed a patch to Godot to turn it into a library that could be embedded into applications. I used this library to build SwiftGodotKit and have been very happy with it ever since; it allows people to embed Godot content into their applications.

However, the patch had severe limitations; it could only ever run one Godot game as an embedded system and could not do much more. The folks at Smirk Software wanted to take this further. They wanted to host independent Godot scenes in their app and have more control over them, so they could sprinkle Godot content to their heart's content in their mobile app (demo).

They funded some initial work and hired Gergely Kis's company to carry it out.

Gergely demoed this work at GodotCon last year. I came back very excited from GodotCon and I decided to turn my prototype Godot on iPad into a complete product.

One of the features that I needed was the ability to embed chunks of Godot in discrete components in my iPad UI, so we worked with Gergely to productize and polish this patch for general consumption.

Now, there is a complete patch under review to allow people to embed arbitrary Godot scenes into their apps. For SwiftUI users, this means that you can embed a Godot scene into a View and display and control it at will.

Hopefully, the team will accept this change into Godot, and once this is done, I will update SwiftGodotKit to get these new capabilities to Swift users (bindings for other platforms and languages are left as an exercise to the reader).

It only took a decade after talking to Juan, but I am back firmly in Godot land.

April 23, 2024

Notifications in 46 and beyond

One of the things we’re tackling as part of the STF infrastructure initiative is improving notifications. Other platforms have advanced significantly in this area over the past decade, while we still have more or less the same notifications we’ve had since the early GNOME 3 days, both in terms of API and feature set. There’s plenty to do here 🙂

The notification drawer on GNOME 45

Modern needs

As part of the effort to port GNOME Shell to mobile Jonas looked into the delta between what we currently support and what we’d need for a more modern notification experience. Some of these limitations are specific to GNOME’s implementation, while others are relevant to all desktops.

Tie notifications to apps

As of GNOME 45 there’s no clear identification on notification bubbles of which app sent them. Sometimes it’s hard to tell where a notification is coming from, which can be annoying when managing notifications in Settings. This also has potential security implications, since the lack of identification makes it trivial to impersonate other apps.

We want all notifications to be clearly identified as coming from a specific app.

Global notification sounds

GNOME Shell can’t play notification sounds in all cases, depending on the API the app is using (see below). Apps not primarily targeting GNOME Shell tend to play sounds themselves because they can’t rely on the system always doing it (it’s an optional feature of the XDG Notification API which different desktops handle differently). This works, but it’s messy for app developers because it’s hard to test, and they have to implement a fallback sound played by the app. From a user perspective it’s annoying that you can’t always tell where sounds are coming from, because they’re not necessarily tied to a notification bubble. There’s also no central place to manage the notification behavior, and it doesn’t respect Do Not Disturb.
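For illustration, the spec’s optional sound-name hint can be attached from the command line with notify-send; whether anything actually plays depends on the desktop (the sound name is taken from the XDG sound naming spec):

$ notify-send -h string:sound-name:message-new-instant "Ping" "A notification with a sound hint"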

Notification grouping

Currently all notifications are just added to a single chronological list, which gets messy very quickly. In order to limit the length of the list we only keep the latest 3 notifications for every app, so notifications can disappear before you have a chance to act on them.

Other platforms solve this by grouping notifications by app, or even by message thread, but we don’t have anything like this at the moment.

Notifications grouped by app on the iOS lock screen

Expand media support

Currently each notification bubble can only contain one (small) image. It’s mostly used for user avatars (for messages, emails, and the like), but sometimes also for actual content (e.g. a thumbnail for the image someone sent).

Ideally what we want is to be able to show larger images in addition to avatars, as the actual content of the notification.

As of GNOME 45 we only have a single slot for images on notifications, and it’s too small for actual content.
Other platforms have multiple slots (app icon, user avatar, and content image), and media can be expanded to much larger sizes.

There’s also currently no way to include descriptive text for images in notifications, so they are inaccessible to screen readers. This isn’t as big a deal with the current icons since they’re small and mostly used for ornamental purposes, but will be important when we add larger images in the body.

Updating notification content

It’s not possible for apps to update the content inside notifications they sent earlier. This is needed to show progress bars in notifications, or to update the text if a chat message was modified.

How do we get there?

Unfortunately, it turns out that improving notifications is not just a matter of standardizing a few new features and implementing them in GNOME Shell. The way notifications work today has grown organically over the years and the status quo is messy. There are three different APIs used by apps today: XDG Notification, Gio.Notification, and XDG Portal.

How different notification APIs are used today

XDG Notification

This is the Freedesktop specification for a DBus interface for apps to send notifications to the system. It’s the oldest notification API still in use. Other desktops mostly use this API, e.g. KDE’s KNotification implements this spec.

Somewhat confusingly, this standard has never actually been finalized and is still marked as a draft today, despite not having seen significant changes in the past decade.
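For reference, sending a minimal notification over this interface looks like the following; the app name, icon, and contents are arbitrary, and the reply is the new notification's ID:

$ gdbus call -e -d org.freedesktop.Notifications \
-o /org/freedesktop/Notifications \
-m org.freedesktop.Notifications.Notify \
"demo-app" 0 "dialog-information" "Hello" "Sent via the XDG interface" \
"[]" "{}" 5000
(uint32 1,)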

Gio.Notification

This is an API in GLib/Gio to send notifications, so it’s only used by GTK apps. It abstracts over different OS notification APIs, primarily the XDG one mentioned above, a private GNOME Shell API, the portal API, and Cocoa (macOS).

The primary one being used is the private DBus interface with GNOME Shell. This API was introduced in the early GNOME 3 days because the XDG standard API was deemed too complicated and was missing some features (in particular notifications were not tied to a specific app).

When using Gio.Notification apps can’t know which backend is used, and how a notification will be displayed or behave. For example, notifications can only persist after the app is closed if the private GNOME Shell API is used. These differences are specific to GNOME Shell, since the private API is only implemented there.

XDG Portal

XDG portals are secure, standardized system APIs for the Linux desktop. They were introduced as part of the push for app sandboxing around Flatpak, but can (and should) be used by non-sandboxed apps as well.

The XDG notification portal was inspired by the private GNOME Shell API, with some additional features from the XDG API mixed in.

XDG portals consist of a frontend and a backend. In the case of the notification portal, apps talk to the frontend using the portal API, while the backend talks to the system notification API. Backends are specific to the desktop environment, e.g. GNOME or KDE. On GNOME, the backend uses the private GNOME Shell API when possible.
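As a sketch of the shape of that frontend API (the ID and contents here are arbitrary), an app adds a notification like this:

$ gdbus call -e -d org.freedesktop.portal.Desktop \
-o /org/freedesktop/portal/desktop \
-m org.freedesktop.portal.Notification.AddNotification \
"demo" '{"title": <"Hello">, "body": <"Sent via the notification portal">}'
()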

The plan

From the GNOME Shell side we have the XDG API (used by non-GNOME apps), and the private API (used via Gio.Notification by GNOME apps). From the app side we additionally have the XDG portal API. None of these can easily supersede the others, because they all have different feature sets and are widely used. This makes improving our notifications tricky, because it’s not obvious which of the APIs we should extend.

After several discussions over the past few months we now have consensus that it makes the most sense to invest in the XDG portal API. Portals are the future of system APIs on the free desktop, and enable app sandboxing. Neither of the other APIs can fill this role.

Our plan for notification APIs going forward: Focus on the portal API

This requires work in a number of different modules, including the XDG portal spec, the XDG portal backend for GNOME, GNOME Shell, and client libraries such as Gio.Notification (in GLib), libportal, libnotify, and ashpd.

In the XDG portal spec, we are adding support for a number of missing features:

  • Tying notifications to apps
  • Grouping by message thread
  • Larger images in the notification body
  • Special notifications for e.g. calls and alarms
  • Clearing up some instances of undefined behavior (e.g. markup in the body, playing sounds, whether to show notifications on the lock screen, etc.)

This is the draft XDG desktop portal proposal for the spec changes.

On the GNOME Shell side, these are the primary things we’re doing (some already done in 46):

  • Cleanups and refactoring to make the code easier to work on
  • Improve keyboard navigation and screen reader accessibility
  • Header with app name and icon
  • Show full notification body and buttons in the drawer
  • Larger notification icons (e.g. user avatars on chat notifications)
  • Group notifications from the same app as a stack
  • Allow message threads to be grouped in a single notification bubble
  • Larger images in the notification body
Mockups of what we’d ideally want, including grouping by app, threading, etc.

There are also animated mockups for some of this, courtesy of Jakub Steiner.

The long-term goal is for apps to switch to the portal API and deprecate both of the others as application-facing APIs. Internally we will still need something to communicate between the portal backend and GNOME Shell, but this isn’t public API so we’re much more flexible here. We might expand either the XDG API or the private GNOME Shell protocol for this purpose, but it has not been decided yet how we’ll do this.

What we did in GNOME 46

When we started the STF project late last year, we thought we could just pull the trigger on a draft proposal by Jonas for an API with the new capabilities needed for mobile. However, as we started discussing things in more detail, we realized that this was the wrong place to start. GNOME Shell already didn’t implement a number of features that are in the XDG notification spec, so standardizing new features was not the main blocker.

The code around notifications in GNOME Shell has grown historically and has seen multiple major UI redesigns since GNOME 3.0. Additional complexity comes from the fact that we try to avoid breaking extensions, which means it’s difficult to e.g. change function names or signatures. Over time this has resulted in technical debt, such as weird anachronistic structures and names. It was also not using many of the more recent GJS features which didn’t exist yet when this code was written originally.

Anyone remember that notifications used to be on the bottom? This is what they looked like in GNOME 3.6 (2012).

As a first step we restructured and cleaned up legacy code, ported it to the most recent GJS features, updated the coding style, and so on. This unfortunately means extensions need to be updated, but it puts us on much firmer ground for the future.

With this out of the way we added the first batch of features from our list above, namely adding notification headers, expanding notifications in the drawer, larger icons, and some style fixes to icons. We also fixed a very annoying issue with “App is ready” notifications not working as expected when clicking a notification (!3198 and !3199).

We also worked on a few other things that didn’t make it in time for 46, most notably grouping notifications by app (which there’s a draft MR for), and additionally grouping them by thread (prototype only).

Throughout the cycle we also continued to discuss the portal spec, as mentioned above. There are MRs against XDG desktop portal and the libportal client library implementing the spec changes. There’s also a draft implementation for the GTK portal backend.

Future work

With all the groundwork laid in GNOME 46 and the spec draft mostly ready we’re in a good position to continue iterating on notifications in 47 and beyond. In GNOME 47 we want to add some of the first newly spec’d features, in particular notification sounds, markup support in the body, and display hints (e.g. showing on the lock screen or not).

We also want to continue work on the UI to unlock even more improvements in the future. In particular, grouping by app will allow us to drop the “only keep 3 notifications per app” behavior and will generally make notifications easier to manage, e.g. by letting users dismiss all notifications from a given app. We’re also planning to work on improving keyboard navigation and ensuring all content is accessible to screen readers.

Due to the complex nature of the UI for grouping by app, and the many moving parts involved in moving the spec forward, it’s unclear whether we’ll be able to do more than this within the scope of STF and the 47 cycle. This means that additional features that require the new spec and/or lots of UI work, such as grouping by thread and custom UI for call or alarm notifications, will probably be 48+ material.

Conclusion

As we hope this post has illustrated, notifications are way more complex than they might appear. Improving them requires untangling decades of legacy stuff across many different components, coordinating with other projects, and engaging with standards bodies. That complexity has made this hard to work on for volunteers, and there has not been any recent corporate interest in the area, which is why it has been stagnant for some time.

The Sovereign Tech Fund investment has allowed us to take the time to properly work through the problem, clean up technical debt, and make a plan for the future. We hope to leverage this momentum over the coming releases, for a best-in-class notification experience on the free desktop. Stay tuned 🙂

April 21, 2024

C is dead, long live C (APIs)

In the 80s and 90s the software development landscape was quite different from today (or so I have been told). Everything that needed performance was written in C and things that did not were written in Perl. Because computers of the time were really slow, almost everything was in C. If you needed performance and fast development, you could write a C extension to Perl.

As C was the only game in town, anyone could use pretty much any other library directly. The number of dependencies available was minuscule compared to today, but you could use all of them fairly easily. Then things changed, as they have a tendency to do. First Python took over from Perl. Then more and more languages started eroding C's dominant position. This led to a duplication of effort. For example, if you were using Java and wanted to parse XML (which was the coolness of its day), you'd need an XML parser written in Java. Just dropping libxml in your Java source tree would not cut it (you could still use native code libs, but most people chose not to).

The number of languages and ecosystems kept growing and nowadays we have dozens of them. But suppose you want to provide a library that does something useful and you'd like it to be usable by as many people as possible. This is especially relevant for providing closed source libraries but the same applies to open source libs as well. You especially do not want to rewrite and maintain multiple implementations of the code in different languages. So what do you do?

Let's start by going through a list of programming languages and seeing what sort of dependencies they can use natively (i.e. the toolchain or stdlib provides this support out of the box rather than requiring an addon, code generator, IDL tool or the like):

  • C: C
  • Perl: Perl and C
  • Python: Python and C
  • C++: C++ and C
  • Rust: Rust and C
  • Java: Java and C
  • Lua: Lua and C
  • D: D, subset of C++ and C
  • Swift: Swift, Objective C, C++ (eventually?) and C
  • PrettyMuchAnyNewLanguage: itself and C

The message is quite clear. The only thing they all have in common is C, so that is what you have to use. The alternative is maintaining an implementation per language, leaving any language you don't explicitly support out in the cold.

So even though C as a language is (most likely) going away, C APIs are not. In fact, designing C APIs is a skill that might even see a resurgence as the language ecosystem fractures even further. Note that providing a library with a C API does not mean having to implement it in C. All languages have ways of providing libraries whose external API is compatible with C. As an extreme example, Visual Studio's C runtime libraries are nowadays written in C++.

CapyPDF's design and things picked up along the way

One of the main design goals of CapyPDF was that it should provide a C API and be usable from any language. It should also (eventually) provide a stable API and ABI. This means that the ground truth of the library's functionality is the C header. This turns out to have design implications for the library's internals that might be difficult to bolt on after the fact.

Hide everything

Perhaps the most important declaration in widely usable C headers is this.

typedef struct _someObject SomeObject;

In C parlance this means "there is a struct type _someObject somewhere; create an alias to it called SomeObject". This means that the caller can create pointers to structs of type SomeObject but do nothing else with them. This leads to the common "opaque struct" C API way of doing things:

SomeObject *o = some_object_new();
some_object_do_something(o, "hello");
some_object_destroy(o);

This permits you to change the internal representation of the object while still maintaining stable public API and ABI. Avoid exposing the internals of structs whenever possible, because once made public they can never be changed.

Objects exposed via pointers must never move in memory

This one is fairly obvious when you think about it. Unfortunately it means that if you want to give users access to objects that are stored in an std::vector, you can't do it with pointers, which would be the natural way of doing things in C. Pushing more entries into the vector will eventually cause its capacity to be exceeded, so the storage will be reallocated and the entries moved to the new backing store. This invalidates all pointers.

There are several solutions to this, but the simplest one is to access those objects via type safe indices instead. They are defined like this:

typedef struct { int32_t id; } SomeObjectId;

This struct behaves "like an integer" in that you can pass it around by value like an int, but it does not implicitly convert to any other "integer" type.
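
In the public header these indices then take the place of pointers. A hypothetical sketch (illustrative declarations, not CapyPDF's actual functions):

typedef struct { int32_t id; } FontId;
typedef struct { int32_t id; } ImageId;

/* The library hands out an id; the caller passes it back in later. */
FontId document_load_font(Document *doc, const char *path);
void ctx_set_font(DrawContext *ctx, FontId font, double point_size);

/* Passing an ImageId where a FontId is expected is a compile-time
   error, which a bare int32_t would silently allow. */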

Objects must be destructible in any order

It is easy to write in the documentation that "objects of type X must be destroyed before any object Y that they use". Unfortunately garbage-collected languages do not read your docs and thus provide no guarantee whatsoever on object destruction order. So any object must be destructible at any time, regardless of the state of any other object.

This is the opposite of how modern languages want to work. In CapyPDF's case, page draw contexts especially were done in an RAII style, where they would submit their changes upon destruction. For an internal API this is nice and usable, but for a public C API it is not. The implicit action had to be replaced with an explicit function to add the page, which takes both object pointers (the draw context and the document) as arguments. This ensures that both must exist and be valid at the point of call.
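
As a sketch, with hypothetical function names, the explicit flow looks like this:

DrawContext *ctx = document_new_page_context(doc);
/* ... issue draw commands on ctx ... */
document_add_page(doc, ctx);  /* explicit commit; both must be valid here */
draw_context_destroy(ctx);    /* destruction alone no longer commits anything */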

Use transactionality whenever possible

It would be nice if all objects were immutable but sadly that would mean that you can't actually do anything. A library must provide ways for end users to create, mutate and destroy objects. When possible try to do this with a builder object. That is, the user creates a "transactional change" that they want to do. They can call setters and such as much as they want, but they don't affect the "actual document". All of this new state is isolated in the builder object. Once the user is finished they submit the change to the main object which is then validated and either rejected or accepted as a whole. The builder object then becomes an empty shell that can be either reused or discarded.
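
Sketched as C calls (names are illustrative, not the actual API):

OptionsBuilder *b = options_builder_new();
options_builder_set_title(b, "Annual report");
options_builder_set_author(b, "ACME Corp");
/* Nothing has touched the document yet; all state lives in the builder. */
document_apply_options(doc, b);  /* validated, then accepted or rejected as a whole */
/* On success the builder is an empty shell: reuse it or destroy it. */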

CapyPDF is an append only library. Once something has been "committed" it can never be taken out again. This is also something to strive towards, because removing things is a lot harder than adding them.

Prefer copying to sharing

When the library is given some piece of data, it makes a private copy of it. Otherwise it would need to coordinate the life cycle of the shared piece of data with the caller. This is where bugs lie. Copying does cost some performance but makes a whole class of difficult bugs just go away. In the case of CapyPDF the performance hit turned out not to be an issue since most of the runtime is spent compressing the output with zlib.
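
On the implementation side this means every setter duplicates its argument; a minimal sketch, assuming a hidden struct with a name field:

#include <stdlib.h>
#include <string.h>

struct _someObject { char *name; };  /* internal, never exposed in the header */

int some_object_set_name(SomeObject *o, const char *name) {
    char *copy = strdup(name);       /* private copy; caller keeps ownership */
    if (!copy)
        return -1;                   /* allocation failure */
    free(o->name);
    o->name = copy;
    return 0;
}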

Every function call can fail, even those that can't

Every function in the library returns an error code, even those that have no way of failing, because circumstances can change in the future. Maybe some input that used to accept anything now needs to be validated, and you can't change the function signature without breaking the API. Thus every function returns an error code (except the one that converts an error code into an error string). Sadly this means that all "return values" must be handled via out parameters.

ErrorCode some_object_new(SomeObject **out_ptr);

This is not great, but such is life. 
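
On the caller's side this results in the familiar check-every-call pattern; a sketch, with error_code_to_string standing in for that one infallible string-conversion function (the name is illustrative):

SomeObject *o = NULL;
ErrorCode rc = some_object_new(&o);
if (rc != 0) {   /* assuming 0 means success */
    fprintf(stderr, "error: %s\n", error_code_to_string(rc));
    return rc;
}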

Think of C APIs as "in-process RPC"

When designing the API of CapyPDF it was helpful to think of it like a call to a remote endpoint somewhere out there on the Internet. This makes you want to design functions that are as high level as possible and try to ignore all implementation details you can, almost as if the C API was a slightly cumbersome DSL. 

April 18, 2024

udev-hid-bpf: quickstart tooling to fix your HID devices with eBPF

For the last few months, Benjamin Tissoires and I have been working on and polishing a little tool called udev-hid-bpf [1]. This is the scaffolding required to quickly and easily write, test and eventually fix your HID input devices (mouse, keyboard, etc.) via a BPF program instead of a full-blown custom kernel driver or a semi-full-blown kernel patch. To understand how it works, you need to know two things: HID and BPF [2].

Why BPF for HID?

HID is the Human Interface Device standard and the most common way input devices communicate with the host (HID over USB, HID over Bluetooth, etc.). It has two core components: the "report descriptor" and "reports", both of which are byte arrays. The report descriptor is a fixed, burnt-into-ROM byte array that (in rather convoluted terms) tells us what we'll find in the reports. Things like "bits 16 through 24 are the delta x coordinate" or "bit 5 is the binary button state for button 3 in degrees celsius". The reports themselves are sent at (usually) regular intervals and contain the data in the described format, as the device perceives reality. If you're interested in more details, see Understanding HID report descriptors.

BPF, or more correctly eBPF, is a Linux kernel technology for writing programs in a subset of C, compiling them and loading them into the kernel. The magic thing here is that the kernel will verify the program, so once loaded, it is "safe". And because it's safe it can run in kernel space, which means it's fast. eBPF was originally written for network packet filters, but as of kernel v6.3, and thanks to Benjamin, we have BPF in the HID subsystem. HID actually lends itself really well to BPF because, well, we have a byte array, and to fix our devices we need to do complicated things like "toggle that bit to zero" or "swap those two values".

If we want to fix our devices we usually need to do one of two things: fix the report descriptor to enable/disable/change some of the values the device pretends to support. For example, we can say we support 5 buttons instead of the supposed 8. Or we need to fix the report by e.g. inverting the y value for the device. This can be done in a custom kernel driver but a HID BPF program is quite a lot more convenient.

HID-BPF programs

For illustration purposes, here's the example program to flip the y coordinate. HID BPF programs are usually device specific: we need to know that, e.g., the y coordinate is 16 bits and sits in bytes 3 and 4 (little endian):

SEC("fmod_ret/hid_bpf_device_event")
int BPF_PROG(hid_y_event, struct hid_bpf_ctx *hctx)
{
	s16 y;
	__u8 *data = hid_bpf_get_data(hctx, 0 /* offset */, 9 /* size */);

	if (!data)
		return 0; /* EPERM check */

	y = data[3] | (data[4] << 8);
	y = -y;

	data[3] = y & 0xFF;
	data[4] = (y >> 8) & 0xFF;

	return 0;
}
  
That's it. HID-BPF is invoked before the kernel handles the HID report/report descriptor, so to the kernel the modified report looks as if it came from the device.

As said above, this is device specific because where the coordinates sit in the report depends on the device (the report descriptor will tell us). In this example we want to ensure the BPF program is only loaded for our device (vid/pid of 04d9/a09f), and for extra safety we also double-check that the report descriptor matches.

// The bpf.o will only be loaded for devices in this list
HID_BPF_CONFIG(
	HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, 0x04D9, 0xA09F)
);

SEC("syscall")
int probe(struct hid_bpf_probe_args *ctx)
{
	/*
	* The device exports 3 interfaces.
	* The mouse interface has a report descriptor of length 71.
	* So if report descriptor size is not 71, mark as -EINVAL
	*/
	ctx->retval = ctx->rdesc_size != 71;
	if (ctx->retval)
		ctx->retval = -EINVAL;

	return 0;
}
Obviously the check in probe() can be as complicated as you want.

This is pretty much it; the full working program only has a few extra includes and some boilerplate. So it mostly comes down to compiling and running it, and this is where udev-hid-bpf comes in.

udev-hid-bpf as loader

udev-hid-bpf is a tool to make the development and testing of HID BPF programs simple, and to collect HID BPF programs. You basically run meson compile and meson install and voilà, whatever BPF program applies to your devices will be auto-loaded the next time you plug those in. If you just want to test a single bpf.o file you can run udev-hid-bpf install /path/to/foo.bpf.o and it will install the required udev rule for it to get loaded whenever the device is plugged in. If you don't know how to compile, you can grab a tarball from our CI and test the pre-compiled bpf.o. Hooray, even simpler.

udev-hid-bpf is written in Rust, but you don't need to know Rust, it's just the scaffolding. The BPF programs are all in C. Rust just gives us a relatively easy way to provide a static binary that will work on most testers' machines.

The documentation for udev-hid-bpf is here. So if you have a device that needs a hardware quirk or just has an annoying behaviour that you always wanted to fix, well, now's the time. Fixing your device has never been easier! [3].

[1] Yes, the name is meh but you're welcome to come up with a better one and go back in time to suggest it a few months ago.
[2] Because I'm lazy the terms eBPF and BPF will be used interchangeably in this article. Because the difference doesn't really matter in this context, it's all eBPF anyway but nobody has the time to type that extra "e".
[3] Citation needed

April 17, 2024

Graphics offload revisited

We first introduced support for dmabufs and graphics offload last fall, and it is included in GTK 4.14. Since then, some improvements have happened, so it is time for an update.

Improvements down the stack

The GStreamer 1.24 release has improved support for explicit modifiers, and the GStreamer media backend in GTK has been updated to request dmabufs from GStreamer.

Another thing that happens on the GStreamer side is that dmabufs sometimes come with padding: in that case GStreamer will give us a buffer with a viewport and expect us to only show that part of the buffer. This is sometimes necessary to accommodate stride and size requirements of hardware decoders.

GTK 4.14 supports this when offloading, and only shows the part of the dmabuf indicated by the viewport.

Improvements inside GTK

We’ve merged new GSK renderers for GTK 4.14. The new renderers support dmabufs in the same way as the old gl renderer. In addition, the new Vulkan renderer produces dmabufs when rendering to a texture.

In GTK 4.16, the GtkGLArea widget will also provide dmabuf textures if it can, so you can put it in a GtkGraphicsOffload widget to send its output directly to the compositor.
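
Wiring that up takes only a few lines; a sketch, assuming an existing window (GTK 4.14 for GtkGraphicsOffload, 4.16 for the GtkGLArea dmabuf path):

/* Sketch: wrap a GtkGLArea in GtkGraphicsOffload so its dmabufs
 * can be handed to the compositor directly. "window" is assumed. */
GtkWidget *area = gtk_gl_area_new ();
GtkWidget *offload = gtk_graphics_offload_new (area);
gtk_graphics_offload_set_enabled (GTK_GRAPHICS_OFFLOAD (offload),
                                  GTK_GRAPHICS_OFFLOAD_ENABLED);
gtk_window_set_child (GTK_WINDOW (window), offload);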

You can see this in action in the shadertoy demo in gtk4-demo in git main.

Shadertoy demo with golden outline around offloaded graphics

Improved compositor interaction

One nice thing about graphics offload is that the compositor may be able to pass the dmabuf to the KMS apis of the kernel without any extra copies or compositing. This is known as direct scanout and it helps reduce power consumption since large parts of the GPU aren’t used.

The compositor can only do this if the dmabuf is attached to a fullscreen surface and has the right dimensions to cover it fully. If it does not cover it fully, the compositor needs some assurance that it is ok to leave the outside parts black.

One way for clients to provide that assurance is to attach a specially constructed black buffer to a surface below the one that has the dmabuf attached. GSK will now do this if it finds a black color node in the render node tree, and the GtkGraphicsOffload widget will put that color there if you set the “black-background” property. This should greatly increase the chances that you can enjoy the benefits of direct scanout when playing fullscreen video.
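
In code this is a one-liner on the offload widget from the sketch above (available in GTK 4.16):

/* Promise the compositor that everything around the offloaded
 * content may be rendered as plain black. */
gtk_graphics_offload_set_black_background (GTK_GRAPHICS_OFFLOAD (offload), TRUE);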

Developer trying to make sense of graphics offload
Offloaded content with fullscreen black background

In implementing this for GTK 4.16, we found some issues with mutter’s support for single-pixel buffers, but these have been fixed quickly.

To see graphics offload and direct scanout in action in a GTK4 video player, you can try the Light Video Player.

If you want to find out if graphics offload works on your system or debug why it doesn’t, this recent post by Benjamin is very helpful.

Summary

GTK 4 continues to improve for efficient video playback and drives improvements in this area up and down the stack.

A big thank you for pushing all of this forward goes to Robert Mader. ❤

April 16, 2024

From WebKit/GStreamer to rust-av, a journey on our stack’s layers

In this post I’ll try to document the journey starting from a WebKit issue and ending up improving third-party projects that WebKitGTK and WPEWebKit depend on.

I’ve been working on WebKit’s GStreamer backends for a while. Usually some new feature needed on the WebKit side would trigger work on GStreamer. That’s quite common and healthy actually: by improving GStreamer (bug fixes or new features) we make the whole stack stronger (hopefully). It’s not hard to imagine other web engines, such as Servo for instance, leveraging fixes made in GStreamer in the context of WebKit use-cases.

Sometimes though we have to go deeper and this is what this post is about!

Since version 2.44, WebKitGTK and WPEWebKit ship with a WebCodecs backend. That backend leverages the wide range of GStreamer audio and video decoders/encoders to give low-level access to encoded (or decoded) audio/video frames to Web developers. I delivered a lightning talk at gst-conf 2023 about this topic.

There are still some issues to fix regarding performance, and some W3C web platform tests are still failing. The AV1 decoding tests were flagged early on while I was working on WebCodecs; I didn’t have time back then to investigate the failures further, but a couple of weeks ago I went back to those specific issues.

The WebKit layout tests harness is executed by various post-commit bots, on various platforms. The WebKitGTK and WPEWebKit bots run on Linux. The WebCodec tests for AV1 currently make use of the GStreamer av1enc and dav1ddec elements. We currently don’t run the tests using the modern and hardware-accelerated vaav1enc and vaav1dec elements because the bots don’t have compatible GPUs.

The decoding tests were failing, this one for instance (the ?av1 variant). In that test both encoding and decoding are exercised, but decoding was failing, for a couple of reasons. The rabbit hole starts here. After debugging this for a while, it was clear that the colorspace information was lost between the encoded chunks and the decoded frames. The decoded video frames didn’t have the expected colorimetry values.

The VideoDecoderGStreamer class basically takes encoded chunks and hands decoded VideoFrameGStreamer objects to the upper layers (JS) in WebCore. A video frame is basically a GstSample (buffer and caps), and we have code in place to interpret the colorimetry parameters exposed in the sample caps and translate them to the various WebCore equivalents. So far so good, but the caps set on the dav1ddec element didn’t have that information! I thought the dav1ddec element could be fixed (“shouldn’t be that hard”), and I knew that code because I wrote it in 2018 :)

So let’s fix the GStreamer dav1ddec element. It’s a video decoder written in Rust, relying on the dav1d-rs bindings of the popular C libdav1d library. The dav1ddec element basically feeds encoded chunks of data to dav1d using the dav1d-rs bindings. In return, the bindings provide the decoded frames using a Dav1dPicture Rust structure and the dav1ddec GStreamer element basically makes buffers and caps out of this decoded picture. The dav1d-rs bindings are quite minimal, we implemented API on a per-need basis so far, so it wasn’t very surprising that… colorimetry information for decoded pictures was not exposed! Rabbit hole goes one level deeper.

So let’s add colorimetry API in dav1d-rs. When working on (Rust) bindings of a C library, if you need to expose additional API the answer is quite often in the C headers of the library. Every Dav1dPicture has a Dav1dSequenceHeader, in which we can see a few interesting fields:

typedef struct Dav1dSequenceHeader {
...
    enum Dav1dColorPrimaries pri; ///< color primaries (av1)
    enum Dav1dTransferCharacteristics trc; ///< transfer characteristics (av1)
    enum Dav1dMatrixCoefficients mtrx; ///< matrix coefficients (av1)
    enum Dav1dChromaSamplePosition chr; ///< chroma sample position (av1)
    ...
    uint8_t color_range;
    ...
...
} Dav1dSequenceHeader;

After sharing a naive branch with rust-av co-maintainers Luca Barbato and Sebastian Dröge, I came up with a couple of pull requests that eventually shipped in version 0.10.3 of dav1d-rs. I won’t deny that matching primaries, transfer, matrix and chroma-site enum values to rust-av’s Pixel enum was a bit challenging :P Anyway, with dav1d-rs fixed up, we climb back up one level of the rabbit hole :)

Now with the needed dav1d-rs API, the GStreamer dav1ddec element could be fixed. Again, matching the various enum values to their GStreamer equivalent was an interesting exercise. The merge request was merged, but to this date it’s not shipped in a stable gst-plugins-rs release yet. There’s one more complication here, ABI broke between dav1d 1.2 and 1.4 versions. The dav1d-rs 0.10.3 release expects the latter. I’m not sure how we will cope with that in terms of gst-plugins-rs release versioning…

Anyway, WebKit’s runtime environment can be adapted to ship dav1d 1.4 and a development version of the dav1ddec element, which is what was done in this pull request. The rabbit is getting out of its hole.

The WebCodec AV1 tests were finally fixed in WebKit, by this pull request. Beyond colorimetry handling a few more fixes were needed, but luckily those didn’t require any fixes outside of WebKit.

Wrapping up, if you’re still reading this post, I thank you for your patience. Working on inter-connected projects can look a bit daunting at times, but eventually the whole ecosystem benefits from cross-project collaborations like this one. Thanks to Luca and Sebastian for the help and reviews in dav1d-rs and the dav1ddec element. Thanks to my fellow Igalia colleagues for the WebKit reviews.

Retro v2

Retro, the customizable clock widget, is now available on Flathub in v2

Download on Flathub

This new release comes with:

Support for both 12h and 24h clock formats. It follows the GNOME Date & Time preference while remaining sandboxed, thanks to libportal’s new API for the settings portal.
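
Under the hood this boils down to a Read call on the settings portal; a rough C equivalent of the lookup, using plain GDBus instead of libportal:

/* Sketch: read the clock-format preference via the portal D-Bus API. */
GDBusConnection *bus = g_bus_get_sync (G_BUS_TYPE_SESSION, NULL, NULL);
GVariant *reply = g_dbus_connection_call_sync (bus,
    "org.freedesktop.portal.Desktop",
    "/org/freedesktop/portal/desktop",
    "org.freedesktop.portal.Settings",
    "Read",
    g_variant_new ("(ss)", "org.gnome.desktop.interface", "clock-format"),
    G_VARIANT_TYPE ("(v)"),
    G_DBUS_CALL_FLAGS_NONE, -1, NULL, NULL);
/* The returned variant wraps the value, "12h" or "24h". */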

Energy usage has been improved by using a more efficient method to get the time and by making use of the magic GtkWindow.suspended property to stop updating the clock when the window is not visible.
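
A sketch of that mechanism in C terms (the clock_* helpers are hypothetical):

/* Sketch: tick only while the window is actually visible. */
static void
on_suspended_changed (GObject *window, GParamSpec *pspec, gpointer clock)
{
  gboolean suspended;
  g_object_get (window, "suspended", &suspended, NULL);
  if (suspended)
    clock_stop_ticking (clock);    /* hypothetical helper */
  else
    clock_start_ticking (clock);   /* hypothetical helper */
}

/* ... */
g_signal_connect (window, "notify::suspended",
                  G_CALLBACK (on_suspended_changed), clock);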

Better support for round clocks. The new GTK renderer fixes the visual glitch on transparent corners caused by a large border radius. Retro now restores window dimensions and disables the border radius when maximized, so it looks good no matter the shape.

Controls have been moved to a floating header bar to stay out of the way and prevent interference with customizations.

There are further improvements to make, but I decided to publish early because Retro was using the GNOME 43 runtime, which is end-of-life, and I have limited time to spend on it.

Help welcome https://github.com/sonnyp/Retro/issues

April 14, 2024

Making GTK graphics offloading work

(I need to put that somewhere because people ask about it and having a little post to explain it is nice.)

What’s it about?
GTK recently introduced the ability to offload graphics rendering, but it needs rather recent everything to work well for offloading video decoding.

So, what do you need to make sure this works?

First, you of course need a video to test. On a modern desktop computer, you want a 4k 60fps video or better to have something that pushes your CPU to the limits so you know when it doesn’t work. Of course, the recommendation has to be Big Buck Bunny at the highest of qualities – be aware that the most excellent 4000×2250 @ 60fps encoding is 850MB. On my Intel TigerLake, that occasionally drops frames when I play that with software decoding, and I can definitely hear the fan turn on.
When selecting a video file, keep in mind that the format matters.

Second, you need hardware decoding. That is provided by libva and can be queried using the vainfo tool (which comes in the `libva-utils` package in Fedora). If that prints a long list of formats (it’s about 40 for me), you’re good. If it doesn’t, you’ll need to go hunt for the drivers – due to the patent madness surrounding video formats that may be more complicated than you wish. For example, on my Intel laptop on Fedora, I need the intel-media-driver package which is hidden in the nonfree RPMFusion repository.
If you look at the list from vainfo, the format names give some hints – usually VP9 and MPEG2 exist. H264 and HEVC aka H265 are the patent madness, and recent GPUs can sometimes do AV1. The Big Buck Bunny video from above is H264, so if you’re following along, make sure that works.

Now you need a working video player. I’ll be using gtk4-demo (which is in the gtk4-devel-tools package, but you already have that installed of course) and its video player example because I know it works there. A shoutout goes out to livi which was the first non-demo video player to have a release that supports graphics offloading. You need GTK 4.14 and GStreamer 1.24 for this to work. At the time of writing, this is only available in Fedora rawhide, but hopefully Fedora 40 will gain the packages soon.

If you installed new packages above, now is a good time to check if GStreamer picked up all the hardware decoders. gst-inspect-1.0 va will list all the elements with libva support. If it didn’t pick up decoders for all the formats it should have (there should be a vah264dec listed for H264 if you want to decode the video above), then the easiest way to get them is to delete GStreamer’s registry cache in ~/.cache/gstreamer-1.0.

If you want to make sure GStreamer does the right thing, you can run the video player with GST_DEBUG=GST_ELEMENT_FACTORY:4. It will print out debug messages about all the elements it is creating for playback. If that includes a line for an element from the previous list (like `vah264dec` in our example) things are working. If it picks something else (like `avdec_h264` or `openh264dec`) then they are not.

Finally you need a compositor that supports YUV formats. Most compositors do – gnome-shell does since version 45 for example – but checking can’t hurt: If wayland-info (in the wayland-utils package in Fedora) lists the NV12 format, you’re good.

And now everything works.
If you have a 2nd monitor you can marvel at what goes on behind the scenes by running the video player with GDK_DEBUG=dmabuf,offload and GTK will tell you what it does for every frame, and you can see it dynamically switching between offloading or not as you fullscreen (or not), click on the controls (or not) and so on. Or you could have used it previously to see why things didn’t work.
You can also look at top and the gputop variant of your choice, and you will see that the video player takes a bit of CPU to drive the video decoding engine and inform the compositor about new frames, and the compositor takes a bit of CPU telling the 3D engine to composite things and send them to the monitor. With the video above it’s around 10% CPU each on my laptop, and about 20% GPU usage.

And before anyone starts complaining that this is way too complicated: If you read carefully, all of this should work out of the box in the near future. This post just lists the tools to troubleshoot what went wrong while developing a fast video player.

April 07, 2024

Fragments 3.0

It has finally happened! The long-awaited major update of Fragments is now available, and it includes many exciting new features.

The most important addition is support for torrent files. It is now possible to select the files you want to download from a torrent. The files can be searched and sorted, and individual files can be opened directly from Fragments.

Further new features

    • Added torrents can now be searched
    • In addition to magnet links, *.torrent links in the clipboard are now also recognized
    • Prevent system from going to sleep when torrents are active
    • New torrents can be added via drag and drop
    • Automatic trashing of *.torrent files after adding them
    • Stop downloads when a metered network gets detected

Improvements

    • When controlling remote sessions, the local Transmission daemon no longer gets started
    • Torrents are automatically restarted if an incorrect location has been fixed
    • Torrents can now also be added via the CLI
    • The clipboard toast notification is no longer displayed multiple times
    • Reduced CPU/resource consumption through an adaptive polling interval
    • Improved accessibility of the user interface
    • Modernized user interface through the use of new Adwaita widgets
    • Updated from Transmission 3.0.5 to 4.0.5

Thanks to Maximiliano and Tobias for once again helping with this release. As usual, this release contains many other improvements, fixes and new translations, thanks to all the contributors and upstream projects.

Also a big shoutout to the Transmission project, without which Fragments would not be possible, for their fantastic 4.0 release!

The new Fragments release can be downloaded and installed from Flathub: