2024-12-15 12:29.33: New job: test ocaml-multicore/picos https://github.com/ocaml-multicore/picos.git#refs/heads/add-more-structured-run-ops (9dc05dc44b060e5562f5568220583b11fe0ae0e4) (linux-x86_64:(lint-fmt))
Base: ocaml/opam:debian-12-ocaml-4.08@sha256:b19f193fe7f326e3d37af7875868ea5e5a67c750ba4b7b2a04cd94a89cf6c031
ocamlformat version: version 0.27.0 (from opam)
To reproduce locally:
git clone --recursive "https://github.com/ocaml-multicore/picos.git" -b "add-more-structured-run-ops" && cd "picos" && git reset --hard 9dc05dc4
cat > Dockerfile <<'END-OF-DOCKERFILE'
FROM ocaml/opam:debian-12-ocaml-4.08@sha256:b19f193fe7f326e3d37af7875868ea5e5a67c750ba4b7b2a04cd94a89cf6c031
USER 1000:1000
RUN cd ~/opam-repository && (git cat-file -e 2ac5b4411dc6433623d35bbb1ad092b393d3174e || git fetch origin master) && git reset -q --hard 2ac5b4411dc6433623d35bbb1ad092b393d3174e && git log --no-decorate -n1 --oneline && opam update -u
RUN opam depext -i dune
WORKDIR /src
RUN opam depext -i ocamlformat=0.27.0
COPY --chown=1000:1000 . /src/
RUN opam exec -- dune build @fmt --ignore-promoted-rules || (echo "dune build @fmt failed"; exit 2)
END-OF-DOCKERFILE
docker build .
END-REPRO-BLOCK
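To apply the formatting changes reported by this job, rather than only reproduce the failure, one possible local workflow (a sketch, assuming an opam switch with dune and ocamlformat 0.27.0 installed as in the Dockerfile above) is to let dune promote the reformatted files in the picos checkout:
# from the cloned picos directory, with dune and ocamlformat=0.27.0 installed
opam exec -- dune build @fmt --auto-promote
# recent dune versions (such as the 3.17.0 used in this job) provide an equivalent shorter alias
opam exec -- dune fmt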
2024-12-15 12:29.33: Using cache hint "ocaml-multicore/picos-ocaml/opam:debian-12-ocaml-4.08@sha256:b19f193fe7f326e3d37af7875868ea5e5a67c750ba4b7b2a04cd94a89cf6c031-debian-12-4.08_opam-2.3-ocamlformat-2ac5b4411dc6433623d35bbb1ad092b393d3174e"
2024-12-15 12:29.33: Using OBuilder spec:
((from ocaml/opam:debian-12-ocaml-4.08@sha256:b19f193fe7f326e3d37af7875868ea5e5a67c750ba4b7b2a04cd94a89cf6c031)
(user (uid 1000) (gid 1000))
(run (cache (opam-archives (target /home/opam/.opam/download-cache)))
(network host)
(shell "cd ~/opam-repository && (git cat-file -e 2ac5b4411dc6433623d35bbb1ad092b393d3174e || git fetch origin master) && git reset -q --hard 2ac5b4411dc6433623d35bbb1ad092b393d3174e && git log --no-decorate -n1 --oneline && opam update -u"))
(run (cache (opam-archives (target /home/opam/.opam/download-cache)))
(network host)
(shell "opam depext -i dune"))
(workdir /src)
(run (cache (opam-archives (target /home/opam/.opam/download-cache)))
(network host)
(shell "opam depext -i ocamlformat=0.27.0"))
(copy (src .) (dst /src/))
(run (shell "opam exec -- dune build @fmt --ignore-promoted-rules || (echo \"dune build @fmt failed\"; exit 2)"))
)
2024-12-15 12:29.33: Waiting for resource in pool OCluster
2024-12-15 12:29.33: Waiting for worker…
2024-12-15 12:29.33: Got resource from pool OCluster
Building on x86-bm-c20.sw.ocaml.org
HEAD is now at a095239 Tweak bundle representation
HEAD is now at 9dc05dc Add more structured `Run` operations
(from ocaml/opam:debian-12-ocaml-4.08@sha256:b19f193fe7f326e3d37af7875868ea5e5a67c750ba4b7b2a04cd94a89cf6c031)
2024-12-15 12:30.13 ---> saved as "0ccd3f5b4d657f0944073853d77008496ca475f3e6ca9e2160b53453ab7d01cf"
/: (user (uid 1000) (gid 1000))
/: (run (cache (opam-archives (target /home/opam/.opam/download-cache)))
(network host)
(shell "cd ~/opam-repository && (git cat-file -e 2ac5b4411dc6433623d35bbb1ad092b393d3174e || git fetch origin master) && git reset -q --hard 2ac5b4411dc6433623d35bbb1ad092b393d3174e && git log --no-decorate -n1 --oneline && opam update -u"))
From https://github.com/ocaml/opam-repository
* branch master -> FETCH_HEAD
11bdbee611..e0298ebc98 master -> origin/master
2ac5b4411d Merge pull request #26998 from Julow/release-ocamlformat-0.27.0
<><> Updating package repositories ><><><><><><><><><><><><><><><><><><><><><><>
[default] synchronised from file:///home/opam/opam-repository
default (at file:///home/opam/opam-repository):
[INFO] opam 2.1 and 2.2 include many performance and security improvements over 2.0; please consider upgrading (https://opam.ocaml.org/doc/Install.html)
Everything as up-to-date as possible (run with --verbose to show unavailable upgrades).
However, you may "opam upgrade" these packages explicitly, which will ask permission to downgrade or uninstall the conflicting packages.
Nothing to do.
# Run eval $(opam env) to update the current shell environment
2024-12-15 12:31.37 ---> saved as "3d509a5585cf344767c6ea96680469eb8b217d41433a78fe4b6200ef96a501ed"
/: (run (cache (opam-archives (target /home/opam/.opam/download-cache)))
(network host)
(shell "opam depext -i dune"))
# Detecting depexts using vars: arch=x86_64, os=linux, os-distribution=debian, os-family=debian
# No extra OS packages requirements found.
# All required OS packages found.
# Now letting opam install the packages
The following actions will be performed:
- install dune 3.17.0
<><> Gathering sources ><><><><><><><><><><><><><><><><><><><><><><><><><><><><>
[dune.3.17.0] found in cache
<><> Processing actions <><><><><><><><><><><><><><><><><><><><><><><><><><><><>
-> installed dune.3.17.0
Done.
# Run eval $(opam env) to update the current shell environment
2024-12-15 12:32.25 ---> saved as "34688717966235ebb15f3e60edc80a5d3b22b44cff0770feb510177f53294971"
/: (workdir /src)
/src: (run (cache (opam-archives (target /home/opam/.opam/download-cache)))
(network host)
(shell "opam depext -i ocamlformat=0.27.0"))
# Detecting depexts using vars: arch=x86_64, os=linux, os-distribution=debian, os-family=debian
# No extra OS packages requirements found.
# All required OS packages found.
# Now letting opam install the packages
The following actions will be performed:
- install sexplib0 v0.14.0 [required by base]
- install menhirLib 20240715 [required by ocamlformat-lib]
- install menhirCST 20240715 [required by menhir]
- install menhirSdk 20240715 [required by ocamlformat-lib]
- install ocamlbuild 0.15.0 [required by fpath, astring, uuseg]
- install either 1.0.0 [required by ocamlformat-lib]
- install ocamlfind 1.9.6 [required by ocp-indent, astring, fpath, uuseg]
- install cmdliner 1.3.0 [required by ocamlformat]
- install seq base [required by re]
- install csexp 1.5.2 [required by ocamlformat]
- install ocaml-version 3.7.1 [required by ocamlformat-lib]
- install dune-build-info 3.17.0 [required by ocamlformat-lib]
- install camlp-streams 5.0.1 [required by ocamlformat-lib]
- install fix 20230505 [required by ocamlformat-lib]
- install menhir 20240715 [required by ocamlformat-lib]
- install topkg 1.0.7 [required by fpath, astring, uuseg]
- install base-bytes base [required by ocp-indent]
- install re 1.11.0 [required by ocamlformat]
- install dune-configurator 3.17.0 [required by base]
- install uutf 1.0.3 [required by ocamlformat-lib]
- install astring 0.8.5 [required by ocamlformat-lib]
- install ocp-indent 1.8.1 [required by ocamlformat-lib]
- install base v0.14.3 [required by ocamlformat-lib]
- install uucp 15.0.0 [required by uuseg]
- install fpath 0.7.3 [required by ocamlformat-lib]
- install stdio v0.14.0 [required by ocamlformat-lib]
- install uuseg 15.0.0 [required by ocamlformat-lib]
- install ocamlformat-lib 0.27.0 [required by ocamlformat]
- install ocamlformat 0.27.0
===== 29 to install =====
<><> Gathering sources ><><><><><><><><><><><><><><><><><><><><><><><><><><><><>
[astring.0.8.5] found in cache
[base.v0.14.3] found in cache
[camlp-streams.5.0.1] found in cache
[cmdliner.1.3.0] found in cache
[csexp.1.5.2] found in cache
[dune-build-info.3.17.0] found in cache
[dune-configurator.3.17.0] found in cache
[either.1.0.0] found in cache
[fix.20230505] found in cache
[fpath.0.7.3] found in cache
[menhir.20240715] found in cache
[menhirCST.20240715] found in cache
[menhirLib.20240715] found in cache
[menhirSdk.20240715] found in cache
[ocaml-version.3.7.1] found in cache
[ocamlbuild.0.15.0] found in cache
[ocamlfind.1.9.6] found in cache
[ocamlformat.0.27.0] found in cache
[ocamlformat-lib.0.27.0] found in cache
[ocp-indent.1.8.1] found in cache
[re.1.11.0] found in cache
[sexplib0.v0.14.0] found in cache
[stdio.v0.14.0] found in cache
[topkg.1.0.7] found in cache
[uucp.15.0.0] found in cache
[uuseg.15.0.0] found in cache
[uutf.1.0.3] found in cache
<><> Processing actions <><><><><><><><><><><><><><><><><><><><><><><><><><><><>
-> installed seq.base
-> installed camlp-streams.5.0.1
-> installed csexp.1.5.2
-> installed cmdliner.1.3.0
-> installed either.1.0.0
-> installed fix.20230505
-> installed menhirCST.20240715
-> installed menhirLib.20240715
-> installed menhirSdk.20240715
-> installed ocaml-version.3.7.1
-> installed re.1.11.0
-> installed sexplib0.v0.14.0
-> installed dune-build-info.3.17.0
-> installed dune-configurator.3.17.0
-> installed ocamlfind.1.9.6
-> installed base-bytes.base
-> installed ocamlbuild.0.15.0
-> installed ocp-indent.1.8.1
-> installed base.v0.14.3
-> installed stdio.v0.14.0
-> installed topkg.1.0.7
-> installed uutf.1.0.3
-> installed astring.0.8.5
-> installed fpath.0.7.3
-> installed menhir.20240715
-> installed uucp.15.0.0
-> installed uuseg.15.0.0
-> installed ocamlformat-lib.0.27.0
-> installed ocamlformat.0.27.0
Done.
<><> ocp-indent.1.8.1 installed successfully ><><><><><><><><><><><><><><><><><>
=> This package requires additional configuration for use in editors. Install package 'user-setup', or manually:
* for Emacs, add these lines to ~/.emacs:
(add-to-list 'load-path "/home/opam/.opam/4.08/share/emacs/site-lisp")
(require 'ocp-indent)
* for Vim, add this line to ~/.vimrc:
set rtp^="/home/opam/.opam/4.08/share/ocp-indent/vim"
# Run eval $(opam env) to update the current shell environment
2024-12-15 12:34.04 ---> saved as "bac7f34598113e39bac8b0103f4ffb35d747211ead105a9651a51c2134578cfd"
/src: (copy (src .) (dst /src/))
2024-12-15 12:34.04 ---> saved as "f7b03f7b63cfc90d4012c81ec61710f9304d2c3e8525d9bca819c8591efa6754"
/src: (run (shell "opam exec -- dune build @fmt --ignore-promoted-rules || (echo \"dune build @fmt failed\"; exit 2)"))
File "lib/picos.domain/picos_domain.mli", line 1, characters 0-0:
diff --git a/_build/default/lib/picos.domain/picos_domain.mli b/_build/default/lib/picos.domain/.formatted/picos_domain.mli
index 85a3f11..ff0763b 100644
--- a/_build/default/lib/picos.domain/picos_domain.mli
+++ b/_build/default/lib/picos.domain/.formatted/picos_domain.mli
@@ -6,12 +6,12 @@ val at_exit : (unit -> unit) -> unit
(** [at_exit action] registers [action] to be called when the current domain
exits.
- On OCaml 5 this calls {!Domain.at_exit}. On OCaml 4 this calls
+ On OCaml 5 this calls {!Domain.at_exit}. On OCaml 4 this calls
{!Stdlib.at_exit}. *)
val recommended_domain_count : unit -> int
(** [recommended_domain_count ()] returns [1] on OCaml 4 and calls
- {!Domain.recommended_domain_count} on OCaml 5. *)
+ {!Domain.recommended_domain_count} on OCaml 5. *)
val is_main_domain : unit -> bool
(** [is_main_domain ()] returns [true] on OCaml 4 and calls
@@ -23,14 +23,14 @@ module DLS : sig
ℹ️ On OCaml 4 there is always only a single domain. *)
type 'a key
- (** Represents a key for storing values of type ['a] in storage associated with
- domains. *)
+ (** Represents a key for storing values of type ['a] in storage associated
+ with domains. *)
val new_key : (unit -> 'a) -> 'a key
(** [new_key compute] allocates a new key for associating values in storage
- associated with domains. The initial value for each domain is [compute]d
+ associated with domains. The initial value for each domain is [compute]d
by calling the given function if the [key] is {{!get} read} before it has
- been {{!set} written}. The [compute] function might be called multiple
+ been {{!set} written}. The [compute] function might be called multiple
times per domain, but only one result will be used. *)
val get : 'a key -> 'a
File "lib/picos_aux.mpmcq/picos_aux_mpmcq.mli", line 1, characters 0-0:
diff --git a/_build/default/lib/picos_aux.mpmcq/picos_aux_mpmcq.mli b/_build/default/lib/picos_aux.mpmcq/.formatted/picos_aux_mpmcq.mli
index f8f280a..d069daa 100644
--- a/_build/default/lib/picos_aux.mpmcq/picos_aux_mpmcq.mli
+++ b/_build/default/lib/picos_aux.mpmcq/.formatted/picos_aux_mpmcq.mli
@@ -1,10 +1,10 @@
(** Lock-free multi-producer, multi-consumer queue.
🏎️ This data structure is optimized for use as a building block of the ready
- queue of a (mostly) fair (i.e. mostly FIFO) multi-threaded scheduler. For
+ queue of a (mostly) fair (i.e. mostly FIFO) multi-threaded scheduler. For
example, one could use a queue per thread, to reduce contention, and have
threads attempt to pop fibers from the queues of other threads when their
- local queues are empty. It is also possible to use only a single shared
+ local queues are empty. It is also possible to use only a single shared
queue, but that will result in very high contention as this queue is not
relaxed. *)
@@ -32,8 +32,7 @@ val pop_exn : 'a t -> 'a
@raise Empty in case the queue was empty. *)
val length : 'a t -> int
-(** [length queue] returns the length or the number of values
- in the [queue]. *)
+(** [length queue] returns the length or the number of values in the [queue]. *)
val is_empty : 'a t -> bool
(** [is_empty queue] is equivalent to {{!length} [length queue = 0]}. *)
File "lib/picos_aux.rc/intf.ml", line 1, characters 0-0:
diff --git a/_build/default/lib/picos_aux.rc/intf.ml b/_build/default/lib/picos_aux.rc/.formatted/intf.ml
index d4097f8..403ad33 100644
--- a/_build/default/lib/picos_aux.rc/intf.ml
+++ b/_build/default/lib/picos_aux.rc/.formatted/intf.ml
@@ -23,10 +23,10 @@ module type S = sig
ℹ️ This is intended for cases where a resource needs to be safely shared
between multiple independent threads of control whether they are fibers,
- threads, or domains. In that use case you typically need to {{!incr}
- increment} the reference count before handing the resource from one
- independent thread of control to another and the other independent thread
- of control then becomes responsible for {{!decr} decrementing} the
+ threads, or domains. In that use case you typically need to
+ {{!incr} increment} the reference count before handing the resource from
+ one independent thread of control to another and the other independent
+ thread of control then becomes responsible for {{!decr} decrementing} the
reference count after being done with the resource. *)
module Resource : Resource
@@ -40,45 +40,48 @@ module type S = sig
count of [1] to the table for the resource and returns the resource as a
value of the {{!t} opaque alias type}.
- The optional [dispose] argument defaults to [true]. When explicitly
+ The optional [dispose] argument defaults to [true]. When explicitly
specified as [~dispose:false], the resource will not be
{{!module-Resource.dispose} disposed} when the reference count becomes
- zero. This is intended for special cases where a resource is e.g. managed
+ zero. This is intended for special cases where a resource is e.g. managed
outside of the control of the user program. *)
val unsafe_get : t -> Resource.t
(** [unsafe_get opaque_resource] casts the opaque alias type back to the
resource type.
- ⚠️ This should only be called and the resource used either after {{!create}
- creating} the reference counting entry or after {{!incr} incrementing} the
- reference count and before the matching {{!decr} decrement}. *)
+ ⚠️ This should only be called and the resource used either after
+ {{!create} creating} the reference counting entry or after
+ {{!incr} incrementing} the reference count and before the matching
+ {{!decr} decrement}. *)
val incr : t -> unit
(** [incr opaque_resource] tries to find the entry for the resource and
increment the reference count.
- @raise Invalid_argument in case no entry is found for the resource or the
- reference count was zero or the resource was marked as closed previously
- by a {{!decr} decrement} operation. *)
+ @raise Invalid_argument
+ in case no entry is found for the resource or the reference count was
+ zero or the resource was marked as closed previously by a
+ {{!decr} decrement} operation. *)
val decr : ?close:bool -> t -> unit
(** [decr opaque_resource] tries to find the entry for the resource and
- decrement the reference count. If the reference count becomes zero, the
+ decrement the reference count. If the reference count becomes zero, the
entry for the resource will be removed and the resource will be
{{!module-Resource.dispose} disposed}, unless [~dispose:false] was
specified for {!create}.
- The optional [close] argument defaults to [false]. When explicitly
+ The optional [close] argument defaults to [false]. When explicitly
specified as [~close:true] the resource will be marked as closed and
attempts to {{!incr} increment} the reference will fail.
- @raise Invalid_argument in case no entry is found for the resource or
- the reference count was zero. *)
+ @raise Invalid_argument
+ in case no entry is found for the resource or the reference count was
+ zero. *)
type info = {
resource : Resource.t; (** The resource. *)
- count : int; (** Reference count. This may be [0]. *)
+ count : int; (** Reference count. This may be [0]. *)
closed : bool; (** Whether the resource has been closed, see {!decr}. *)
dispose : bool; (** Whether to dispose the resource, see {!create}. *)
bt : Printexc.raw_backtrace; (** Backtrace captured at {!create}. *)
File "lib/picos_lwt/intf.ml", line 1, characters 0-0:
diff --git a/_build/default/lib/picos_lwt/intf.ml b/_build/default/lib/picos_lwt/.formatted/intf.ml
index 35e0c65..5bca1a7 100644
--- a/_build/default/lib/picos_lwt/intf.ml
+++ b/_build/default/lib/picos_lwt/.formatted/intf.ml
@@ -18,9 +18,9 @@ module type System = sig
(** [signal trigger] resolves the promise that {{!await} [await trigger]}
returns.
- ℹ️ It must be safe to call [signal] from any thread or domain. As a
- special case this need not be thread-safe in case the system only allows a
- single thread. *)
+ ℹ️ It must be safe to call [signal] from any thread or domain. As a special
+ case this need not be thread-safe in case the system only allows a single
+ thread. *)
val await : trigger -> unit Lwt.t
(** [await trigger] returns a promise thet resolves, on the main thread, after
File "lib/picos_lwt.unix/picos_lwt_unix.mli", line 1, characters 0-0:
diff --git a/_build/default/lib/picos_lwt.unix/picos_lwt_unix.mli b/_build/default/lib/picos_lwt.unix/.formatted/picos_lwt_unix.mli
index 71a9817..4442236 100644
--- a/_build/default/lib/picos_lwt.unix/picos_lwt_unix.mli
+++ b/_build/default/lib/picos_lwt.unix/.formatted/picos_lwt_unix.mli
@@ -6,7 +6,7 @@ open Picos
val run_fiber : Fiber.t -> (Fiber.t -> unit) -> unit Lwt.t
(** [run_fiber fiber main] runs the [main] program as the specified [fiber] as a
promise with {!Lwt} as the scheduler using a {!Lwt_unix} based
- {{!Picos_lwt.System} [System]} module. In other words, the [main] program
+ {{!Picos_lwt.System} [System]} module. In other words, the [main] program
will be run as a {!Lwt} promise or fiber.
⚠️ This may only be called on the main thread on which {!Lwt} runs. *)
File "lib/picos_lwt/picos_lwt.mli", line 1, characters 0-0:
diff --git a/_build/default/lib/picos_lwt/picos_lwt.mli b/_build/default/lib/picos_lwt/.formatted/picos_lwt.mli
index d6a0d73..e63dc77 100644
--- a/_build/default/lib/picos_lwt/picos_lwt.mli
+++ b/_build/default/lib/picos_lwt/.formatted/picos_lwt.mli
@@ -1,10 +1,10 @@
(** Direct style {!Picos} compatible interface to {!Lwt} for OCaml 5.
This basically gives you an alternative direct style interface to
- programming with {!Lwt}. All the scheduling decisions will be made by
+ programming with {!Lwt}. All the scheduling decisions will be made by
{!Lwt}.
- ℹ️ This is a {{!System} system} independent interface to {!Lwt}. See
+ ℹ️ This is a {{!System} system} independent interface to {!Lwt}. See
{!Picos_lwt_unix} for a {!Unix} specific interface. *)
open Picos
@@ -20,11 +20,11 @@ include module type of Intf
val run_fiber : (module System) -> Fiber.t -> (Fiber.t -> unit) -> unit Lwt.t
(** [run_fiber (module System) fiber main] runs the [main] program as the
specified [fiber] as a promise with {!Lwt} as the scheduler using the given
- {!System} module. In other words, the [main] program will be run as a
- {!Lwt} promise or fiber.
+ {!System} module. In other words, the [main] program will be run as a {!Lwt}
+ promise or fiber.
ℹ️ Inside [main] you can use anything implemented in Picos for concurrent
- programming. In particular, you only need to call [run] with a {!System}
+ programming. In particular, you only need to call [run] with a {!System}
module implementation at the entry point of your application.
⚠️ This may only be called on the main thread on which {!Lwt} runs. *)
File "lib/picos/intf.ocaml5.ml", line 1, characters 0-0:
diff --git a/_build/default/lib/picos/intf.ocaml5.ml b/_build/default/lib/picos/.formatted/intf.ocaml5.ml
index 4900e68..75a1fcb 100644
--- a/_build/default/lib/picos/intf.ocaml5.ml
+++ b/_build/default/lib/picos/.formatted/intf.ocaml5.ml
@@ -10,12 +10,12 @@ module type Trigger = sig
Typically the scheduler calls {{!Fiber.try_suspend} [try_suspend]}, which
in turn calls {!on_signal}, to attach a scheduler specific [resume] action
- to the [trigger]. The scheduler must guarantee that the fiber will be
+ to the [trigger]. The scheduler must guarantee that the fiber will be
resumed after {!signal} has been called on the [trigger].
Whether being resumed due to cancelation or not, the trigger must be
- either {{!signal} signaled} outside of the effect handler, or {{!dispose}
- disposed} by the effect handler, before resuming the fiber.
+ either {{!signal} signaled} outside of the effect handler, or
+ {{!dispose} disposed} by the effect handler, before resuming the fiber.
In case the fiber permits propagation of cancelation and the computation
associated with the fiber has been canceled the scheduler is free to
@@ -37,9 +37,9 @@ module type Computation = sig
(** Schedulers must handle the {!Cancel_after} effect to implement the
behavior of {!cancel_after}.
- The scheduler should typically {{!try_attach} attach} a {{!Trigger}
- trigger} to the computation passed with the effect and arrange the
- timeout to be canceled upon signal to avoid space leaks.
+ The scheduler should typically {{!try_attach} attach} a
+ {{!Trigger} trigger} to the computation passed with the effect and arrange
+ the timeout to be canceled upon signal to avoid space leaks.
The scheduler should measure time using a monotonic clock.
@@ -80,15 +80,15 @@ module type Fiber = sig
('b, 'r) Effect.Shallow.handler ->
'r
(** [resume_with fiber k h] is equivalent to
- {{!Fiber.canceled} [Effect.Shallow.continue_with k (Fiber.canceled t) h]}. *)
+ {{!Fiber.canceled} [Effect.Shallow.continue_with k (Fiber.canceled t) h]}.
+ *)
val continue : t -> ('v, 'r) Effect.Deep.continuation -> 'v -> 'r
(** [continue fiber k v] is equivalent to:
{[
match Fiber.canceled fiber with
| None -> Effect.Deep.continue k v
- | Some (exn, bt) ->
- Effect.Deep.discontinue_with_backtrace k exn bt
+ | Some (exn, bt) -> Effect.Deep.discontinue_with_backtrace k exn bt
]} *)
val continue_with :
@@ -101,20 +101,19 @@ module type Fiber = sig
{[
match Fiber.canceled fiber with
| None -> Effect.Shallow.continue_with k v h
- | Some (exn, bt) ->
- Effect.Shallow.discontinue_with_backtrace k exn bt h
+ | Some (exn, bt) -> Effect.Shallow.discontinue_with_backtrace k exn bt h
]} *)
(** Schedulers must handle the {!Current} effect to implement the behavior of
{!current}.
⚠️ The scheduler must eventually resume the fiber without propagating
- cancelation. This is necessary to allow a fiber to control the
- propagation of cancelation through the {{!t} fiber}.
+ cancelation. This is necessary to allow a fiber to control the propagation
+ of cancelation through the {{!t} fiber}.
- The scheduler is free to choose which ready fiber to resume next.
- However, in typical use cases of {!current} it makes sense to give
- priority to the fiber performing {!Current}, but this is not required. *)
+ The scheduler is free to choose which ready fiber to resume next. However,
+ in typical use cases of {!current} it makes sense to give priority to the
+ fiber performing {!Current}, but this is not required. *)
type _ Effect.t += private Current : t Effect.t
(** Schedulers must handle the {!Yield} effect to implement the behavior of
@@ -125,7 +124,7 @@ module type Fiber = sig
discontinue the fiber immediately.
The scheduler should give priority to running other ready fibers before
- resuming the fiber performing {!Yield}. A scheduler that always
+ resuming the fiber performing {!Yield}. A scheduler that always
immediately resumes the fiber performing {!Yield} may prevent an otherwise
valid program from making progress. *)
type _ Effect.t += private Yield : unit Effect.t
@@ -139,7 +138,7 @@ module type Fiber = sig
⚠️ In case the fiber performing {!Spawn} permits propagation of cancelation
and the computation associated with the fiber has been canceled before it
performed {!Spawn}, the scheduler should discontinue the current fiber and
- not spawn a new fiber. If cancelation happens during the handling of
+ not spawn a new fiber. If cancelation happens during the handling of
{!Spawn} the scheduler is free to either spawn a new fiber, in which case
the current fiber must be continued normally, or not spawn a fiber, in
which case the current fiber must be discontinued, i.e. {!spawn} raises an
@@ -151,9 +150,9 @@ module type Fiber = sig
called.
In other words, spawn should (effectively) check cancelation at least once
- and be all or nothing. Furthermore, in case a newly spawned fiber is
+ and be all or nothing. Furthermore, in case a newly spawned fiber is
canceled before its [main] is called, the scheduler must still call the
- [main]. This allows a program to ensure, i.e. keep track of, that all
+ [main]. This allows a program to ensure, i.e. keep track of, that all
fibers it spawns are terminated properly and any resources transmitted to
spawned fibers will be disposed properly. *)
type _ Effect.t +=
File "lib/picos.thread/picos_thread.mli", line 1, characters 0-0:
diff --git a/_build/default/lib/picos.thread/picos_thread.mli b/_build/default/lib/picos.thread/.formatted/picos_thread.mli
index 234c307..2ad0230 100644
--- a/_build/default/lib/picos.thread/picos_thread.mli
+++ b/_build/default/lib/picos.thread/.formatted/picos_thread.mli
@@ -8,7 +8,7 @@ module TLS : sig
(** Thread-local storage.
Note that here "thread" refers to system level threads rather than fibers
- or domains. In case a system level thread implementation, i.e. the
+ or domains. In case a system level thread implementation, i.e. the
[threads.posix] library, is not available, this will use
{!Picos_domain.DLS}. *)
@@ -30,7 +30,7 @@ module TLS : sig
current thread or raises {!Not_set} in case no value has been {!set} for
the key.
- ⚠️ The {!Not_set} exception is raised with no backtrace. Always catch the
+ ⚠️ The {!Not_set} exception is raised with no backtrace. Always catch the
exception unless it is known that a value has been set. *)
val set : 'a t -> 'a -> unit
File "lib/picos_lwt/picos_lwt.ml", line 1, characters 0-0:
diff --git a/_build/default/lib/picos_lwt/picos_lwt.ml b/_build/default/lib/picos_lwt/.formatted/picos_lwt.ml
index 3846f3f..97b9822 100644
--- a/_build/default/lib/picos_lwt/picos_lwt.ml
+++ b/_build/default/lib/picos_lwt/.formatted/picos_lwt.ml
@@ -35,8 +35,7 @@ let await promise =
| Return value -> value
| Fail exn -> raise exn
-let[@alert "-handler"] rec go :
- type a r.
+let[@alert "-handler"] rec go : type a r.
Fiber.t ->
(module System) ->
(a, r) Effect.Shallow.continuation ->
File "lib/picos_std.event/picos_std_event.mli", line 1, characters 0-0:
diff --git a/_build/default/lib/picos_std.event/picos_std_event.mli b/_build/default/lib/picos_std.event/.formatted/picos_std_event.mli
index c54631c..2bd2cfa 100644
--- a/_build/default/lib/picos_std.event/picos_std_event.mli
+++ b/_build/default/lib/picos_std.event/.formatted/picos_std_event.mli
@@ -18,8 +18,8 @@ module Event : sig
type 'a event = 'a t
(** An alias for the {!Event.t} type to match the
- {{:https://ocaml.org/manual/5.2/api/Event.html} [Event]} module
- signature. *)
+ {{:https://ocaml.org/manual/5.2/api/Event.html} [Event]} module signature.
+ *)
val always : 'a -> 'a t
(** [always value] returns an event that can always be committed to resulting
@@ -33,8 +33,8 @@ module Event : sig
val wrap : 'b t -> ('b -> 'a) -> 'a t
(** [wrap event fn] returns an event that acts as the given [event] and then
- applies the given function to the value in case the event is committed
- to. *)
+ applies the given function to the value in case the event is committed to.
+ *)
val map : ('b -> 'a) -> 'b t -> 'a t
(** [map fn event] is equivalent to {{!wrap} [wrap event fn]}. *)
@@ -44,10 +44,10 @@ module Event : sig
the [thunk], and then behaves like the resulting event.
⚠️ Raising an exception from a [guard thunk] will result in raising that
- exception out of the {!sync}. This may result in dropping the result of
- an event that committed just after the exception was raised. This means
- that you should treat an unexpected exception raised from {!sync} as a
- fatal error. *)
+ exception out of the {!sync}. This may result in dropping the result of an
+ event that committed just after the exception was raised. This means that
+ you should treat an unexpected exception raised from {!sync} as a fatal
+ error. *)
(** {2 Consuming events} *)
@@ -56,16 +56,15 @@ module Event : sig
Synchronizing on an event executes in three phases:
- {ol
- {- In the first phase offers or requests are made to communicate.}
- {- One of the offers or requests is committed to and all the other
- offers and requests are canceled.}
- {- A final result is computed from the value produced by the event.}}
+ + In the first phase offers or requests are made to communicate.
+ + One of the offers or requests is committed to and all the other offers
+ and requests are canceled.
+ + A final result is computed from the value produced by the event.
⚠️ [sync event] does not wait for the canceled concurrent requests to
- terminate. This means that you should arrange for guaranteed cleanup
- through other means such as the use of {{!Picos_std_structured} structured
- concurrency}. *)
+ terminate. This means that you should arrange for guaranteed cleanup
+ through other means such as the use of
+ {{!Picos_std_structured} structured concurrency}. *)
val select : 'a t list -> 'a
(** [select events] is equivalent to {{!sync} [sync (choose events)]}. *)
@@ -73,8 +72,8 @@ module Event : sig
(** {2 Primitive events}
ℹ️ The {{!Picos.Computation} [Computation]} concept of {!Picos} can be seen
- as a basic single-shot atomic event. This module builds on that concept
- to provide a composable API to concurrent services exposed through
+ as a basic single-shot atomic event. This module builds on that concept to
+ provide a composable API to concurrent services exposed through
computations. *)
open Picos
@@ -87,16 +86,16 @@ module Event : sig
{{!Picos.Computation} computation}.
ℹ️ The computation passed to a request may be completed by some other event
- at any point. All primitive requests should be implemented carefully to
- take that into account. If the computation is completed by some other
+ at any point. All primitive requests should be implemented carefully to
+ take that into account. If the computation is completed by some other
event, then the request should be considered as canceled, take no effect,
and not leak any resources.
⚠️ Raising an exception from a [request] function will result in raising
- that exception out of the {!sync}. This may result in dropping the result
- of an event that committed just after the exception was raised. This
- means that you should treat an unexpected exception raised from {!sync} as
- a fatal error. In addition, you should arrange for concurrent services to
+ that exception out of the {!sync}. This may result in dropping the result
+ of an event that committed just after the exception was raised. This means
+ that you should treat an unexpected exception raised from {!sync} as a
+ fatal error. In addition, you should arrange for concurrent services to
report unexpected errors independently of the computation being passed to
the service. *)
@@ -108,6 +107,6 @@ module Event : sig
(** [from_computation source] creates an {{!Event} event} that can be
committed to once the given [source] computation has completed.
- ℹ️ Committing to some other event does not cancel the [source]
- computation. *)
+ ℹ️ Committing to some other event does not cancel the [source] computation.
+ *)
end
File "lib/picos_std.event/event.ml", line 1, characters 0-0:
diff --git a/_build/default/lib/picos_std.event/event.ml b/_build/default/lib/picos_std.event/.formatted/event.ml
index b77cd01..952bc6a 100644
--- a/_build/default/lib/picos_std.event/event.ml
+++ b/_build/default/lib/picos_std.event/.formatted/event.ml
@@ -12,8 +12,8 @@ type 'a t =
type ('a, 'r) id = Yes : ('a, 'a) id | No : ('a, 'r) id
-let rec request_1_as :
- type a r. (_ -> r) Computation.t -> (a -> r) -> (a, r) id -> a t -> _ =
+let rec request_1_as : type a r.
+ (_ -> r) Computation.t -> (a -> r) -> (a, r) id -> a t -> _ =
fun target to_result id -> function
| Request { request } -> request target to_result
| Choose ts -> request_n_as target to_result id ts
@@ -23,8 +23,8 @@ let rec request_1_as :
in
request_1_as target to_result No event
-and request_n_as :
- type a r. (_ -> r) Computation.t -> (a -> r) -> (a, r) id -> a t list -> _ =
+and request_n_as : type a r.
+ (_ -> r) Computation.t -> (a -> r) -> (a, r) id -> a t list -> _ =
fun target to_result id -> function
| [] -> ()
| t :: ts ->
File "lib/picos_mux.fifo/picos_mux_fifo.mli", line 1, characters 0-0:
diff --git a/_build/default/lib/picos_mux.fifo/picos_mux_fifo.mli b/_build/default/lib/picos_mux.fifo/.formatted/picos_mux_fifo.mli
index 6a283d4..68e5d03 100644
--- a/_build/default/lib/picos_mux.fifo/picos_mux_fifo.mli
+++ b/_build/default/lib/picos_mux.fifo/.formatted/picos_mux_fifo.mli
@@ -2,7 +2,7 @@
5.
This scheduler uses a queue specifically optimized for a single-threaded
- scheduler to implement a basic FIFO scheduler. This scheduler also gives
+ scheduler to implement a basic FIFO scheduler. This scheduler also gives
priority to fibers woken up due to being canceled.
🐌 Due to FIFO scheduling this scheduler performs poorly on highly parallel
@@ -14,7 +14,7 @@
ℹ️ This scheduler implementation is mostly meant as an example and for use in
testing libraries implemented in {!Picos}.
- ⚠️ This scheduler uses {!Picos_io_select} internally. If running multiple
+ ⚠️ This scheduler uses {!Picos_io_select} internally. If running multiple
threads that each run this scheduler, {!Picos_io_select.configure} must be
called by the main thread before creating other threads. *)
File "lib/picos_mux.multififo/picos_mux_multififo.mli", line 1, characters 0-0:
diff --git a/_build/default/lib/picos_mux.multififo/picos_mux_multififo.mli b/_build/default/lib/picos_mux.multififo/.formatted/picos_mux_multififo.mli
index 465d3fb..c82b8f9 100644
--- a/_build/default/lib/picos_mux.multififo/picos_mux_multififo.mli
+++ b/_build/default/lib/picos_mux.multififo/.formatted/picos_mux_multififo.mli
@@ -4,7 +4,7 @@
This scheduler uses a queue per thread to implement a mostly FIFO scheduler.
If a thread runs out of fibers to run, it will try to take a fiber from the
queues of other threads, which means that fibers can move from one thread to
- another. This scheduler also gives priority to fibers woken up due to being
+ another. This scheduler also gives priority to fibers woken up due to being
canceled.
🐌 Due to mostly FIFO scheduling this scheduler performs poorly on highly
@@ -15,7 +15,7 @@
ℹ️ This scheduler implementation is mostly meant as an example and for use in
testing libraries implemented in {!Picos}.
- ⚠️ This scheduler uses {!Picos_io_select} internally. If running multiple
+ ⚠️ This scheduler uses {!Picos_io_select} internally. If running multiple
threads that each run this scheduler, {!Picos_io_select.configure} must be
called by the main thread before creating other threads. *)
@@ -25,7 +25,7 @@ type t
(** Represents a shared context for fifo runners. *)
val context : ?quota:int -> ?fatal_exn_handler:(exn -> unit) -> unit -> t
-(** [context ()] creates a new context for randomized runners. The context
+(** [context ()] creates a new context for randomized runners. The context
should be consumed by a call of {{!run} [run ~context ...]}.
The optional [quota] argument defaults to [Int.max_int] and determines the
@@ -34,13 +34,13 @@ val context : ?quota:int -> ?fatal_exn_handler:(exn -> unit) -> unit -> t
val runner_on_this_thread : t -> unit
(** [runner_on_this_thread context] starts a runner on the current thread to run
- fibers on the context. The runner returns when {{!run} [run ~context ...]}
+ fibers on the context. The runner returns when {{!run} [run ~context ...]}
returns. *)
val run_fiber : ?context:t -> Fiber.t -> (Fiber.t -> unit) -> unit
(** [run_fiber fiber main] runs the [main] program as the specified [fiber] and
- returns after [main] and all of the fibers spawned by [main] have
- returned. *)
+ returns after [main] and all of the fibers spawned by [main] have returned.
+*)
val run : ?context:t -> ?forbid:bool -> (unit -> 'a) -> 'a
(** [run main] is equivalent to calling {!run_fiber} with a freshly created
File "lib/picos_std.awaitable/picos_std_awaitable.mli", line 1, characters 0-0:
diff --git a/_build/default/lib/picos_std.awaitable/picos_std_awaitable.mli b/_build/default/lib/picos_std.awaitable/.formatted/picos_std_awaitable.mli
index b723db6..7fc68da 100644
--- a/_build/default/lib/picos_std.awaitable/picos_std_awaitable.mli
+++ b/_build/default/lib/picos_std.awaitable/.formatted/picos_std_awaitable.mli
@@ -9,8 +9,8 @@ module Awaitable : sig
(** An awaitable atomic location.
This module provides a superset of the Stdlib {!Atomic} API with more or
- less identical performance. The main difference is that a non-padded
- awaitable location takes an extra word of memory. Additionally a
+ less identical performance. The main difference is that a non-padded
+ awaitable location takes an extra word of memory. Additionally a
{{:https://en.wikipedia.org/wiki/Futex} futex}-like API provides the
ability to {!await} until an awaitable location is explicitly {!signal}ed
to potentially have a different value.
@@ -28,7 +28,8 @@ module Awaitable : sig
[initial] value. *)
val make_contended : 'a -> 'a t
- (** [make_contended initial] is equivalent to {{!make} [make ~padded:true initial]}. *)
+ (** [make_contended initial] is equivalent to
+ {{!make} [make ~padded:true initial]}. *)
val get : 'a t -> 'a
(** [get awaitable] is essentially equivalent to [Atomic.get awaitable]. *)
@@ -38,20 +39,24 @@ module Awaitable : sig
[Atomic.compare_and_set awaitable before after]. *)
val exchange : 'a t -> 'a -> 'a
- (** [exchange awaitable after] is essentially equivalent to [Atomic.exchange awaitable after]. *)
+ (** [exchange awaitable after] is essentially equivalent to
+ [Atomic.exchange awaitable after]. *)
val set : 'a t -> 'a -> unit
- (** [set awaitable value] is equivalent to {{!exchange} [exchange awaitable value |> ignore]}. *)
+ (** [set awaitable value] is equivalent to
+ {{!exchange} [exchange awaitable value |> ignore]}. *)
val fetch_and_add : int t -> int -> int
(** [fetch_and_add awaitable delta] is essentially equivalent to
[Atomic.fetch_and_add awaitable delta]. *)
val incr : int t -> unit
- (** [incr awaitable] is equivalent to {{!fetch_and_add} [fetch_and_add awaitable (+1) |> ignore]}. *)
+ (** [incr awaitable] is equivalent to
+ {{!fetch_and_add} [fetch_and_add awaitable (+1) |> ignore]}. *)
val decr : int t -> unit
- (** [incr awaitable] is equivalent to {{!fetch_and_add} [fetch_and_add awaitable (-1) |> ignore]}. *)
+ (** [incr awaitable] is equivalent to
+ {{!fetch_and_add} [fetch_and_add awaitable (-1) |> ignore]}. *)
(** {1 Futex API} *)
@@ -61,9 +66,9 @@ module Awaitable : sig
🐌 Generally speaking one should avoid calling [signal] too frequently,
because the queue of awaiters is stored separately from the awaitable
- location and it takes a bit of effort to locate it. For example, calling
+ location and it takes a bit of effort to locate it. For example, calling
[signal] every time a value is added to an empty data structure might not
- be optimal. In many cases it is faster to explicitly mark the potential
+ be optimal. In many cases it is faster to explicitly mark the potential
presence of awaiters in the data structure and avoid calling [signal] when
it is definitely known that there are no awaiters. *)
@@ -71,7 +76,7 @@ module Awaitable : sig
(** [broadcast awaitable] tries to wake up all fibers {!await}ing on the
awaitable location.
- 🐌 The same advice as with {!signal} applies to [broadcast]. In addition,
+ 🐌 The same advice as with {!signal} applies to [broadcast]. In addition,
it is typically a good idea to avoid potentially waking up large numbers
of fibers as it can easily lead to the
{{:https://en.wikipedia.org/wiki/Thundering_herd_problem} thundering herd}
@@ -82,11 +87,11 @@ module Awaitable : sig
explicitly {!signal}ed and has a value other than [before].
⚠️ This operation is subject to the
- {{:https://en.wikipedia.org/wiki/ABA_problem} ABA} problem. An [await]
- for value other than [A] may not return after the awaitable is signaled
- while having the value [B], because at a later point the awaitable has
- again the value [A]. Furthermore, by the time an [await] for value other
- than [A] returns, the awaitable might already again have the value [A].
+ {{:https://en.wikipedia.org/wiki/ABA_problem} ABA} problem. An [await] for
+ value other than [A] may not return after the awaitable is signaled while
+ having the value [B], because at a later point the awaitable has again the
+ value [A]. Furthermore, by the time an [await] for value other than [A]
+ returns, the awaitable might already again have the value [A].
⚠️ Atomic operations that change the value of an awaitable do not
implicitly wake up awaiters. *)
@@ -105,13 +110,13 @@ module Awaitable : sig
FIFO associated with the awaitable, and returns the awaiter. *)
val remove : t -> unit
- (** [remove awaiter] marks the awaiter as having been signaled and removes it
- from the FIFO associated with the awaitable.
+ (** [remove awaiter] marks the awaiter as having been signaled and removes
+ it from the FIFO associated with the awaitable.
ℹ️ If the associated trigger is used with only one awaiter and the
{!Trigger.await await} on the trigger returns [None], there is no need
- to explicitly remove the awaiter, because it has already been
- removed. *)
+ to explicitly remove the awaiter, because it has already been removed.
+ *)
end
end
@@ -149,23 +154,23 @@ end
]}
The above mutex outperforms most other mutexes under both no/low and high
- contention scenarios. In no/low contention scenarios the use of
- {{!Awaitable.fetch_and_add} [fetch_and_add]} provides low overhead. In high
+ contention scenarios. In no/low contention scenarios the use of
+ {{!Awaitable.fetch_and_add} [fetch_and_add]} provides low overhead. In high
contention scenarios the above mutex allows unfairness, which avoids
performance degradation due to the
{{:https://en.wikipedia.org/wiki/Lock_convoy} lock convoy} phenomena.
{2 [Condition]}
- Let's also implement a condition variable. For that we'll also make use of
+ Let's also implement a condition variable. For that we'll also make use of
low level abstractions and operations from the {!Picos} core library:
{[
# open Picos
]}
- To implement a condition variable, we'll use the {{!Awaitable.Awaiter}
- [Awaiter]} API:
+ To implement a condition variable, we'll use the
+ {{!Awaitable.Awaiter} [Awaiter]} API:
{[
module Condition = struct
@@ -196,5 +201,5 @@ end
]}
Notice that the awaitable location used in the above condition variable
- implementation is never mutated. We just reuse the signaling mechanism of
+ implementation is never mutated. We just reuse the signaling mechanism of
awaitables. *)
File "lib/picos_mux.thread/picos_mux_thread.mli", line 1, characters 0-0:
diff --git a/_build/default/lib/picos_mux.thread/picos_mux_thread.mli b/_build/default/lib/picos_mux.thread/.formatted/picos_mux_thread.mli
index 46e4519..4d00bcb 100644
--- a/_build/default/lib/picos_mux.thread/picos_mux_thread.mli
+++ b/_build/default/lib/picos_mux.thread/.formatted/picos_mux_thread.mli
@@ -3,12 +3,12 @@
ℹ️ This scheduler implementation is mostly meant as an example and for use in
testing libraries implemented in {!Picos}.
- ⚠️ This scheduler uses {!Picos_io_select} internally. If running multiple
+ ⚠️ This scheduler uses {!Picos_io_select} internally. If running multiple
threads that each run this scheduler, {!Picos_io_select.configure} must be
called by the main thread before creating other threads.
⚠️ This scheduler is probably suitable for simple applications that do not
- spawn a lot of fibers. If an application uses a lot of short lived fibers,
+ spawn a lot of fibers. If an application uses a lot of short lived fibers,
then a more sophisticated scheduler implementation using some sort of thread
pool will likely perform significantly better.
File "lib/picos_mux.random/picos_mux_random.mli", line 1, characters 0-0:
diff --git a/_build/default/lib/picos_mux.random/picos_mux_random.mli b/_build/default/lib/picos_mux.random/.formatted/picos_mux_random.mli
index b4d3173..6364c22 100644
--- a/_build/default/lib/picos_mux.random/picos_mux_random.mli
+++ b/_build/default/lib/picos_mux.random/.formatted/picos_mux_random.mli
@@ -4,15 +4,15 @@
ℹ️ This scheduler implementation is specifically intended for testing
libraries implemented in Picos.
- ⚠️ This scheduler uses {!Picos_io_select} internally. If running multiple
+ ⚠️ This scheduler uses {!Picos_io_select} internally. If running multiple
threads that each run this scheduler, {!Picos_io_select.configure} must be
called by the main thread before creating other threads.
{!Picos} is an interface that allows schedulers to make scheduling decisions
- freely. After each effect this scheduler picks the next fiber to run
- randomly from the collection of ready fibers. This can help to discover
- bugs in programs implemented in Picos that make invalid scheduling
- assumptions. *)
+ freely. After each effect this scheduler picks the next fiber to run
+ randomly from the collection of ready fibers. This can help to discover bugs
+ in programs implemented in Picos that make invalid scheduling assumptions.
+*)
open Picos
@@ -20,12 +20,12 @@ type t
(** Represents a shared context for randomized runners. *)
val context : ?fatal_exn_handler:(exn -> unit) -> unit -> t
-(** [context ()] creates a new context for randomized runners. The context
+(** [context ()] creates a new context for randomized runners. The context
should be consumed by a call of {{!run} [run ~context ...]}. *)
val runner_on_this_thread : t -> unit
(** [runner_on_this_thread context] starts a runner on the current thread to run
- fibers on the context. The runner returns when {{!run} [run ~context ...]}
+ fibers on the context. The runner returns when {{!run} [run ~context ...]}
returns. *)
val run_fiber : ?context:t -> Fiber.t -> (Fiber.t -> unit) -> unit
@@ -33,11 +33,11 @@ val run_fiber : ?context:t -> Fiber.t -> (Fiber.t -> unit) -> unit
returns [main] and all of the fibers spawned by [main] have returned.
The optional [context] argument specifies a context in which to run the
- [main] program. If unspecified, a new context is automatically created and
- the scheduler will be single-threaded. By {{!context} creating a context},
+ [main] program. If unspecified, a new context is automatically created and
+ the scheduler will be single-threaded. By {{!context} creating a context},
spawning concurrent or parallel {{!runner_on_this_thread} runners} on to the
context, and then explicitly passing the context to [run ~context ...] one
- can create a multi-threaded scheduler. Only a single call of {!run} per
+ can create a multi-threaded scheduler. Only a single call of {!run} per
context is allowed. *)
val run : ?context:t -> ?forbid:bool -> (unit -> 'a) -> 'a
File "lib/picos_aux.htbl/picos_aux_htbl.mli", line 1, characters 0-0:
diff --git a/_build/default/lib/picos_aux.htbl/picos_aux_htbl.mli b/_build/default/lib/picos_aux.htbl/.formatted/picos_aux_htbl.mli
index c8fcad9..1da7e7a 100644
--- a/_build/default/lib/picos_aux.htbl/picos_aux_htbl.mli
+++ b/_build/default/lib/picos_aux.htbl/.formatted/picos_aux_htbl.mli
@@ -1,13 +1,13 @@
(** Lock-free hash table.
The operations provided by this hash table are designed to work as building
- blocks of non-blocking algorithms. Specifically, the operation signatures
+ blocks of non-blocking algorithms. Specifically, the operation signatures
and semantics are designed to allow building
{{:https://dl.acm.org/doi/10.1145/62546.62593} consensus protocols over
- arbitrary numbers of processes}.
+ arbitrary numbers of processes}.
🏎️ Single key reads with this hash table are actually wait-free rather than
- just lock-free. Internal resizing automatically uses all the threads that
+ just lock-free. Internal resizing automatically uses all the threads that
are trying to write to the hash table. *)
(** {1 API} *)
@@ -29,9 +29,10 @@ val create :
table.
- The optional [hashed_type] argument can and usually should be used to
- specify the [equal] and [hash] operations on keys. Slow polymorphic
- equality [(=)] and slow polymorphic {{!Stdlib.Hashtbl.seeded_hash} [seeded_hash (Bits64.to_int (Random.bits64 ()))]}
- is used by default.
+ specify the [equal] and [hash] operations on keys. Slow polymorphic
+ equality [(=)] and slow polymorphic
+ {{!Stdlib.Hashtbl.seeded_hash}
+ [seeded_hash (Bits64.to_int (Random.bits64 ()))]} is used by default.
- The default [min_buckets] is unspecified and a given [min_buckets] may be
adjusted by the implementation.
- The default [max_buckets] is unspecified and a given [max_buckets] may be
@@ -59,8 +60,8 @@ val find_exn : ('k, 'v) t -> 'k -> 'v
(** [find_exn htbl key] returns the current binding of [key] in the hash table
[htbl] or raises {!Not_found} if no such binding exists.
- @raise Not_found in case no binding of [key] exists in the hash table
- [htbl]. *)
+ @raise Not_found
+ in case no binding of [key] exists in the hash table [htbl]. *)
val mem : ('k, 'v) t -> 'k -> bool
(** [mem htbl key] determines whether the hash table [htbl] has a binding for
@@ -70,57 +71,57 @@ val mem : ('k, 'v) t -> 'k -> bool
val try_add : ('k, 'v) t -> 'k -> 'v -> bool
(** [try_add htbl key value] tries to add a new binding of [key] to [value] to
- the hash table [htbl]. Returns [true] on success and [false] in case the
+ the hash table [htbl]. Returns [true] on success and [false] in case the
hash table already contained a binding for [key]. *)
(** {2 Updating bindings} *)
val try_set : ('k, 'v) t -> 'k -> 'v -> bool
(** [try_set htbl key value] tries to update an existing binding of [key] to
- [value] in the hash table [htbl]. Returns [true] on success and [false] in
+ [value] in the hash table [htbl]. Returns [true] on success and [false] in
case the hash table did not contain a binding for [key]. *)
val try_compare_and_set : ('k, 'v) t -> 'k -> 'v -> 'v -> bool
(** [try_compare_and_set htbl key before after] tries to update an existing
binding of [key] from the [before] value to the [after] value in the hash
- table [htbl]. Returns [true] on success and [false] in case the hash table
+ table [htbl]. Returns [true] on success and [false] in case the hash table
did not contain a binding of [key] to the [before] value.
- ℹ️ The values are compared using physical equality, i.e. the [==]
- operator. *)
+ ℹ️ The values are compared using physical equality, i.e. the [==] operator.
+*)
val set_exn : ('k, 'v) t -> 'k -> 'v -> 'v
(** [set_exn htbl key after] tries to update an existing binding of [key] from
- some [before] value to the [after] value in the hash table [htbl]. Returns
+ some [before] value to the [after] value in the hash table [htbl]. Returns
the [before] value on success or raises {!Not_found} if no such binding
exists.
- @raise Not_found in case no binding of [key] exists in the hash table
- [htbl]. *)
+ @raise Not_found
+ in case no binding of [key] exists in the hash table [htbl]. *)
(** {2 Removing bindings} *)
val try_remove : ('k, 'v) t -> 'k -> bool
(** [try_remove htbl key] tries to remove a binding of [key] from the hash table
- [htbl]. Returns [true] on success and [false] in case the hash table did
- not contain a binding for [key]. *)
+ [htbl]. Returns [true] on success and [false] in case the hash table did not
+ contain a binding for [key]. *)
val try_compare_and_remove : ('k, 'v) t -> 'k -> 'v -> bool
(** [try_compare_and_remove htbl key before] tries to remove a binding of [key]
- to the [before] value from the hash table [htbl]. Returns [true] on success
+ to the [before] value from the hash table [htbl]. Returns [true] on success
and [false] in case the hash table did not contain a binding of [key] to the
[before] value.
- ℹ️ The values are compared using physical equality, i.e. the [==]
- operator. *)
+ ℹ️ The values are compared using physical equality, i.e. the [==] operator.
+*)
val remove_exn : ('k, 'v) t -> 'k -> 'v
(** [remove_exn htbl key] tries to remove a binding of [key] to some [before]
- value from the hash table [htbl]. Returns the [before] value on success or
+ value from the hash table [htbl]. Returns the [before] value on success or
raises {!Not_found} if no such binding exists.
- @raise Not_found in case no binding of [key] exists in the hash table
- [htbl]. *)
+ @raise Not_found
+ in case no binding of [key] exists in the hash table [htbl]. *)
(** {2 Examining contents} *)
@@ -198,42 +199,39 @@ val find_random_exn : ('k, 'v) t -> 'k
module Key = struct
type t = int
+
let equal = Int.equal
let hash = Fun.id
end
- let create () =
- Htbl.create ~hashed_type:(module Key) ()
+ let create () = Htbl.create ~hashed_type:(module Key) ()
let rec push t value =
let key = Int64.to_int (Random.bits64 ()) in
- if not (Htbl.try_add t key value) then
- push t value
+ if not (Htbl.try_add t key value) then push t value
let rec pop_exn t =
let key = Htbl.find_random_exn t in
- try
- Htbl.remove_exn t key
- with Not_found ->
- pop_exn t
+ try Htbl.remove_exn t key with Not_found -> pop_exn t
end
]}
First of all, as we use random bits as keys, we can use {!Fun.id} as the
- [hash] function. However, the main idea demonstrated above is that the
+ [hash] function. However, the main idea demonstrated above is that the
{!try_add} and {!remove_exn} operations can be used as building blocks of
lock-free algorithms.
{2 External reference counting}
- Let's create a simplified version of {{!Picos_aux_rc} an external reference
- counting mechanism}.
+ Let's create a simplified version of
+ {{!Picos_aux_rc} an external reference counting mechanism}.
A [Resource] is hashed type whose values need to be disposed:
{[
module type Resource = sig
include Hashtbl.HashedType
+
val dispose : t -> unit
end
]}
@@ -255,8 +253,7 @@ val find_random_exn : ('k, 'v) t -> 'k
let rc_of = Htbl.create ~hashed_type:(module Resource) ()
let create t =
- if Htbl.try_add rc_of t 1 then t
- else invalid_arg "already created"
+ if Htbl.try_add rc_of t 1 then t else invalid_arg "already created"
let unsafe_get = Fun.id
@@ -269,29 +266,24 @@ val find_random_exn : ('k, 'v) t -> 'k
do
backoff := Backoff.once !backoff
done
- with Not_found ->
- invalid_arg "already disposed"
+ with Not_found -> invalid_arg "already disposed"
let rec decr t backoff =
match Htbl.find_exn rc_of t with
| n ->
- if n < 2 then
- if Htbl.try_compare_and_remove rc_of t n then
- Resource.dispose t
- else
- decr t (Backoff.once backoff)
- else
- if not (Htbl.try_compare_and_set rc_of t n (n - 1)) then
+ if n < 2 then
+ if Htbl.try_compare_and_remove rc_of t n then Resource.dispose t
+ else decr t (Backoff.once backoff)
+ else if not (Htbl.try_compare_and_set rc_of t n (n - 1)) then
decr t (Backoff.once backoff)
- | exception Not_found ->
- invalid_arg "already disposed"
+ | exception Not_found -> invalid_arg "already disposed"
let decr t = decr t Backoff.default
end
]}
Once again we use lock-free retry loops to update the hash table holding
- reference counts of resources. Notice that in [decr] we can conveniently
- remove the entire binding from the hash table and avoid leaks. This time we
+ reference counts of resources. Notice that in [decr] we can conveniently
+ remove the entire binding from the hash table and avoid leaks. This time we
also use a backoff mechanism, because, unlike with the lock-free bag, we
don't use randomized keys. *)
File "lib/picos_io.select/picos_io_select.mli", line 1, characters 0-0:
diff --git a/_build/default/lib/picos_io.select/picos_io_select.mli b/_build/default/lib/picos_io.select/.formatted/picos_io_select.mli
index 3260908..73bc52e 100644
--- a/_build/default/lib/picos_io.select/picos_io_select.mli
+++ b/_build/default/lib/picos_io.select/.formatted/picos_io_select.mli
@@ -3,11 +3,11 @@
The operations in this module automatically manage a {!Thread} per domain
that runs a {!Unix.select} loop to support the operations.
- ⚠️ Signal handlers are unfortunately fundamentally non-compositional. The
- use of signal handlers in this module has been designed to be {{!configure}
- configurable}, which should allow co-operating with other libraries using
- signals as long as care is taken at application startup to {!configure}
- things.
+ ⚠️ Signal handlers are unfortunately fundamentally non-compositional. The use
+ of signal handlers in this module has been designed to be
+ {{!configure} configurable}, which should allow co-operating with other
+ libraries using signals as long as care is taken at application startup to
+ {!configure} things.
⚠️ All the usual limitations of the {!Unix} module apply. *)
@@ -22,7 +22,7 @@ val cancel_after :
_ Computation.t -> seconds:float -> exn -> Printexc.raw_backtrace -> unit
(** [cancel_after computation ~seconds exn bt] arranges for [computation] to be
{{!Picos.Computation.cancel} canceled} with given exception and backtrace
- after given time in [seconds]. Completion of the [computation] before the
+ after given time in [seconds]. Completion of the [computation] before the
specified time effectively cancels the timeout.
ℹ️ You can use [cancel_after] to implement the handler for the
@@ -39,7 +39,7 @@ val return_on :
'a Computation.t -> Picos_io_fd.t -> [ `R | `W | `E ] -> 'a -> unit
(** [return_on computation fd op value] arranges for [computation] to be
{{!Picos.Computation.return} returned} with given [value] when [fd] becomes
- available for [op]. Completion of the [computation] before the [fd] becomes
+ available for [op]. Completion of the [computation] before the [fd] becomes
available for [op] effectively cancels the arrangement.
ℹ️ Using {!Unix.set_nonblock} and [return_on] you can implement direct-style
@@ -59,28 +59,28 @@ module Intr : sig
other purposes.
⚠️ Beware that signal handling in OCaml 5.0.0 is known to be broken and
- several fixes were included in OCaml {{:https://ocaml.org/releases/5.1.0}
- 5.1.0}. *)
+ several fixes were included in OCaml
+ {{:https://ocaml.org/releases/5.1.0} 5.1.0}. *)
type t
(** Represents an optional interrupt request. *)
val nothing : t
- (** A constant for a no request. {{!clr} [clr nothing]} does nothing. *)
+ (** A constant for a no request. {{!clr} [clr nothing]} does nothing. *)
val req : seconds:float -> t
(** [req ~seconds] requests an interrupt in the form of a signal delivered to
the thread that made the request within the specified number of [seconds].
- Blocking {!Unix} IO calls typically raise an error with the {{!Unix.EINTR}
- [Unix.EINTR]} error code when they are interrupted by a signal.
- Regardless of whether the signal gets triggered or a system call gets
- interrupted, the request must be {{!clr} cleared} exactly once!
+ Blocking {!Unix} IO calls typically raise an error with the
+ {{!Unix.EINTR} [Unix.EINTR]} error code when they are interrupted by a
+ signal. Regardless of whether the signal gets triggered or a system call
+ gets interrupted, the request must be {{!clr} cleared} exactly once!
⚠️ Due to limitations of the OCaml system modules and unlike with typical
timeout mechanisms, the interrupt may also be triggered sooner. *)
val clr : t -> unit
- (** [clr req] either cancels or acknowledges the interrupt request. Every
+ (** [clr req] either cancels or acknowledges the interrupt request. Every
{!req} must be cleared exactly once! *)
end
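For illustration, a minimal sketch of the request/clear discipline, using a hypothetical [read_intr] helper and assuming the calling thread has already been {!configure}d; whatever happens, the request is cleared exactly once:
{[
  (* Hypothetical helper: read with a best-effort timeout via [Intr]. *)
  let read_intr fd buf ~seconds =
    let req = Picos_io_select.Intr.req ~seconds in
    match Unix.read fd buf 0 (Bytes.length buf) with
    | n ->
      Picos_io_select.Intr.clr req;
      Some n
    | exception Unix.Unix_error (Unix.EINTR, _, _) ->
      (* The blocking call was interrupted by the requested signal. *)
      Picos_io_select.Intr.clr req;
      None
    | exception exn ->
      Picos_io_select.Intr.clr req;
      raise exn
]}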
@@ -89,7 +89,7 @@ end
val return_on_sigchld : 'a Computation.t -> 'a -> unit
(** [return_on_sigchld computation value] arranges for [computation] to be
{{!Picos.Computation.return} returned} with given [value] on next
- {!Sys.sigchld}. Completion of the [computation] before a {!Sys.sigchld} is
+ {!Sys.sigchld}. Completion of the [computation] before a {!Sys.sigchld} is
received effectively cancels the arrangement.
⚠️ The mechanism uses the {!Sys.sigchld} signal which should not be used for
@@ -107,21 +107,21 @@ val configure :
by an application to configure the use of signals by this module.
The optional [intr_sig] argument can be used to specify the signal used by
- the {{!Intr} interrupt} mechanism. The default is to use {!Sys.sigusr2}.
+ the {{!Intr} interrupt} mechanism. The default is to use {!Sys.sigusr2}.
The optional [handle_sigchld] argument can be used to specify whether this
- module should setup handling of {!Sys.sigchld}. The default is [true].
- When explicitly specified as [~handle_sigchld:false], the application should
+ module should setup handling of {!Sys.sigchld}. The default is [true]. When
+ explicitly specified as [~handle_sigchld:false], the application should
arrange to call {!handle_signal} whenever a {!Sys.sigchld} signal occurs.
The optional [ignore_sigpipe] argument can be used to specify whether
- {!Sys.sigpipe} will be configured to be ignored or not. The default is
+ {!Sys.sigpipe} will be configured to be ignored or not. The default is
[true].
- ⚠️ This module must always be configured before use. Unless this module has
+ ⚠️ This module must always be configured before use. Unless this module has
been explicitly configured, calling a method of this module from the main
thread on the main domain will automatically configure this module with
- default options. In case the application uses multiple threads or multiple
+ default options. In case the application uses multiple threads or multiple
domains, the application should arrange to call [configure] from the main
thread on the main domain before any threads or domains besides the main are
created or spawned. *)
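A minimal startup sketch, assuming the default signal choices are acceptable for the application and using a hypothetical [main] entry point:
{[
  let main () =
    (* Configure from the main thread on the main domain before any other
       threads or domains are created. *)
    Picos_io_select.configure ();
    (* ... spawn domains and start schedulers here ... *)
    ()
]}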
@@ -129,14 +129,14 @@ val configure :
val check_configured : unit -> unit
(** [check_configured ()] checks whether this module has already been
{{!configure} configured} or not and, if not, calls {!configure} with
- default arguments. In either case, calling [check_configured ()] will
+ default arguments. In either case, calling [check_configured ()] will
(re)configure signal handling for the current thread and perform other
required initialization for the thread to use this module.
⚠️ This should be called at the start of every thread using this module.
ℹ️ The intended use case for [check_configured ()] is at the point of entry
- of schedulers and other facilities that use this module. In other words,
+ of schedulers and other facilities that use this module. In other words,
application code should ideally not need to call this directly. *)
val handle_signal : int -> unit
File "lib/picos_mux.fifo/picos_mux_fifo.ml", line 1, characters 0-0:
diff --git a/_build/default/lib/picos_mux.fifo/picos_mux_fifo.ml b/_build/default/lib/picos_mux.fifo/.formatted/picos_mux_fifo.ml
index 0c77b19..b1d0883 100644
--- a/_build/default/lib/picos_mux.fifo/picos_mux_fifo.ml
+++ b/_build/default/lib/picos_mux.fifo/.formatted/picos_mux_fifo.ml
@@ -96,8 +96,11 @@ let run_fiber ?quota ?fatal_exn_handler fiber main =
{
exnc = (match fatal_exn_handler with None -> raise | Some exnc -> exnc);
effc =
- (fun (type a) (e : a Effect.t) :
- ((a, _) Effect.Deep.continuation -> _) option ->
+ (fun (type a)
+ (e : a Effect.t)
+ :
+ ((a, _) Effect.Deep.continuation -> _) option
+ ->
match e with
| Fiber.Current -> t.current
| Fiber.Spawn r ->
File "lib/picos_std.finally/picos_std_finally.mli", line 1, characters 0-0:
diff --git a/_build/default/lib/picos_std.finally/picos_std_finally.mli b/_build/default/lib/picos_std.finally/.formatted/picos_std_finally.mli
index 67cdf72..ab1e3ad 100644
--- a/_build/default/lib/picos_std.finally/picos_std_finally.mli
+++ b/_build/default/lib/picos_std.finally/.formatted/picos_std_finally.mli
@@ -4,12 +4,14 @@
it is no longer needed.
⚠️ Beware that the Stdlib {{!Fun.protect} [Fun.protect ~finally]} helper does
- not {{!Picos_std_structured.Control.protect} protect against cancelation
- propagation} when it calls [finally ()]. This means that cancelable
+ not
+ {{!Picos_std_structured.Control.protect} protect against cancelation
+ propagation} when it calls [finally ()]. This means that cancelable
operations performed by [finally] may be terminated and resources might be
- leaked. So, if you want to avoid resource leaks, you should either use
- {!lastly} or explicitly {{!Picos_std_structured.Control.protect} protect
- against cancelation propagation}.
+ leaked. So, if you want to avoid resource leaks, you should either use
+ {!lastly} or explicitly
+ {{!Picos_std_structured.Control.protect} protect against cancelation
+ propagation}.
We open both this library and a few other libraries
@@ -38,64 +40,70 @@ val finally : ('r -> unit) -> (unit -> 'r) -> ('r -> 'a) -> 'a
calls [scope resource], and then calls [release resource] after the scope
exits.
- ℹ️ {{!Picos_std_structured.Control.protect} Cancelation propagation will be
- forbidden} during the call of [release]. *)
+ ℹ️
+ {{!Picos_std_structured.Control.protect} Cancelation propagation will be
+ forbidden} during the call of [release]. *)
val lastly : (unit -> unit) -> (unit -> 'a) -> 'a
(** [lastly action scope] is equivalent to
{{!finally} [finally action Fun.id scope]}.
- ℹ️ {{!Picos_std_structured.Control.protect} Cancelation propagation will be
- forbidden} during the call of [action]. *)
+ ℹ️
+ {{!Picos_std_structured.Control.protect} Cancelation propagation will be
+ forbidden} during the call of [action]. *)
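As a small hypothetical sketch, assuming this library is open and using a file name of our choosing, [finally] scopes the output channel while [lastly] runs an action when the scope exits:
{[
  let with_log () =
    (* The channel is closed even if the scope raises. *)
    let@ oc = finally close_out @@ fun () -> open_out "app.log" in
    (* The action runs after the scope, before the channel is closed. *)
    lastly (fun () -> print_endline "closing app.log") @@ fun () ->
    output_string oc "hello\n"
]}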
(** {2 Instances} *)
type 'r instance
-(** Either contains a resource or is empty as the resource has been {{!transfer}
- transferred}, {{!drop} dropped}, or has been {{!borrow} borrowed}
- temporarily. *)
+(** Either contains a resource or is empty as the resource has been
+ {{!transfer} transferred}, {{!drop} dropped}, or has been
+ {{!borrow} borrowed} temporarily. *)
val instantiate : ('r -> unit) -> (unit -> 'r) -> ('r instance -> 'a) -> 'a
(** [instantiate release acquire scope] calls [acquire ()] to obtain a resource
- and stores it as an {!instance}, calls [scope instance]. Then, if [scope]
- returns normally, awaits until the {!instance} becomes empty. In case
+ and stores it as an {!instance}, calls [scope instance]. Then, if [scope]
+ returns normally, awaits until the {!instance} becomes empty. In case
[scope] raises an exception or the fiber is canceled, the instance will be
{{!drop} dropped}.
- ℹ️ {{!Picos_std_structured.Control.protect} Cancelation propagation will be
- forbidden} during the call of [release]. *)
+ ℹ️
+ {{!Picos_std_structured.Control.protect} Cancelation propagation will be
+ forbidden} during the call of [release]. *)
val drop : 'r instance -> unit
(** [drop instance] releases the resource, if any, contained by the {!instance}.
- @raise Invalid_argument if the resource has been {{!let&} borrowed} and
- hasn't yet been returned. *)
+ @raise Invalid_argument
+ if the resource has been {{!let&} borrowed} and hasn't yet been returned.
+*)
val borrow : 'r instance -> ('r -> 'a) -> 'a
(** [borrow instance scope] borrows the [resource] stored in the [instance],
calls [scope resource], and then returns the [resource] to the [instance]
after scope exits.
- @raise Invalid_argument if the resource has already been {{!borrow}
- borrowed} and hasn't yet been returned, has already been {{!drop}
- dropped}, or has already been {{!transfer} transferred}. *)
+ @raise Invalid_argument
+ if the resource has already been {{!borrow} borrowed} and hasn't yet been
+ returned, has already been {{!drop} dropped}, or has already been
+ {{!transfer} transferred}. *)
val transfer : 'r instance -> ('r instance -> 'a) -> 'a
(** [transfer source] transfers the [resource] stored in the [source] instance
- into a new [target] instance, calls [scope target]. Then, if [scope]
- returns normally, awaits until the [target] instance becomes empty. In case
- [scope] raises an exception or the fiber is canceled, the [target] instance
- will be {{!drop} dropped}.
+ into a new [target] instance, calls [scope target]. Then, if [scope] returns
+ normally, awaits until the [target] instance becomes empty. In case [scope]
+ raises an exception or the fiber is canceled, the [target] instance will be
+ {{!drop} dropped}.
- @raise Invalid_argument if the resource has been {{!borrow} borrowed} and
- hasn't yet been returned, has already been {{!transfer} transferred}, or
- has been {{!drop} dropped} unless the current fiber has been canceled, in
- which case the exception that the fiber was canceled with will be
- raised. *)
+ @raise Invalid_argument
+ if the resource has been {{!borrow} borrowed} and hasn't yet been
+ returned, has already been {{!transfer} transferred}, or has been
+ {{!drop} dropped} unless the current fiber has been canceled, in which
+ case the exception that the fiber was canceled with will be raised. *)
val move : 'r instance -> ('r -> 'a) -> 'a
(** [move instance scope] is equivalent to
- {{!transfer} [transfer instance (fun instance -> borrow instance scope)]}. *)
+ {{!transfer} [transfer instance (fun instance -> borrow instance scope)]}.
+*)
(** {1 Examples}
@@ -107,13 +115,11 @@ val move : 'r instance -> ('r -> 'a) -> 'a
{[
let recursive_server server_fd =
Flock.join_after @@ fun () ->
-
(* recursive server *)
let rec accept () =
let@ client_fd =
finally Unix.close @@ fun () ->
- Unix.accept ~cloexec:true server_fd
- |> fst
+ Unix.accept ~cloexec:true server_fd |> fst
in
(* fork to accept other clients *)
@@ -136,59 +142,55 @@ val move : 'r instance -> ('r -> 'a) -> 'a
{[
let looping_server server_fd =
Flock.join_after @@ fun () ->
-
(* loop to accept clients *)
while true do
let@ client_fd =
instantiate Unix.close @@ fun () ->
- Unix.accept ~cloexec:true server_fd
- |> fst
+ Unix.accept ~cloexec:true server_fd |> fst
in
(* fork to handle this client *)
Flock.fork @@ fun () ->
- let@ client_fd = move client_fd in
+ let@ client_fd = move client_fd in
- (* handle client... omitted *)
- ()
+ (* handle client... omitted *)
+ ()
done
]}
{2 Move resource from child to parent}
You can {{!move} move} an {{!instantiate} instantiated} resource between any
- two fibers and {{!borrow} borrow} it before moving it. For example, you can
+ two fibers and {{!borrow} borrow} it before moving it. For example, you can
create a resource in a child fiber, use it there, and then move it to the
parent fiber:
{[
let move_from_child_to_parent () =
Flock.join_after @@ fun () ->
-
(* for communicating a resource *)
let shared_ivar = Ivar.create () in
(* fork a child that creates a resource *)
- Flock.fork begin fun () ->
- let pretend_release () = ()
- and pretend_acquire () = () in
+ Flock.fork
+ begin
+ fun () ->
+ let pretend_release () = () and pretend_acquire () = () in
- (* allocate a resource *)
- let@ instance =
- instantiate pretend_release pretend_acquire
- in
+ (* allocate a resource *)
+ let@ instance = instantiate pretend_release pretend_acquire in
- begin
- (* borrow the resource *)
- let@ resource = borrow instance in
+ begin
+ (* borrow the resource *)
+ let@ resource = borrow instance in
- (* use the resource... omitted *)
- ()
- end;
+ (* use the resource... omitted *)
+ ()
+ end;
- (* send the resource to the parent *)
- Ivar.fill shared_ivar instance
- end;
+ (* send the resource to the parent *)
+ Ivar.fill shared_ivar instance
+ end;
(* await for a resource from the child and own it *)
let@ resource = Ivar.read shared_ivar |> move in
@@ -198,5 +200,5 @@ val move : 'r instance -> ('r -> 'a) -> 'a
]}
The above uses an {{!Picos_std_sync.Ivar} [Ivar]} to communicate the movable
- resource from the child fiber to the parent fiber. Any concurrency safe
+ resource from the child fiber to the parent fiber. Any concurrency safe
mechanism could be used. *)
File "lib/picos_mux.random/picos_mux_random.ml", line 1, characters 0-0:
diff --git a/_build/default/lib/picos_mux.random/picos_mux_random.ml b/_build/default/lib/picos_mux.random/.formatted/picos_mux_random.ml
index 9948b41..a999de8 100644
--- a/_build/default/lib/picos_mux.random/picos_mux_random.ml
+++ b/_build/default/lib/picos_mux.random/.formatted/picos_mux_random.ml
@@ -219,8 +219,11 @@ let context ?fatal_exn_handler () =
next p t);
exnc;
effc =
- (fun (type a) (e : a Effect.t) :
- ((a, _) Effect.Deep.continuation -> _) option ->
+ (fun (type a)
+ (e : a Effect.t)
+ :
+ ((a, _) Effect.Deep.continuation -> _) option
+ ->
match e with
| Fiber.Current -> t.current
| Fiber.Spawn r ->
File "lib/picos_std.sync/picos_std_sync.mli", line 1, characters 0-0:
diff --git a/_build/default/lib/picos_std.sync/picos_std_sync.mli b/_build/default/lib/picos_std.sync/.formatted/picos_std_sync.mli
index c4cd77d..ae3ed1b 100644
--- a/_build/default/lib/picos_std.sync/picos_std_sync.mli
+++ b/_build/default/lib/picos_std.sync/.formatted/picos_std_sync.mli
@@ -18,16 +18,17 @@ open Picos_std_event
module Mutex : sig
(** A mutual-exclusion lock or mutex.
- ℹ️ This intentionally mimics the interface of {!Stdlib.Mutex}. Unlike with
+ ℹ️ This intentionally mimics the interface of {!Stdlib.Mutex}. Unlike with
the standard library mutex, blocking on this mutex potentially allows an
effects based scheduler to run other fibers on the thread.
🏎️ The optional [checked] argument taken by most of the operations defaults
- to [true]. When explicitly specified as [~checked:false] the mutex
- implementation may avoid having to obtain the {{!Picos.Fiber.current}
- current fiber}, which can be expensive relative to locking or unlocking an
- uncontested mutex. Note that specifying [~checked:false] on an operation
- may prevent error checking also on a subsequent operation. *)
+ to [true]. When explicitly specified as [~checked:false] the mutex
+ implementation may avoid having to obtain the
+ {{!Picos.Fiber.current} current fiber}, which can be expensive relative to
+ locking or unlocking an uncontested mutex. Note that specifying
+ [~checked:false] on an operation may prevent error checking also on a
+ subsequent operation. *)
type t
(** Represents a mutual-exclusion lock or mutex. *)
@@ -40,30 +41,32 @@ module Mutex : sig
ℹ️ If the fiber has been canceled and propagation of cancelation is
allowed, this may raise the cancelation exception before locking the
- mutex. If [~checked:false] was specified, the cancelation exception may
- or may not be raised.
+ mutex. If [~checked:false] was specified, the cancelation exception may or
+ may not be raised.
- @raise Sys_error if the mutex is already locked by the fiber. If
- [~checked:false] was specified for some previous operation on the mutex
- the exception may or may not be raised. *)
+ @raise Sys_error
+ if the mutex is already locked by the fiber. If [~checked:false] was
+ specified for some previous operation on the mutex the exception may or
+ may not be raised. *)
val try_lock : ?checked:bool -> t -> bool
- (** [try_lock mutex] locks the mutex in case the mutex is unlocked. Returns
+ (** [try_lock mutex] locks the mutex in case the mutex is unlocked. Returns
[true] on success and [false] in case the mutex was locked.
ℹ️ If the fiber has been canceled and propagation of cancelation is
allowed, this may raise the cancelation exception before locking the
- mutex. If [~checked:false] was specified, the cancelation exception may
- or may not be raised. *)
+ mutex. If [~checked:false] was specified, the cancelation exception may or
+ may not be raised. *)
val unlock : ?checked:bool -> t -> unit
(** [unlock mutex] unlocks the mutex.
ℹ️ This operation is not cancelable.
- @raise Sys_error if the mutex was locked by another fiber. If
- [~checked:false] was specified for some previous operation on the mutex
- the exception may or may not be raised. *)
+ @raise Sys_error
+ if the mutex was locked by another fiber. If [~checked:false] was
+ specified for some previous operation on the mutex the exception may or
+ may not be raised. *)
val protect : ?checked:bool -> t -> (unit -> 'a) -> 'a
(** [protect mutex thunk] locks the [mutex], runs [thunk ()], and unlocks the
@@ -71,8 +74,8 @@ module Mutex : sig
ℹ️ If the fiber has been canceled and propagation of cancelation is
allowed, this may raise the cancelation exception before locking the
- mutex. If [~checked:false] was specified, the cancelation exception may
- or may not be raised.
+ mutex. If [~checked:false] was specified, the cancelation exception may or
+ may not be raised.
@raise Sys_error for the same reasons as {!lock} and {!unlock}. *)
end
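For illustration, a minimal sketch assuming this module is in scope; the mutex is released even if the thunk raises:
{[
  let counter = ref 0
  let mutex = Mutex.create ()

  let increment () =
    Mutex.protect mutex @@ fun () ->
    (* The critical section: update and read the shared counter. *)
    incr counter;
    !counter
]}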
@@ -80,7 +83,7 @@ end
module Condition : sig
(** A condition variable.
- ℹ️ This intentionally mimics the interface of {!Stdlib.Condition}. Unlike
+ ℹ️ This intentionally mimics the interface of {!Stdlib.Condition}. Unlike
with the standard library condition variable, blocking on this condition
variable allows an effects based scheduler to run other fibers on the
thread. *)
@@ -111,7 +114,7 @@ end
module Semaphore : sig
(** {!Counting} and {!Binary} semaphores.
- ℹ️ This intentionally mimics the interface of {!Stdlib.Semaphore}. Unlike
+ ℹ️ This intentionally mimics the interface of {!Stdlib.Semaphore}. Unlike
with the standard library semaphores, blocking on these semaphores allows
an effects based scheduler to run other fibers on the thread. *)
@@ -125,7 +128,8 @@ module Semaphore : sig
(** [make initial] creates a new counting semaphore with the given [initial]
count.
- @raise Invalid_argument in case the given [initial] count is negative. *)
+ @raise Invalid_argument in case the given [initial] count is negative.
+ *)
val release : t -> unit
(** [release semaphore] increments the count of the semaphore.
@@ -143,7 +147,7 @@ module Semaphore : sig
the semaphore unless the count is already [0]. *)
val get_value : t -> int
- (** [get_value semaphore] returns the current count of the semaphore. This
+ (** [get_value semaphore] returns the current count of the semaphore. This
should only be used for debugging or informational messages. *)
end
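A hypothetical sketch assuming this module is in scope: a counting semaphore of two slots bounds how many fibers run [work] at a time:
{[
  let slots = Semaphore.Counting.make 2

  let with_slot work =
    Semaphore.Counting.acquire slots;
    match work () with
    | result ->
      Semaphore.Counting.release slots;
      result
    | exception exn ->
      (* Always give the slot back, even on failure. *)
      Semaphore.Counting.release slots;
      raise exn
]}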
@@ -175,7 +179,7 @@ end
module Lazy : sig
(** A lazy suspension.
- ℹ️ This intentionally mimics the interface of {!Stdlib.Lazy}. Unlike with
+ ℹ️ This intentionally mimics the interface of {!Stdlib.Lazy}. Unlike with
the standard library suspensions an attempt to force a suspension from
multiple fibers, possibly running on different domains, does not raise the
{!Undefined} exception. *)
@@ -200,16 +204,17 @@ module Lazy : sig
val force : 'a t -> 'a
(** [force susp] forces the suspension, i.e. computes [thunk ()] using the
[thunk] passed to {!from_fun}, stores the result of the computation to the
- suspension and reproduces its result. In case the suspension has already
+ suspension and reproduces its result. In case the suspension has already
been forced the computation is skipped and stored result is reproduced.
ℹ️ This will check whether the current fiber has been canceled before
- starting the computation of [thunk ()]. This allows the suspension to be
- forced by another fiber. However, if the fiber is canceled and the
+ starting the computation of [thunk ()]. This allows the suspension to be
+ forced by another fiber. However, if the fiber is canceled and the
cancelation exception is raised after the computation has been started,
the suspension will then store the cancelation exception.
- @raise Undefined in case the suspension is currently being forced by the
+ @raise Undefined
+ in case the suspension is currently being forced by the
{{!Picos.Fiber.current} current} fiber. *)
val force_val : 'a t -> 'a
@@ -222,10 +227,7 @@ module Lazy : sig
val map_val : ('a -> 'b) -> 'a t -> 'b t
(** [map_val fn susp] is equivalent to:
{@ocaml skip[
- if is_val susp then
- from_val (fn (force susp))
- else
- map fn susp
+ if is_val susp then from_val (fn (force susp)) else map fn susp
]} *)
end
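A minimal sketch assuming this module is in scope; the thunk runs at most once even when the suspension is forced from multiple fibers:
{[
  let table = Lazy.from_fun @@ fun () ->
    (* Pretend this is expensive to compute. *)
    Array.init 1_000 (fun i -> i * i)

  let lookup i = (Lazy.force table).(i)
]}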
@@ -233,15 +235,15 @@ module Latch : sig
(** A dynamic single-use countdown latch.
Latches are typically used for determining when a finite set of parallel
- computations is done. If the size of the set is known a priori, then the
+ computations is done. If the size of the set is known a priori, then the
latch can be initialized with the size as initial count and then each
computation just {{!decr} decrements} the latch.
If the size is unknown, i.e. it is determined dynamically, then a latch is
initialized with a count of one, the a priori known computations are
- started and then the latch is {{!decr} decremented}. When a computation
- is started, the latch is {{!try_incr} incremented}, and then {{!decr}
- decremented} once the computation has finished. *)
+ started and then the latch is {{!decr} decremented}. When a computation is
+ started, the latch is {{!try_incr} incremented}, and then
+ {{!decr} decremented} once the computation has finished. *)
type t
(** Represents a dynamic countdown latch. *)
@@ -250,8 +252,8 @@ module Latch : sig
(** [create initial] creates a new countdown latch with the specified
[initial] count.
- @raise Invalid_argument in case the specified [initial] count is
- negative. *)
+ @raise Invalid_argument in case the specified [initial] count is negative.
+ *)
val try_decr : t -> bool
(** [try_decr latch] attempts to decrement the count of the latch and returns
@@ -261,8 +263,7 @@ module Latch : sig
val decr : t -> unit
(** [decr latch] is equivalent to:
{@ocaml skip[
- if not (try_decr latch) then
- invalid_arg "zero count"
+ if not (try_decr latch) then invalid_arg "zero count"
]}
ℹ️ This operation is not cancelable.
@@ -277,8 +278,7 @@ module Latch : sig
val incr : t -> unit
(** [incr latch] is equivalent to:
{@ocaml skip[
- if not (try_incr latch) then
- invalid_arg "zero count"
+ if not (try_incr latch) then invalid_arg "zero count"
]}
@raise Invalid_argument in case the count of the latch is zero. *)
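Putting the dynamic pattern described above into code, here is a hypothetical sketch that assumes this module and [Flock] from Picos_std_structured are in scope and that a [Latch.await] operation blocks until the count reaches zero:
{[
  let run_all works =
    let latch = Latch.create 1 in
    Flock.join_after @@ fun () ->
    List.iter
      (fun work ->
        (* Count the dynamically started computation... *)
        Latch.incr latch;
        Flock.fork @@ fun () ->
        match work () with
        | () -> Latch.decr latch
        | exception exn ->
          (* ...and uncount it even on failure. *)
          Latch.decr latch;
          raise exn)
      works;
    (* Release the initial count and wait for all computations. *)
    Latch.decr latch;
    Latch.await latch
]}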
@@ -306,7 +306,7 @@ module Ivar : sig
val try_fill : 'a t -> 'a -> bool
(** [try_fill ivar value] attempts to assign the given [value] to the
- incremental variable. Returns [true] on success and [false] in case the
+ incremental variable. Returns [true] on success and [false] in case the
variable had already been poisoned or assigned a value. *)
val fill : 'a t -> 'a -> unit
@@ -315,7 +315,7 @@ module Ivar : sig
val try_poison_at : 'a t -> exn -> Printexc.raw_backtrace -> bool
(** [try_poison_at ivar exn bt] attempts to poison the incremental variable
- with the specified exception and backtrace. Returns [true] on success and
+ with the specified exception and backtrace. Returns [true] on success and
[false] in case the variable had already been poisoned or assigned a
value.
@@ -332,8 +332,8 @@ module Ivar : sig
val poison : ?callstack:int -> 'a t -> exn -> unit
(** [poison ivar exn] is equivalent to
- {{!poison_at} [poison_at ivar exn (Printexc.get_callstack n)]}
- where [n] defaults to [0]. *)
+ {{!poison_at} [poison_at ivar exn (Printexc.get_callstack n)]} where [n]
+ defaults to [0]. *)
val peek_opt : 'a t -> 'a option
(** [peek_opt ivar] either returns [Some value] in case the variable has been
@@ -343,8 +343,8 @@ module Ivar : sig
val read : 'a t -> 'a
(** [read ivar] waits until the variable is either assigned a value or the
- variable is poisoned and then returns the value or raises the
- exception. *)
+ variable is poisoned and then returns the value or raises the exception.
+ *)
val read_evt : 'a t -> 'a Event.t
(** [read_evt ivar] returns an event that can be committed to once the
@@ -356,7 +356,7 @@ module Stream : sig
Readers can {!tap} into a stream to get a {!cursor} for reading all the
values {{!push} pushed} to the stream starting from the {!cursor}
- position. Conversely, values {{!push} pushed} to a stream are lost unless
+ position. Conversely, values {{!push} pushed} to a stream are lost unless
a reader has a {!cursor} to the position in the stream. *)
type !'a t
@@ -380,26 +380,26 @@ module Stream : sig
val poison : ?callstack:int -> 'a t -> exn -> unit
(** [poison stream exn] is equivalent to
- {{!poison_at} [poison_at stream exn (Printexc.get_callstack n)]}
- where [n] defaults to [0]. *)
+ {{!poison_at} [poison_at stream exn (Printexc.get_callstack n)]} where [n]
+ defaults to [0]. *)
type !'a cursor
(** Represents a (past or current) position in a stream. *)
val tap : 'a t -> 'a cursor
- (** [tap stream] returns a {!cursor} to the current position of the
- [stream]. *)
+ (** [tap stream] returns a {!cursor} to the current position of the [stream].
+ *)
val peek_opt : 'a cursor -> ('a * 'a cursor) option
(** [peek_opt cursor] immediately returns [Some (value, next)] with the
[value] pushed to the position and a cursor to the [next] position, when
- the [cursor] points to a past position in the stream. Otherwise returns
+ the [cursor] points to a past position in the stream. Otherwise returns
[None] or raises the exception that the stream was poisoned with. *)
val read : 'a cursor -> 'a * 'a cursor
(** [read cursor] immediately returns [(value, next)] with the [value] pushed
to the position and a cursor to the [next] position, when the [cursor]
- points to a past position in the stream. If the [cursor] points to the
+ points to a past position in the stream. If the [cursor] points to the
current position of the stream, [read cursor] waits until a value is
pushed to the stream or the stream is poisoned, in which case the
exception that the stream was poisoned with will be raised. *)
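For illustration, a hypothetical single-fiber sketch assuming this module is in scope and that streams are created with [Stream.create ()]; only values pushed after the [tap] are visible through the cursor:
{[
  let demo () =
    let stream = Stream.create () in
    let cursor = Stream.tap stream in
    Stream.push stream 1;
    Stream.push stream 2;
    let a, cursor = Stream.read cursor in
    let b, _ = Stream.read cursor in
    (a, b) (* (1, 2) *)
]}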
@@ -419,6 +419,7 @@ end
{[
module Bounded_q : sig
type 'a t
+
val create : capacity:int -> 'a t
val push : 'a t -> 'a -> unit
val pop : 'a t -> 'a
@@ -432,18 +433,17 @@ end
}
let create ~capacity =
- if capacity < 0 then
- invalid_arg "negative capacity"
- else {
- mutex = Mutex.create ();
- queue = Queue.create ();
- capacity;
- not_empty = Condition.create ();
- not_full = Condition.create ();
- }
-
- let is_full_unsafe t =
- t.capacity <= Queue.length t.queue
+ if capacity < 0 then invalid_arg "negative capacity"
+ else
+ {
+ mutex = Mutex.create ();
+ queue = Queue.create ();
+ capacity;
+ not_empty = Condition.create ();
+ not_full = Condition.create ();
+ }
+
+ let is_full_unsafe t = t.capacity <= Queue.length t.queue
let push t x =
let was_empty =
@@ -454,21 +454,18 @@ end
Queue.push x t.queue;
Queue.length t.queue = 1
in
- if was_empty then
- Condition.broadcast t.not_empty
+ if was_empty then Condition.broadcast t.not_empty
let pop t =
let elem, was_full =
Mutex.protect t.mutex @@ fun () ->
while Queue.length t.queue = 0 do
- Condition.wait
- t.not_empty t.mutex
+ Condition.wait t.not_empty t.mutex
done;
let was_full = is_full_unsafe t in
- Queue.pop t.queue, was_full
+ (Queue.pop t.queue, was_full)
in
- if was_full then
- Condition.broadcast t.not_full;
+ if was_full then Condition.broadcast t.not_full;
elem
end
]}
@@ -524,7 +521,7 @@ end
]}
Notice how the producer was able to push three elements to the queue after
- which the fourth push blocked and the consumer was started. Also, after
+ which the fourth push blocked and the consumer was started. Also, after
canceling the consumer, the queue could still be used just fine. *)
(** {1 Conventions}
@@ -532,17 +529,17 @@ end
The optional [padded] argument taken by several constructor functions, e.g.
{!Latch.create}, {!Mutex.create}, {!Condition.create},
{!Semaphore.Counting.make}, and {!Semaphore.Binary.make}, defaults to
- [false]. When explicitly specified as [~padded:true] the object is
- allocated in a way to avoid {{:https://en.wikipedia.org/wiki/False_sharing}
- false sharing}. For relatively long lived objects this can improve
- performance and make performance more stable at the cost of using more
- memory. It is not recommended to use [~padded:true] for short lived
- objects.
+ [false]. When explicitly specified as [~padded:true] the object is allocated
+ in a way to avoid
+ {{:https://en.wikipedia.org/wiki/False_sharing} false sharing}. For
+ relatively long lived objects this can improve performance and make
+ performance more stable at the cost of using more memory. It is not
+ recommended to use [~padded:true] for short lived objects.
The primitives provided by this library are generally optimized for low
- contention scenarios and size. Generally speaking, for best performance
- and scalability, you should try to avoid high contention scenarios by
+ contention scenarios and size. Generally speaking, for best performance and
+ scalability, you should try to avoid high contention scenarios by
architecting your program to distribute processing such that sequential
- bottlenecks are avoided. If high contention is unavoidable then other
+ bottlenecks are avoided. If high contention is unavoidable then other
communication and synchronization primitive implementations may provide
better performance. *)
File "lib/picos_aux.htbl/picos_aux_htbl.ml", line 1, characters 0-0:
diff --git a/_build/default/lib/picos_aux.htbl/picos_aux_htbl.ml b/_build/default/lib/picos_aux.htbl/.formatted/picos_aux_htbl.ml
index 58d5554..5d77b3d 100644
--- a/_build/default/lib/picos_aux.htbl/picos_aux_htbl.ml
+++ b/_build/default/lib/picos_aux.htbl/.formatted/picos_aux_htbl.ml
@@ -55,7 +55,7 @@ type ('k, 'v) state = {
max_buckets : int;
}
(** This record is [7 + 1] words and should be aligned on such a boundary on the
- second generation heap. It is probably not worth it to pad it further. *)
+ second generation heap. It is probably not worth it to pad it further. *)
type ('k, 'v) t = ('k, 'v) state Atomic.t
@@ -419,8 +419,8 @@ type ('v, _, _) op =
| Exists : ('v, _, bool) op
| Return : ('v, _, 'v) op
-let rec try_reassoc :
- type v c r. (_, v) t -> _ -> c -> v -> (v, c, r) op -> _ -> r =
+let rec try_reassoc : type v c r.
+ (_, v) t -> _ -> c -> v -> (v, c, r) op -> _ -> r =
fun t key present future op backoff ->
let r = Atomic.get t in
let h = r.hash key in
@@ -452,20 +452,20 @@ let rec try_reassoc :
else try_reassoc t key present future op (Backoff.once backoff)
else not_found op
else
- let[@tail_mod_cons] rec reassoc :
- type v c r.
+ let[@tail_mod_cons] rec reassoc : type v c r.
_ -> _ -> c -> v -> (v, c, r) op -> (_, v, 't) tdt -> (_, v, 't) tdt
=
fun t key present future op -> function
- | Nil -> raise_notrace Not_found
- | Cons r ->
- if t key r.key then
- match op with
- | Exists | Return -> Cons { r with value = future }
- | Compare ->
- if r.value == present then Cons { r with value = future }
- else raise_notrace Not_found
- else Cons { r with rest = reassoc t key present future op r.rest }
+ | Nil -> raise_notrace Not_found
+ | Cons r ->
+ if t key r.key then
+ match op with
+ | Exists | Return -> Cons { r with value = future }
+ | Compare ->
+ if r.value == present then Cons { r with value = future }
+ else raise_notrace Not_found
+ else
+ Cons { r with rest = reassoc t key present future op r.rest }
in
match reassoc r.equal key present future op cons_r.rest with
| rest ->
@@ -522,19 +522,18 @@ let rec try_dissoc : type v c r. (_, v) t -> _ -> c -> (v, c, r) op -> _ -> r =
else try_dissoc t key present op (Backoff.once backoff)
else not_found op
else
- let[@tail_mod_cons] rec dissoc :
- type v c r.
+ let[@tail_mod_cons] rec dissoc : type v c r.
_ -> _ -> c -> (v, c, r) op -> (_, v, 't) tdt -> (_, v, 't) tdt =
fun t key present op -> function
- | Nil -> raise_notrace Not_found
- | Cons r ->
- if t key r.key then
- match op with
- | Exists | Return -> r.rest
- | Compare ->
- if r.value == present then r.rest
- else raise_notrace Not_found
- else Cons { r with rest = dissoc t key present op r.rest }
+ | Nil -> raise_notrace Not_found
+ | Cons r ->
+ if t key r.key then
+ match op with
+ | Exists | Return -> r.rest
+ | Compare ->
+ if r.value == present then r.rest
+ else raise_notrace Not_found
+ else Cons { r with rest = dissoc t key present op r.rest }
in
match dissoc r.equal key present op cons_r.rest with
| (Nil | Cons _) as rest ->
File "lib/picos_mux.multififo/picos_mux_multififo.ml", line 1, characters 0-0:
diff --git a/_build/default/lib/picos_mux.multififo/picos_mux_multififo.ml b/_build/default/lib/picos_mux.multififo/.formatted/picos_mux_multififo.ml
index 141ff9d..e96b4e6 100644
--- a/_build/default/lib/picos_mux.multififo/picos_mux_multififo.ml
+++ b/_build/default/lib/picos_mux.multififo/.formatted/picos_mux_multififo.ml
@@ -305,9 +305,8 @@ let yield =
Mpmcq.push p.ready (Continue (fiber, k));
next pt)
-let[@alert "-handler"] effc :
- type a. a Effect.t -> ((a, _) Effect.Deep.continuation -> _) option =
- function
+let[@alert "-handler"] effc : type a.
+ a Effect.t -> ((a, _) Effect.Deep.continuation -> _) option = function
| Fiber.Current -> current
| Fiber.Spawn r ->
let (Per_thread p) = get_per_thread () in
File "lib/picos_io/picos_io.mli", line 1, characters 0-0:
diff --git a/_build/default/lib/picos_io/picos_io.mli b/_build/default/lib/picos_io/.formatted/picos_io.mli
index d519b12..9014239 100644
--- a/_build/default/lib/picos_io/picos_io.mli
+++ b/_build/default/lib/picos_io/.formatted/picos_io.mli
@@ -3,21 +3,21 @@
(** {1 Modules} *)
module Unix : sig
- (** A transparently asynchronous replacement for a subset of the {{!Deps.Unix}
- [Unix]} module that comes with OCaml.
+ (** A transparently asynchronous replacement for a subset of the
+ {{!Deps.Unix} [Unix]} module that comes with OCaml.
In this module operations on file descriptors, such as {!read} and
{!write} and others, including {!select}, implicitly block, in a scheduler
friendly manner, to await for the file descriptor to become available for
- the operation. This works best with file descriptors {{!set_nonblock} set
- to non-blocking mode}.
+ the operation. This works best with file descriptors
+ {{!set_nonblock} set to non-blocking mode}.
In addition to operations on file descriptors, in this module
- {!sleep}, and
- {!sleepf}
- also block in a scheduler friendly manner. Additionally
+ also block in a scheduler friendly manner. Additionally
- {!wait},
- {!waitpid}, and
@@ -25,16 +25,17 @@ module Unix : sig
also block in a scheduler friendly manner except on Windows.
- ⚠️ Shared (i.e. {{!create_process} inherited or inheritable} or {{!dup}
- duplicated}) file descriptors, such as {!stdin}, {!stdout}, and {!stderr},
- typically should not be put into non-blocking mode, because that affects
- all of the parties using the shared file descriptors. However, for
- non-shared file descriptors {{!set_nonblock} non-blocking mode} improves
- performance significantly with this module.
+ ⚠️ Shared (i.e. {{!create_process} inherited or inheritable} or
+ {{!dup} duplicated}) file descriptors, such as {!stdin}, {!stdout}, and
+ {!stderr}, typically should not be put into non-blocking mode, because
+ that affects all of the parties using the shared file descriptors.
+ However, for non-shared file descriptors
+ {{!set_nonblock} non-blocking mode} improves performance significantly
+ with this module.
⚠️ Beware that this does not currently try to work around any limitations
- of the {{!Deps.Unix} [Unix]} module that comes with OCaml. In particular,
- on Windows, only sockets can be put into non-blocking mode. Also, on
+ of the {{!Deps.Unix} [Unix]} module that comes with OCaml. In particular,
+ on Windows, only sockets can be put into non-blocking mode. Also, on
Windows, scheduler friendly blocking only works properly with non-blocking
file descriptors, i.e. sockets.
@@ -352,8 +353,8 @@ module Unix : sig
(** [select rds wrs exs timeout] is like {!Deps.Unix.select}, but uses
{!Picos_io_select} to avoid blocking the thread.
- 🐌 You may find composing multi file descriptor awaits via other means
- with {!Picos_io_select} more flexible and efficient. *)
+ 🐌 You may find composing multi file descriptor awaits via other means with
+ {!Picos_io_select} more flexible and efficient. *)
type lock_command = Unix.lock_command =
| F_ULOCK
File "lib/picos_io_cohttp/picos_io_cohttp.mli", line 1, characters 0-0:
diff --git a/_build/default/lib/picos_io_cohttp/picos_io_cohttp.mli b/_build/default/lib/picos_io_cohttp/.formatted/picos_io_cohttp.mli
index 43a9cc4..e97c7fc 100644
--- a/_build/default/lib/picos_io_cohttp/picos_io_cohttp.mli
+++ b/_build/default/lib/picos_io_cohttp/.formatted/picos_io_cohttp.mli
@@ -2,7 +2,7 @@
implementation using {!Picos_io} for {!Picos}.
⚠️ This library is currently minimalistic and experimental and is highly
- likely to change. Feedback from potential users is welcome! *)
+ likely to change. Feedback from potential users is welcome! *)
open Picos_io
@@ -11,7 +11,8 @@ open Picos_io
(** Convenience functions for constructing requests and processing responses.
Please consult the
- {{:https://ocaml.org/p/cohttp/latest/doc/Cohttp/Generic/Client/module-type-S/index.html} CoHTTP documentation}. *)
+ {{:https://ocaml.org/p/cohttp/latest/doc/Cohttp/Generic/Client/module-type-S/index.html}
+ CoHTTP documentation}. *)
module Client : sig
include
Cohttp.Generic.Client.S
@@ -23,7 +24,8 @@ end
(** Convenience functions for processing requests and constructing responses.
Please consult the
- {{:https://ocaml.org/p/cohttp/latest/doc/Cohttp/Generic/Server/module-type-S/index.html} CoHTTP documentation}. *)
+ {{:https://ocaml.org/p/cohttp/latest/doc/Cohttp/Generic/Server/module-type-S/index.html}
+ CoHTTP documentation}. *)
module Server : sig
include
Cohttp.Generic.Server.S
@@ -33,7 +35,7 @@ module Server : sig
val run : t -> IO.conn -> unit
(** [run server socket] starts running a server that {{!Unix.accept} accepts}
- clients on the specified [socket]. This never returns normally. *)
+ clients on the specified [socket]. This never returns normally. *)
end
(** {1 Examples}
@@ -50,14 +52,12 @@ end
{2 A server and client}
- Let's build a simple hello server. We first define a function that creates
+ Let's build a simple hello server. We first define a function that creates
and configures a socket for the server:
{[
let server_create ?(max_pending_reqs = 8) addr =
- let socket =
- Unix.socket ~cloexec:true PF_INET SOCK_STREAM 0
- in
+ let socket = Unix.socket ~cloexec:true PF_INET SOCK_STREAM 0 in
match
Unix.set_nonblock socket;
Unix.bind socket addr;
@@ -65,12 +65,12 @@ end
with
| () -> socket
| exception exn ->
- Unix.close socket;
- raise exn
+ Unix.close socket;
+ raise exn
]}
The reason for doing it like this, as we'll see later, is that we want the
- OS to decide the port for our server. Also note that we explicitly set the
+ OS to decide the port for our server. Also note that we explicitly set the
socket to non-blocking mode, which is what we should do with {!Picos_io}
whenever possible.
@@ -79,27 +79,21 @@ end
{[
let server_run socket =
let callback _conn _req body =
- let body =
- Printf.sprintf "Hello, %s!"
- (Body.to_string body)
- in
+ let body = Printf.sprintf "Hello, %s!" (Body.to_string body) in
Server.respond_string ~status:`OK ~body ()
in
Server.run (Server.make ~callback ()) socket
]}
- The idea is that the body of the request is the name to be greeted
- in the body of the response.
+ The idea is that the body of the request is the name to be greeted in the
+ body of the response.
A client then posts to the specified uri and returns the response body:
{[
let client uri name =
- let resp, body =
- Client.post ~body:(`String name) uri
- in
- if Response.status resp != `OK then
- failwith "Not OK";
+ let resp, body = Client.post ~body:(`String name) uri in
+ if Response.status resp != `OK then failwith "Not OK";
Body.to_string body
]}
@@ -140,7 +134,7 @@ end
We first create the [server_socket] and obtain the [server_port] and
ultimately the [server_uri] from it — typically one can avoid this
- complexity and use a fixed port. We then create a
+ complexity and use a fixed port. We then create a
{{!Picos_std_structured.Flock} flock} for running the server as a concurrent
- fiber, which we arrange to terminate at the end of the scope. Finally we
- act as the client to get a greeting from the server. *)
+ fiber, which we arrange to terminate at the end of the scope. Finally we act
+ as the client to get a greeting from the server. *)
File "test/test_picos_dscheck.ml", line 1, characters 0-0:
diff --git a/_build/default/test/test_picos_dscheck.ml b/_build/default/test/.formatted/test_picos_dscheck.ml
index ff7b55b..3609345 100644
--- a/_build/default/test/test_picos_dscheck.ml
+++ b/_build/default/test/.formatted/test_picos_dscheck.ml
@@ -90,8 +90,8 @@ let test_computation_contract () =
(** This covers the contract of [Computation] to remove detached triggers.
- Testing this through the public API would require relying on GC
- statistics. *)
+ Testing this through the public API would require relying on GC statistics.
+*)
let test_computation_removes_triggers () =
[ `FIFO; `LIFO ]
|> List.iter @@ fun mode ->
File "lib/picos_io.select/picos_io_select.ml", line 1, characters 0-0:
diff --git a/_build/default/lib/picos_io.select/picos_io_select.ml b/_build/default/lib/picos_io.select/.formatted/picos_io_select.ml
index eb6e2f8..9551db5 100644
--- a/_build/default/lib/picos_io.select/picos_io_select.ml
+++ b/_build/default/lib/picos_io.select/.formatted/picos_io_select.ml
@@ -248,7 +248,10 @@ let rec select_thread s timeout rd wr ex =
else begin
assert (0 < state.value);
Unix.kill (Unix.getpid ()) config.intr_sig;
- let idle = 0.000_001 (* 1μs *) in
+ let idle =
+ 0.000_001
+ (* 1μs *)
+ in
if timeout < 0.0 || idle <= timeout then idle else timeout
end
in
File "lib/picos/picos.ocaml5.ml", line 1, characters 0-0:
diff --git a/_build/default/lib/picos/picos.ocaml5.ml b/_build/default/lib/picos/.formatted/picos.ocaml5.ml
index 1dd7899..d97ce5b 100644
--- a/_build/default/lib/picos/picos.ocaml5.ml
+++ b/_build/default/lib/picos/.formatted/picos.ocaml5.ml
@@ -235,7 +235,7 @@ module Computation = struct
| S ((Canceled _ | Returned _) as completed) -> canceled completed
(** [gc] is called when balance becomes negative by both [try_attach] and
- [detach]. This ensures that the [O(n)] lazy removal done by [gc] cannot
+ [detach]. This ensures that the [O(n)] lazy removal done by [gc] cannot
cause starvation, because the only reason that CAS fails after [gc] is
that someone else completed the [gc]. *)
let rec gc balance_and_mode triggers = function
File "lib/picos/picos.mli", line 1, characters 0-0:
diff --git a/_build/default/lib/picos/picos.mli b/_build/default/lib/picos/.formatted/picos.mli
index 4f1fe8a..11ce657 100644
--- a/_build/default/lib/picos/picos.mli
+++ b/_build/default/lib/picos/.formatted/picos.mli
@@ -2,12 +2,12 @@
interface between effects based schedulers and concurrent abstractions.
This is essentially an interface between schedulers and concurrent
- abstractions that need to communicate with a scheduler. Perhaps an
+ abstractions that need to communicate with a scheduler. Perhaps an
enlightening analogy is to say that this is the
{{:https://en.wikipedia.org/wiki/POSIX} POSIX} of effects based schedulers.
ℹ️ Picos, i.e. {i this module}, is not intended to be an application level
- concurrent programming library or framework. If you are looking for a
+ concurrent programming library or framework. If you are looking for a
library or framework for programming concurrent applications, then this
module is probably not what you are looking for.
@@ -40,24 +40,26 @@
Consider the following motivating example:
{@ocaml skip[
- Mutex.protect mutex begin fun () ->
- while true do
- Condition.wait condition mutex
- done
- end
+ Mutex.protect mutex
+ begin
+ fun () ->
+ while true do
+ Condition.wait condition mutex
+ done
+ end
]}
Assume that the fiber executing the above computation might be canceled, at
- any point, by another fiber running in parallel. How could that be done
+ any point, by another fiber running in parallel. How could that be done
ensuring both safety and liveness?
- For safety, cancelation should not leave the program in an invalid state
- or cause the program to leak memory. In this case, the ownership of the
+ or cause the program to leak memory. In this case, the ownership of the
mutex must be transferred to the next fiber or be left unlocked and no
references to unused objects must be left in the mutex or the condition
variable.
- - For liveness, cancelation should take effect as soon as possible. In this
+ - For liveness, cancelation should take effect as soon as possible. In this
case, cancelation should take effect even during the
{{!Picos_std_sync.Mutex.lock} [Mutex.lock]} inside
{{!Picos_std_sync.Mutex.protect} [Mutex.protect]} and the
@@ -81,8 +83,8 @@
]}
The idea is that the main or parent fiber allocates some resources, which
- are then used by child fibers running in parallel. What should happen when
- the main fiber gets canceled? We again have both safety and liveness
+ are then used by child fibers running in parallel. What should happen when
+ the main fiber gets canceled? We again have both safety and liveness
concerns:
- For safety, to ensure that resources are not finalized prematurely, the
@@ -98,40 +100,42 @@
{2 Cancelation in Picos}
The {!Fiber} concept in Picos corresponds to an independent thread of
- execution. A fiber may explicitly {{!Fiber.forbid} forbid} or
+ execution. A fiber may explicitly {{!Fiber.forbid} forbid} or
{{!Fiber.permit} permit} the scheduler from propagating cancelation to it.
This is important for the implementation of some key concurrent abstractions
such as condition variables, where it is necessary to forbid cancelation
when the associated mutex is reacquired.
- Each fiber has an associated {!Computation} at all times. A computation is
- something that needs to be completed either by {{!Computation.return}
- returning} a value through it or by {{!Computation.cancel} canceling} it
- with an exception. To cancel a fiber one cancels the computation associated
- with the fiber or any computation whose {{!Computation.canceler} cancelation
- is propagated} to the computation associated with the fiber.
+ Each fiber has an associated {!Computation} at all times. A computation is
+ something that needs to be completed either by
+ {{!Computation.return} returning} a value through it or by
+ {{!Computation.cancel} canceling} it with an exception. To cancel a fiber
+ one cancels the computation associated with the fiber or any computation
+ whose {{!Computation.canceler} cancelation is propagated} to the computation
+ associated with the fiber.
Before a computation has been completed, it is also possible to
{{!Computation.try_attach} attach} a {!Trigger} to the computation and also
- to later {{!Computation.detach} detach} the trigger from the computation. A
+ to later {{!Computation.detach} detach} the trigger from the computation. A
trigger attached to a computation is {{!Trigger.signal} signaled} as the
computation is completed.
The {!Trigger} concept in Picos is what allows a fiber to be suspended and
- later resumed. A fiber can create a trigger, add it to any shared data
+ later resumed. A fiber can create a trigger, add it to any shared data
structure(s), and {{!Trigger.await} await} for the trigger to be signaled.
- The await operation, which is {{!Trigger.Await} implemented by the
- scheduler}, also, in case the fiber permits cancelation, attaches the
- trigger to the computation of the fiber when it suspends the fiber. This is
- what allows a fiber to be resumed via cancelation of the computation.
+ The await operation, which is
+ {{!Trigger.Await} implemented by the scheduler}, also, in case the fiber
+ permits cancelation, attaches the trigger to the computation of the fiber
+ when it suspends the fiber. This is what allows a fiber to be resumed via
+ cancelation of the computation.
The return value of {{!Trigger.await} await} tells whether the fiber was
resumed normally or due to being canceled and the caller then needs to
- properly handle either case. After being canceled, depending on the
- concurrent abstraction being implemented, the caller might need to
- e.g. remove references to the trigger from the shared data structures,
- cancel asynchronous IO operations, or transfer ownership of a mutex to the
- next fiber in the queue of the mutex. *)
+ properly handle either case. After being canceled, depending on the
+ concurrent abstraction being implemented, the caller might need to e.g.
+ remove references to the trigger from the shared data structures, cancel
+ asynchronous IO operations, or transfer ownership of a mutex to the next
+ fiber in the queue of the mutex. *)
(** {1 Modules reference}
@@ -175,23 +179,25 @@ module Trigger : sig
Here is a simple example:
{[
- run begin fun () ->
- Flock.join_after @@ fun () ->
-
- let trigger = Trigger.create () in
-
- Flock.fork begin fun () ->
- Trigger.signal trigger
- end;
-
- match Trigger.await trigger with
- | None ->
- (* We were resumed normally. *)
- ()
- | Some (exn, bt) ->
- (* We were canceled. *)
- Printexc.raise_with_backtrace exn bt
- end
+ run
+ begin
+ fun () ->
+ Flock.join_after @@ fun () ->
+ let trigger = Trigger.create () in
+
+ Flock.fork
+ begin
+ fun () -> Trigger.signal trigger
+ end;
+
+ match Trigger.await trigger with
+ | None ->
+ (* We were resumed normally. *)
+ ()
+ | Some (exn, bt) ->
+ (* We were canceled. *)
+ Printexc.raise_with_backtrace exn bt
+ end
]}
⚠️ Typically we need to cleanup after {!await}, but in the above example we
@@ -199,14 +205,14 @@ module Trigger : sig
{{!Computation.try_attach} attach} the trigger to any computation.
All operations on triggers are wait-free, with the obvious exception of
- {!await}. The {!signal} operation inherits the properties of the action
+ {!await}. The {!signal} operation inherits the properties of the action
attached with {!on_signal} to the trigger. *)
(** {2 Interface for suspending} *)
type t
- (** Represents a trigger. A trigger can be in one of three states: {i
- initial}, {i awaiting}, or {i signaled}.
+ (** Represents a trigger. A trigger can be in one of three states:
+ {i initial}, {i awaiting}, or {i signaled}.
ℹ️ Once a trigger becomes signaled it no longer changes state.
@@ -221,7 +227,7 @@ module Trigger : sig
state.
This can be useful, for example, when a [trigger] is being inserted to
- multiple locations and might be signaled concurrently while doing so. In
+ multiple locations and might be signaled concurrently while doing so. In
such a case one can periodically check with [is_signaled trigger] whether
it makes sense to continue.
@@ -233,10 +239,10 @@ module Trigger : sig
(** [await trigger] waits for the trigger to be {!signal}ed.
The return value is [None] in case the trigger has been signaled and the
- {{!Fiber} fiber} was resumed normally. Otherwise the return value is
+ {{!Fiber} fiber} was resumed normally. Otherwise the return value is
[Some (exn, bt)], which indicates that the fiber has been canceled and the
- caller should raise the exception. In either case the caller is
- responsible for cleaning up. Usually this means making sure that no
+ caller should raise the exception. In either case the caller is
+ responsible for cleaning up. Usually this means making sure that no
references to the trigger remain to avoid space leaks.
⚠️ As a rule of thumb, if you inserted the trigger to some data structure
@@ -245,36 +251,36 @@ module Trigger : sig
trigger after [await].
ℹ️ A trigger in the signaled state only takes a small constant amount of
- memory. Make sure that it is not possible for a program to accumulate
+ memory. Make sure that it is not possible for a program to accumulate
unbounded numbers of signaled triggers under any circumstance.
- ⚠️ Only the owner or creator of a trigger may call [await]. It is
+ ⚠️ Only the owner or creator of a trigger may call [await]. It is
considered an error to make multiple calls to [await].
ℹ️ The behavior is that, {i unless [await] can return immediately},
- on OCaml 5, [await] will perform the {!Await} effect, and
- - on OCaml 4, [await] will call the [await] operation of the {{!Handler}
- current handler}.
+ - on OCaml 4, [await] will call the [await] operation of the
+ {{!Handler} current handler}.
- @raise Invalid_argument if the trigger was in the awaiting state, which
- means that multiple concurrent calls of [await] are being made. *)
+ @raise Invalid_argument
+ if the trigger was in the awaiting state, which means that multiple
+ concurrent calls of [await] are being made. *)
(** {2 Interface for resuming} *)
val signal : t -> unit
- (** [signal trigger] puts the [trigger] into the signaled state and calls
- the resume action, if any, attached using {!on_signal}.
+ (** [signal trigger] puts the [trigger] into the signaled state and calls the
+ resume action, if any, attached using {!on_signal}.
The intention is that calling [signal trigger] guarantees that any fiber
- {{!await} awaiting} the [trigger] will be resumed. However, when and
+ {{!await} awaiting} the [trigger] will be resumed. However, when and
whether a fiber having called {!await} will be resumed normally or as
canceled is determined by the scheduler that handles the {!Await} effect.
ℹ️ Note that under normal circumstances, [signal] should never raise an
- exception. If an exception is raised by [signal], it means that the
- handler of {!Await} has a bug or some catastrophic failure has
- occurred.
+ exception. If an exception is raised by [signal], it means that the
+ handler of {!Await} has a bug or some catastrophic failure has occurred.
⚠️ Do not call [signal] from an effect handler in a scheduler. *)
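To make the await/signal contract above concrete, here is a minimal hedged sketch of a one-shot handshake between two fibers. It assumes a Picos-compatible scheduler is running the fibers and that [Flock] comes from [Picos_std_structured]; only {!create}, {!signal} and {!await} are from this interface.
{@ocaml skip[
  (* A one-shot handshake: one fiber awaits a trigger, another signals it.
     Assumes a Picos-compatible scheduler is running and that [Flock] is
     available from [Picos_std_structured]. *)
  let handshake () =
    let trigger = Trigger.create () in
    Flock.join_after @@ fun () ->
    Flock.fork (fun () ->
      Fiber.sleep ~seconds:0.1;
      Trigger.signal trigger);
    match Trigger.await trigger with
    | None -> ()                        (* signaled and resumed normally *)
    | Some (exn, bt) ->
        (* The fiber was canceled: clean up and re-raise as documented. *)
        Printexc.raise_with_backtrace exn bt
]}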
@@ -284,8 +290,8 @@ module Trigger : sig
use requires a deeper understanding of how schedulers work. *)
val is_initial : t -> bool
- (** [is_initial trigger] determines whether the trigger is in the initial
- or in the signaled state.
+ (** [is_initial trigger] determines whether the trigger is in the initial or
+ in the signaled state.
ℹ️ Consider using {!is_signaled} instead of [is_initial] as in some
contexts a trigger might reasonably be either in the initial or the
@@ -308,7 +314,7 @@ module Trigger : sig
⚠️ The action that you attach to a trigger must be safe to call from any
context that might end up signaling the trigger directly or indirectly
- through {{!Computation.canceler} propagation}. Unless you know otherwise, you
+ through {{!Computation.canceler} propagation}. Unless you know otherwise, you
should assume that the [resume] action might be called from a different
domain running in parallel with neither effect nor exception handlers and
that if the attached action doesn't return the system may deadlock or if
@@ -317,9 +323,10 @@ module Trigger : sig
⚠️ It is considered an error to make multiple calls to [on_signal] with a
specific [trigger].
- @raise Invalid_argument if the trigger was in the awaiting state, which
- means that either the owner or creator of the trigger made concurrent
- calls to {!await} or the handler called [on_signal] more than once. *)
+ @raise Invalid_argument
+ if the trigger was in the awaiting state, which means that either the
+ owner or creator of the trigger made concurrent calls to {!await} or the
+ handler called [on_signal] more than once. *)
val from_action : 'x -> 'y -> (t -> 'x -> 'y -> unit) -> t
[@@alert
@@ -336,7 +343,7 @@ module Trigger : sig
⚠️ The action that you attach to a trigger must be safe to call from any
context that might end up signaling the trigger directly or indirectly
- through {{!Computation.canceler} propagation}. Unless you know otherwise, you
+ through {{!Computation.canceler} propagation}. Unless you know otherwise, you
should assume that the [resume] action might be called from a different
domain running in parallel with neither effect nor exception handlers and
that if the attached action doesn't return the system may deadlock or if
@@ -361,66 +368,65 @@ module Trigger : sig
A key idea behind this design is that the handler for {!Await} does not
need to run arbitrary user defined code while suspending a fiber: the
- handler calls {!on_signal} by itself. This should make it easier to get
+ handler calls {!on_signal} by itself. This should make it easier to get
both the handler and the user code correct.
Another key idea is that the {!signal} operation provides no feedback as
- to the outcome regarding cancelation. Calling {!signal} merely guarantees
- that the caller of {!await} will return. This means that the point at
- which cancelation must be determined can be as late as possible. A
+ to the outcome regarding cancelation. Calling {!signal} merely guarantees
+ that the caller of {!await} will return. This means that the point at
+ which cancelation must be determined can be as late as possible. A
scheduler can check the cancelation status just before calling [continue]
and it is, of course, possible to check the cancelation status earlier.
This allows maximal flexibility for the handler of {!Await}.
The consequence of this is that the only place to handle cancelation is at
- the point of {!await}. This makes the design simpler and should make it
- easier for the user to get the handling of cancelation right. A minor
+ the point of {!await}. This makes the design simpler and should make it
+ easier for the user to get the handling of cancelation right. A minor
detail is that {!await} returns an option instead of raising an exception.
The reason for this is that matching against an option is slightly faster
- than setting up an exception handler. Returning an option also clearly
+ than setting up an exception handler. Returning an option also clearly
communicates the two different cases to handle.
On the other hand, the trigger mechanism does not have a way to specify a
user-defined callback to perform cancelation immediately before the fiber
- is resumed. Such an immediately called callback could be useful for e.g.
- canceling an underlying IO request. One justification for not having such
+ is resumed. Such an immediately called callback could be useful for e.g.
+ canceling an underlying IO request. One justification for not having such
a callback is that cancelation is allowed to take place from outside of
the scheduler, i.e. from another system level thread, and, in such a case,
- the callback could not be called immediately. Instead, the scheduler is
+ the callback could not be called immediately. Instead, the scheduler is
free to choose how to schedule canceled and continued fibers and, assuming
that fibers can be trusted, a scheduler may give priority to canceled
fibers.
This design also separates the allocation of the atomic state for the
trigger, or {!create}, from {!await}, and allows the state to be polled
- using {!is_signaled} before calling {!await}. This is particularly useful
+ using {!is_signaled} before calling {!await}. This is particularly useful
when the trigger might need to be inserted to multiple places and be
{!signal}ed in parallel before the call of {!await}.
- No mechanism is provided to communicate any result with the signal. That
- can be done outside of the mechanism and is often not needed. This
+ No mechanism is provided to communicate any result with the signal. That
+ can be done outside of the mechanism and is often not needed. This
simplifies the design.
Once {!signal} has been called, a trigger no longer refers to any other
- object and takes just two words of memory. This e.g. allows lazy removal
+ object and takes just two words of memory. This e.g. allows lazy removal
of triggers, assuming the number of attached triggers can be bounded,
because nothing except the trigger itself would be leaked.
To further understand the problem domain, in this design, in a
suspend-resume scenario, there are three distinct pieces of state:
- {ol
- {- The state of shared data structure(s) used for communication and / or
- synchronization.}
- {- The state of the trigger.}
- {- The cancelation status of the fiber.}}
+ + The state of shared data structure(s) used for communication and / or
+ synchronization.
+ + The state of the trigger.
+ + The cancelation status of the fiber.
The trigger and cancelation status are both updated independently and
- atomically through code in this interface. The key requirement left for
+ atomically through code in this interface. The key requirement left for
the user is to make sure that the state of the shared data structure is
- updated correctly independently of what {!await} returns. So, for
- example, a mutex implementation must check, after getting [Some (exn, bt)],
- what the state of the mutex is and how it should be updated. *)
+ updated correctly independently of what {!await} returns. So, for example,
+ a mutex implementation must check, after getting [Some (exn, bt)], what
+ the state of the mutex is and how it should be updated. *)
end
module Computation : sig
@@ -447,9 +453,9 @@ module Computation : sig
To define a computation, one first {{!create} creates} it and then
arranges for the computation to be completed by {{!return} returning} a
value through it or by {{!cancel} canceling} it with an exception at some
- point in the future. There are no restrictions on what it means for a
- computation to be running. The cancelation status of a computation can be
- polled or {{!check} checked} explicitly. Observers can also
+ point in the future. There are no restrictions on what it means for a
+ computation to be running. The cancelation status of a computation can be
+ polled or {{!check} checked} explicitly. Observers can also
{{!try_attach} attach} {{!Trigger}triggers} to a computation to get a
signal when the computation is completed or {{!await} await} the
computation.
@@ -457,60 +463,56 @@ module Computation : sig
Here is an example:
{[
- run begin fun () ->
- Flock.join_after @@ fun () ->
-
- let computation =
- Computation.create ()
- in
-
- let canceler = Flock.fork_as_promise @@ fun () ->
- Fiber.sleep ~seconds:1.0;
-
- Computation.cancel computation
- Exit (Printexc.get_callstack 0)
- in
-
- Flock.fork begin fun () ->
- let rec fib i =
- Computation.check
- computation;
- if i <= 1 then
- i
- else
- fib (i - 1) + fib (i - 2)
- in
- Computation.capture computation
- fib 10;
-
- Promise.terminate canceler
- end;
-
- Computation.await computation
- end
+ run
+ begin
+ fun () ->
+ Flock.join_after @@ fun () ->
+ let computation = Computation.create () in
+
+ let canceler =
+ Flock.fork_as_promise @@ fun () ->
+ Fiber.sleep ~seconds:1.0;
+
+ Computation.cancel computation Exit (Printexc.get_callstack 0)
+ in
+
+ Flock.fork
+ begin
+ fun () ->
+ let rec fib i =
+ Computation.check computation;
+ if i <= 1 then i else fib (i - 1) + fib (i - 2)
+ in
+ Computation.capture computation fib 10;
+
+ Promise.terminate canceler
+ end;
+
+ Computation.await computation
+ end
]}
- A fiber is always associated with {{!Fiber.get_computation} at least a
- single computation}. However, {{!Fiber.spawn} it is possible for multiple
- fibers to share a single computation} and it is also possible for a single
- fiber to perform multiple computations. Furthermore, the computation
- associated with a fiber {{!Fiber.set_computation} can be changed} by the
- fiber.
+ A fiber is always associated with
+ {{!Fiber.get_computation} at least a single computation}. However,
+ {{!Fiber.spawn} it is possible for multiple fibers to share a single
+ computation} and it is also possible for a single fiber to perform
+ multiple computations. Furthermore, the computation associated with a
+ fiber {{!Fiber.set_computation} can be changed} by the fiber.
- Computations are not hierarchical. In other words, computations do not
- directly implement structured concurrency. However, it is possible to
+ Computations are not hierarchical. In other words, computations do not
+ directly implement structured concurrency. However, it is possible to
{{!canceler} propagate cancelation} to implement structured concurrency on
top of computations.
Operations on computations are either wait-free or lock-free and designed
- to avoid starvation and complete in amortized constant time. The
+ to avoid starvation and complete in amortized constant time. The
properties of operations to complete a computation depend on the
properties of actions {{!Trigger.on_signal} attached} to the triggers. *)
(** {2 Interface for creating} *)
type !'a t
- (** Represents a cancelable computation. A computation is either {i running}
+ (** Represents a cancelable computation. A computation is either {i running}
or has been {i completed} either with a return value or with canceling
exception with a backtrace.
@@ -528,11 +530,11 @@ module Computation : sig
(** [create ()] creates a new computation in the running state.
The optional [mode] specifies the order in which {{!Trigger} triggers}
- {{!try_attach} attached} to the computation will be {{!Trigger.signal}
- signaled} after the computation has been completed. [`FIFO] ordering may
- reduce latency of IO bound computations and is the default. [`LIFO] may
- improve throughput of CPU bound computations and be preferable on a
- work-stealing scheduler, for example.
+ {{!try_attach} attached} to the computation will be
+ {{!Trigger.signal} signaled} after the computation has been completed.
+ [`FIFO] ordering may reduce latency of IO bound computations and is the
+ default. [`LIFO] may improve throughput of CPU bound computations and be
+ preferable on a work-stealing scheduler, for example.
ℹ️ Typically the creator of a computation object arranges for the
computation to be completed by using the {!capture} helper, for example.
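As a small hedged illustration of the [mode] argument and the {!capture} helper mentioned above (the exact shape of the optional argument is assumed here):
{@ocaml skip[
  (* Sketch: a computation for CPU bound work created in [`LIFO] mode and
     completed with [capture].  [~mode] is the optional argument described
     above; its exact type is assumed here. *)
  let computation = Computation.create ~mode:`LIFO () in
  Computation.capture computation (fun n -> n * n) 6
  (* [computation] now holds either the value 36 or, had the function
     raised, the raised exception with its backtrace. *)
]}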
@@ -548,7 +550,7 @@ module Computation : sig
val try_return : 'a t -> 'a -> bool
(** [try_return computation value] attempts to complete the computation with
- the specified [value] and returns [true] on success. Otherwise returns
+ the specified [value] and returns [true] on success. Otherwise returns
[false], which means that the computation had already been completed
before. *)
@@ -567,8 +569,8 @@ module Computation : sig
val try_capture : 'a t -> ('b -> 'a) -> 'b -> bool
(** [try_capture computation fn x] calls [fn x] and tries to complete the
computation with the value returned or the exception raised by the call
- and returns [true] on success. Otherwise returns [false], which means
- that the computation had already been completed before. *)
+ and returns [true] on success. Otherwise returns [false], which means that
+ the computation had already been completed before. *)
val capture : 'a t -> ('b -> 'a) -> 'b -> unit
(** [capture computation fn x] is equivalent to
@@ -578,17 +580,17 @@ module Computation : sig
(** Transactional interface for atomically completing multiple computations.
⚠️ The implementation of this mechanism is designed to avoid making the
- single computation completing operations,
- i.e. {{!Computation.try_return} [try_return]} and
+ single computation completing operations, i.e.
+ {{!Computation.try_return} [try_return]} and
{{!Computation.try_cancel} [try_cancel]}, slower and to avoid making
- computations heavier. For this reason the transaction mechanism is only
+ computations heavier. For this reason the transaction mechanism is only
{{:https://en.wikipedia.org/wiki/Non-blocking_algorithm#Obstruction-freedom}
- obstruction-free}. What this means is that a transaction may be aborted
+ obstruction-free}. What this means is that a transaction may be aborted
by another transaction or by a single computation manipulating
operation. *)
type 'a computation := 'a t
- (** Destructively substituted alias for {!Computation.t}. *)
+ (** Destructively substituted alias for {!Computation.t}. *)
val same : _ computation -> _ computation -> bool
(** [same computation1 computation2] determines whether the two computations
@@ -604,7 +606,7 @@ module Computation : sig
(** [try_return tx computation value] adds the completion of the
[computation] as having returned the given [value] to the transaction.
Returns [true] in case the computation had not yet been completed and
- the transaction was still alive. Otherwise returns [false] which means
+ the transaction was still alive. Otherwise returns [false] which means
that the transaction was aborted and it is as if none of the completions
successfully added to the transaction have taken place. *)
@@ -612,15 +614,14 @@ module Computation : sig
t -> 'a computation -> exn -> Printexc.raw_backtrace -> bool
(** [try_cancel tx computation exn bt] adds the completion of the
computation as having canceled with the given exception and backtrace to
- the transaction. Returns [true] in case the computation had not yet
- been completed and the transaction was still alive. Otherwise returns
- [false] which means that the transaction was aborted and it is as if none of
- the completions successfully added to the transaction have taken
- place. *)
+ the transaction. Returns [true] in case the computation had not yet been
+ completed and the transaction was still alive. Otherwise returns [false]
+ which means that the transaction was aborted and it is as if none of the
+ completions successfully added to the transaction have taken place. *)
val try_commit : t -> bool
(** [try_commit tx] attempts to mark the transaction as committed
- successfully. Returns [true] in case of success, which means that all
+ successfully. Returns [true] in case of success, which means that all
the completions added to the transaction have been performed atomically.
Otherwise returns [false] which means that the transaction was aborted and
it is as if none of the completions successfully added to the transaction
@@ -635,7 +636,7 @@ module Computation : sig
val try_cancel : 'a t -> exn -> Printexc.raw_backtrace -> bool
(** [try_cancel computation exn bt] attempts to mark the computation as
canceled with the specified exception and backtrace and returns [true] on
- success. Otherwise returns [false], which means that the computation had
+ success. Otherwise returns [false], which means that the computation had
already been completed before. *)
val cancel : 'a t -> exn -> Printexc.raw_backtrace -> unit
@@ -648,7 +649,7 @@ module Computation : sig
'a t -> seconds:float -> exn -> Printexc.raw_backtrace -> unit
(** [cancel_after ~seconds computation exn bt] arranges to {!cancel} the
computation after the specified time with the specified exception and
- backtrace. Completion of the computation before the specified time
+ backtrace. Completion of the computation before the specified time
effectively cancels the timeout.
ℹ️ The behavior is that [cancel_after] first checks that [seconds] is not
@@ -658,8 +659,8 @@ module Computation : sig
- on OCaml 4, [cancel_after] will call the [cancel_after] operation of the
{{!Handler} current handler}.
- @raise Invalid_argument if [seconds] is negative or too large as
- determined by the scheduler. *)
+ @raise Invalid_argument
+ if [seconds] is negative or too large as determined by the scheduler. *)
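For example, a timeout can be arranged roughly as in the following hedged sketch, which assumes a scheduler handling {!Cancel_after} is running:
{@ocaml skip[
  (* Sketch: cancel [computation] with [Exit] unless it is completed within
     half a second.  Completing it earlier effectively cancels the timeout. *)
  let computation = Computation.create () in
  Computation.cancel_after computation ~seconds:0.5 Exit
    (Printexc.get_callstack 0);
  Computation.return computation 42
]}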
(** {2 Interface for polling} *)
@@ -678,7 +679,9 @@ module Computation : sig
val check : 'a t -> unit
(** [check computation] is equivalent to
- {{!canceled} [Option.iter (fun (exn, bt) -> Printexc.raise_with_backtrace exn bt) (canceled computation)]}. *)
+ {{!canceled}
+ [Option.iter (fun (exn, bt) -> Printexc.raise_with_backtrace exn bt)
+ (canceled computation)]}. *)
val peek : 'a t -> ('a, exn * Printexc.raw_backtrace) result option
(** [peek computation] returns the result of the computation or [None] in case
@@ -686,13 +689,13 @@ module Computation : sig
exception Running
(** Exception raised by {!peek_exn} when it's used on a still running
- computation. This should never be surfaced to the user. *)
+ computation. This should never be surfaced to the user. *)
val peek_exn : 'a t -> 'a
(** [peek_exn computation] returns the result of the computation or raises an
- exception. It is important to catch the exception. If the computation
- was cancelled with exception [exn] then [exn] is re-raised with its
- original backtrace.
+ exception. It is important to catch the exception. If the computation was
+ cancelled with exception [exn] then [exn] is re-raised with its original
+ backtrace.
@raise Running if the computation has not completed. *)
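Putting the polling operations together, a non-suspending poll might look like the following hedged sketch:
{@ocaml skip[
  (* Sketch: poll a computation without suspending the calling fiber. *)
  let poll computation =
    match Computation.peek computation with
    | None -> None                             (* still running *)
    | Some (Ok value) -> Some value            (* completed with a value *)
    | Some (Error (exn, bt)) ->
        (* canceled: re-raise with the original backtrace *)
        Printexc.raise_with_backtrace exn bt
]}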
@@ -712,7 +715,7 @@ module Computation : sig
detaches it from the computation.
🏎️ The {!try_attach} and [detach] operations essentially implement a
- lock-free bag. While not formally wait-free, the implementation is
+ lock-free bag. While not formally wait-free, the implementation is
designed to avoid starvation by making sure that any potentially expensive
operations are performed cooperatively. *)
@@ -731,15 +734,15 @@ module Computation : sig
val canceler : from:_ t -> into:_ t -> Trigger.t
(** [canceler ~from ~into] creates a trigger that propagates cancelation
- [from] one computation [into] another on {{!Trigger.signal} signal}. The
+ [from] one computation [into] another on {{!Trigger.signal} signal}. The
returned trigger is not attached to any computation.
The caller usually attaches the returned trigger to the computation [from]
which cancelation is to be propagated and the trigger should usually also
- be detached after it is no longer needed. See also {!attach_canceler}.
+ be detached after it is no longer needed. See also {!attach_canceler}.
The intended use case of [canceler] is as a low level building block of
- structured concurrency mechanisms. Picos does not require concurrent
+ structured concurrency mechanisms. Picos does not require concurrent
programming models to be hierarchical or structured.
⚠️ The returned trigger will be in the awaiting state, which means that it
@@ -748,7 +751,7 @@ module Computation : sig
val attach_canceler : from:_ t -> into:_ t -> Trigger.t
(** [attach_canceler ~from ~into] tries to attach a {!canceler} to the
computation [from] to propagate cancelation to the computation [into] and
- returns the {!canceler} when successful. If the computation [from] has
+ returns the {!canceler} when successful. If the computation [from] has
already been canceled, the exception that [from] was canceled with will be
raised.
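As a hedged sketch of how these building blocks compose, a structured-concurrency style helper might propagate cancelation from a parent computation into a child for the duration of a call and then detach the canceler; the exact argument order of {!detach} is assumed here.
{@ocaml skip[
  (* Sketch: run [thunk] under [child] while cancelation of [parent] is
     propagated into [child]; detach the canceler afterwards so that it is
     not leaked. *)
  let with_cancelation_from parent thunk =
    let child = Computation.create () in
    let canceler = Computation.attach_canceler ~from:parent ~into:child in
    Fun.protect
      ~finally:(fun () -> Computation.detach parent canceler)
      (fun () -> Computation.capture child thunk ());
    child
]}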
@@ -772,10 +775,8 @@ module Computation : sig
(** [with_action x y resume] is equivalent to
{@ocaml skip[
let computation = create () in
- let trigger =
- Trigger.from_action x y resume in
- let _ : bool =
- try_attach computation trigger in
+ let trigger = Trigger.from_action x y resume in
+ let _ : bool = try_attach computation trigger in
computation
]}
@@ -787,37 +788,39 @@ module Computation : sig
{{!try_attach} attach} {{!Trigger} triggers} to a computation to get
notified when the status of the computation changes from running to
completed and can also efficiently {{!detach} detach} such triggers in
- case getting a notification is no longer necessary. This allows the
- status change to be propagated omnidirectionally and is what makes
- computations able to support a variety of purposes such as the
- implementation of {{!Picos_std_structured} structured concurrency}.
+ case getting a notification is no longer necessary. This allows the status
+ change to be propagated omnidirectionally and is what makes computations
+ able to support a variety of purposes such as the implementation of
+ {{!Picos_std_structured} structured concurrency}.
The computation concept can be seen as a kind of single-shot atomic
{{!Picos_std_event.Event} event} that is a generalization of both a
- cancelation Context or token and of a {{!Picos_std_structured.Promise}
- promise}. Unlike a typical promise mechanism, a computation can be
- canceled. Unlike a typical cancelation mechanism, a computation can and
- should also be completed in case it is not canceled. This promotes proper
- scoping of computations and resource cleanup at completion, which is how
- the design evolved from a more traditional cancelation context design.
+ cancelation Context or token and of a
+ {{!Picos_std_structured.Promise} promise}. Unlike a typical promise
+ mechanism, a computation can be canceled. Unlike a typical cancelation
+ mechanism, a computation can and should also be completed in case it is
+ not canceled. This promotes proper scoping of computations and resource
+ cleanup at completion, which is how the design evolved from a more
+ traditional cancelation context design.
Every fiber is {{!Fiber.get_computation} associated with a computation}.
Being able to return a value through the computation means that no
- separate promise is necessarily required to hold the result of a fiber.
- On the other hand, {{!Fiber.spawn} multiple fibers may share a single
- computation}. This allows multiple fibers to be canceled efficiently
- through a single atomic update. In other words, the design allows various
- higher level patterns to be implemented efficiently.
+ separate promise is necessarily required to hold the result of a fiber. On
+ the other hand,
+ {{!Fiber.spawn} multiple fibers may share a single computation}. This
+ allows multiple fibers to be canceled efficiently through a single atomic
+ update. In other words, the design allows various higher level patterns to
+ be implemented efficiently.
Instead of directly implementing a hierarchy of computations, the design
allows {{!try_attach} attach}ing triggers to computations and
{{!canceler}a special trigger constructor} is provided for propagating
- cancelation. This helps to keep the implementation lean, i.e. not
+ cancelation. This helps to keep the implementation lean, i.e. not
substantially heavier than a typical promise implementation.
Finally, just like with {!Trigger.Await}, a key idea is that the handler
of {!Computation.Cancel_after} does not need to run arbitrary user defined
- code. The action of any trigger attached to a computation either comes
+ code. The action of any trigger attached to a computation either comes
from some scheduler calling {!Trigger.on_signal} or from
{!Computation.canceler}. *)
end
@@ -825,17 +828,17 @@ end
module Fiber : sig
(** An independent thread of execution.
- A fiber corresponds to an independent thread of execution. Fibers are
- {!create}d by schedulers in response to {!Spawn} effects. A fiber is
+ A fiber corresponds to an independent thread of execution. Fibers are
+ {!create}d by schedulers in response to {!Spawn} effects. A fiber is
associated with a {{!Computation} computation} and either {!forbid}s or
{!permit}s the scheduler from propagating cancelation when the fiber
- performs effects. A fiber also has an associated {{!FLS} fiber local
- storage}.
+ performs effects. A fiber also has an associated
+ {{!FLS} fiber local storage}.
⚠️ Many operations on fibers can only be called safely from the fiber
itself, because those operations are neither concurrency nor parallelism
- safe. Such operations can be safely called from a handler in a scheduler
- when it is handling an effect performed by the fiber. In particular, a
+ safe. Such operations can be safely called from a handler in a scheduler
+ when it is handling an effect performed by the fiber. In particular, a
scheduler can safely check whether the fiber {!has_forbidden} cancelation
and may access the {!FLS} of the fiber. *)
@@ -847,8 +850,8 @@ module Fiber : sig
ℹ️ The behavior is that
- on OCaml 5, [yield] performs the {!Yield} effect, and
- - on OCaml 4, [yield] will call the [yield] operation of the {{!Handler}
- current handler}. *)
+ - on OCaml 4, [yield] will call the [yield] operation of the
+ {{!Handler} current handler}. *)
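For instance, a long-running CPU bound loop can cooperate with other fibers by yielding periodically, as in this hedged sketch:
{@ocaml skip[
  (* Sketch: yield on every iteration so other fibers get a chance to run. *)
  let rec spin n =
    if 0 < n then begin
      Fiber.yield ();
      spin (n - 1)
    end
]}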
val sleep : seconds:float -> unit
(** [sleep ~seconds] suspends the current fiber for the specified number of
@@ -881,7 +884,7 @@ module Fiber : sig
{{!Handler} current handler}.
⚠️ The [current] operation must always resume the fiber without propagating
- cancelation. A scheduler may, of course, decide to reschedule the current
+ cancelation. A scheduler may, of course, decide to reschedule the current
fiber to be resumed later.
Because the scheduler does not discontinue a fiber calling [current], it
@@ -895,7 +898,7 @@ module Fiber : sig
ℹ️ This is mostly useful in the effect handlers of schedulers.
⚠️ There is no "reference count" of how many times a fiber has forbidden or
- permitted propagation of cancelation. Calls to {!forbid} and {!permit}
+ permitted propagation of cancelation. Calls to {!forbid} and {!permit}
directly change a single boolean flag.
⚠️ It is only safe to call [has_forbidden] from the fiber itself. *)
@@ -907,12 +910,12 @@ module Fiber : sig
The main use case of [forbid] is the implementation of concurrent
abstractions that may have to {{!Trigger.await} await} for something, or
may need to perform other effects, and must not be canceled while doing
- so. For example, the wait operation on a condition variable typically
+ so. For example, the wait operation on a condition variable typically
reacquires the associated mutex before returning, which may require
awaiting for the owner of the mutex to release it.
ℹ️ [forbid] does not prevent the fiber or the associated {!computation}
- from being canceled. It only tells the scheduler not to propagate
+ from being canceled. It only tells the scheduler not to propagate
cancelation to the fiber when it performs effects.
⚠️ It is only safe to call [forbid] from the fiber itself. *)
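As a hedged sketch of the condition-variable use case above, assuming [forbid] has the shape [Fiber.forbid : Fiber.t -> (unit -> 'a) -> 'a] and where [lock] and [mutex] stand for a hypothetical mutex implementation:
{@ocaml skip[
  (* Sketch: after waiting on [trigger], reacquire the (hypothetical) mutex
     even when the fiber has been canceled, by forbidding propagation of
     cancelation for the duration of the reacquisition. *)
  let wait_and_reacquire fiber trigger mutex =
    match Trigger.await trigger with
    | None -> lock mutex
    | Some (exn, bt) ->
        Fiber.forbid fiber (fun () -> lock mutex);
        Printexc.raise_with_backtrace exn bt
]}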
@@ -930,10 +933,9 @@ module Fiber : sig
val is_canceled : t -> bool
(** [is_canceled fiber] is equivalent to
{@ocaml skip[
- not (Fiber.has_forbidden fiber) &&
- let (Packed computation) =
- Fiber.get_computation fiber
- in
+ (not (Fiber.has_forbidden fiber))
+ &&
+ let (Packed computation) = Fiber.get_computation fiber in
Computation.is_canceled computation
]}
@@ -944,12 +946,9 @@ module Fiber : sig
val canceled : t -> (exn * Printexc.raw_backtrace) option
(** [canceled fiber] is equivalent to:
{@ocaml skip[
- if Fiber.has_forbidden fiber then
- None
+ if Fiber.has_forbidden fiber then None
else
- let (Packed computation) =
- Fiber.get_computation fiber
- in
+ let (Packed computation) = Fiber.get_computation fiber in
Computation.canceled computation
]}
@@ -961,9 +960,7 @@ module Fiber : sig
(** [check fiber] is equivalent to:
{@ocaml skip[
if not (Fiber.has_forbidden fiber) then
- let (Packed computation) =
- Fiber.get_computation fiber
- in
+ let (Packed computation) = Fiber.get_computation fiber in
Computation.check computation
]}
@@ -984,7 +981,7 @@ module Fiber : sig
(** Fiber local storage
Fiber local storage is intended for use as a low overhead storage
- mechanism for fiber extensions. For example, one might associate a
+ mechanism for fiber extensions. For example, one might associate a
priority value with each fiber for a scheduler that uses a priority
queue or one might use FLS to store unique id values for fibers. *)
@@ -1052,31 +1049,32 @@ module Fiber : sig
val create : forbid:bool -> 'a Computation.t -> t
(** [create ~forbid computation] is equivalent to
- {{!create_packed} [create_packed ~forbid (Computation.Packed computation)]}. *)
+ {{!create_packed}
+ [create_packed ~forbid (Computation.Packed computation)]}. *)
val spawn : t -> (t -> unit) -> unit
(** [spawn fiber main] starts a new fiber by performing the {!Spawn} effect.
⚠️ Fiber records must be unique and the caller of [spawn] must make sure
- that a specific {{!fiber} fiber} record is not reused. Failure to ensure
+ that a specific {{!fiber} fiber} record is not reused. Failure to ensure
that fiber records are unique will break concurrent abstractions written
on top of the Picos interface.
- ⚠️ If the [main] function raises an exception it is considered a {i fatal
- error}. A fatal error should effectively either directly exit the program
- or stop the entire scheduler, without discontinuing existing fibers, and
- force the invocations of the scheduler on all domains to exit. What this
- means is that the caller of [spawn] {i should} ideally arrange for any
- exception to be handled by [main], but, in case that is not practical, it
- is also possible to allow an exception to propagate out of [main], which
- is then guaranteed, one way or the other, to stop the entire program.
- It is not possible to recover from a fatal error.
+ ⚠️ If the [main] function raises an exception it is considered a
+ {i fatal error}. A fatal error should effectively either directly exit the
+ program or stop the entire scheduler, without discontinuing existing
+ fibers, and force the invocations of the scheduler on all domains to exit.
+ What this means is that the caller of [spawn] {i should} ideally arrange
+ for any exception to be handled by [main], but, in case that is not
+ practical, it is also possible to allow an exception to propagate out of
+ [main], which is then guaranteed, one way or the other, to stop the
+ entire program. It is not possible to recover from a fatal error.
ℹ️ The behavior is that
- on OCaml 5, [spawn] performs the {!Spawn} effect, and
- - on OCaml 4, [spawn] will call the [spawn] operation of the {{!Handler}
- current handler}. *)
+ - on OCaml 4, [spawn] will call the [spawn] operation of the
+ {{!Handler} current handler}. *)
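Putting {!create} and [spawn] together, a library might start a fiber that reports its outcome through its computation, roughly as in this hedged sketch:
{@ocaml skip[
  (* Sketch: spawn a fiber with a fresh fiber record and a fresh computation
     and capture the outcome of [main] into the computation so that no
     exception can escape [main] as a fatal error. *)
  let start main =
    let computation = Computation.create () in
    let fiber = Fiber.create ~forbid:false computation in
    Fiber.spawn fiber (fun _fiber ->
      Computation.capture computation main ());
    computation
]}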
(** {2 Interface for structuring} *)
@@ -1142,19 +1140,19 @@ module Fiber : sig
val equal : t -> t -> bool
(** [equal l r] determines whether [l] and [r] are maybe equal.
Specifically, if either [l] or [r] or both is {!nothing}, then they are
- considered (maybe) equal. Otherwise [l] and [r] are compared for
+ considered (maybe) equal. Otherwise [l] and [r] are compared for
{{!Fiber.equal} physical equality}. *)
val unequal : t -> t -> bool
(** [equal l r] determines whether [l] and [r] are maybe unequal.
Specifically, if either [l] or [r] or both is {!nothing}, then they are
- considered (maybe) unequal. Otherwise [l] and [r] are compared for
+ considered (maybe) unequal. Otherwise [l] and [r] are compared for
{{!Fiber.equal} physical equality}. *)
(** {2 Design rationale}
The fiber identity is often needed only for the purpose of dynamically
- checking against programming errors. Unfortunately it can be relatively
+ checking against programming errors. Unfortunately it can be relatively
expensive to obtain the {{!Fiber.current} current} fiber.
As a data point, in a benchmark that increments an [int ref] protected
@@ -1172,9 +1170,9 @@ module Fiber : sig
val try_suspend :
t -> Trigger.t -> 'x -> 'y -> (Trigger.t -> 'x -> 'y -> unit) -> bool
(** [try_suspend fiber trigger x y resume] tries to suspend the [fiber] to
- await for the [trigger] to be {{!Trigger.signal} signaled}. If the result
+ await for the [trigger] to be {{!Trigger.signal} signaled}. If the result
is [false], then the [trigger] is guaranteed to be in the signaled state
- and the fiber should be eventually resumed. If the result is [true], then
+ and the fiber should be eventually resumed. If the result is [true], then
the fiber was suspended, meaning that the [trigger] will have had the
[resume] action {{!Trigger.on_signal} attached} to it and the trigger has
potentially been {{!Computation.try_attach} attached} to the
@@ -1182,7 +1180,7 @@ module Fiber : sig
val unsuspend : t -> Trigger.t -> bool
(** [unsuspend fiber trigger] makes sure that the [trigger] will not be
- attached to the computation of the [fiber]. Returns [false] in case the
+ attached to the computation of the [fiber]. Returns [false] in case the
fiber has been canceled and propagation of cancelation is not forbidden.
Otherwise returns [true].
@@ -1194,26 +1192,26 @@ module Fiber : sig
(** {2 Design rationale}
The idea is that fibers correspond 1-to-1 with independent threads of
- execution. This allows a fiber to non-atomically store state related to a
+ execution. This allows a fiber to non-atomically store state related to a
thread of execution.
The status of whether propagation of cancelation is forbidden or permitted
- could be stored in the {{!FLS} fiber local storage}. The justification
- for storing it directly with the fiber is that the implementation of some
- key synchronization and communication mechanisms, such as condition
- variables, requires the capability.
+ could be stored in the {{!FLS} fiber local storage}. The justification for
+ storing it directly with the fiber is that the implementation of some key
+ synchronization and communication mechanisms, such as condition variables,
+ requires the capability.
- No integer fiber id is provided by default. It would seem that for most
- intents and purposes the identity of the fiber is sufficient. {{!FLS}
- Fiber local storage} can be used to implement a fiber id or e.g. a fiber
- hash.
+ No integer fiber id is provided by default. It would seem that for most
+ intents and purposes the identity of the fiber is sufficient.
+ {{!FLS} Fiber local storage} can be used to implement a fiber id or e.g. a
+ fiber hash.
The {{!FLS} fiber local storage} is designed for the purpose of extending
- fibers and to be as fast as possible. It is not intended for application
+ fibers and to be as fast as possible. It is not intended for application
programming.
{!Yield} is provided as a separate effect to specifically communicate the
- intent that the current fiber should be rescheduled. This allows all the
+ intent that the current fiber should be rescheduled. This allows all the
other effect handlers more freedom in choosing which fiber to schedule
next. *)
end
@@ -1239,7 +1237,7 @@ module Handler : sig
(** See {!Picos.Trigger.await}. *)
}
(** A record of implementations of the primitive effects based operations of
- Picos. The operations take a context of type ['c] as an argument. *)
+ Picos. The operations take a context of type ['c] as an argument. *)
val using : 'c t -> 'c -> (Fiber.t -> unit) -> unit
(** [using handler context main] sets the [handler] and the [context] for the
dune build @fmt failed
"/usr/bin/env" "bash" "-c" "opam exec -- dune build @fmt --ignore-promoted-rules || (echo "dune build @fmt failed"; exit 2)" failed with exit status 2
2024-12-15 12:34.09: Job failed: Failed: Build failed