Compare commits: ipc-refact...main (16 commits)

| SHA1 |
|---|
| 1a7230ce9b |
| 32d6237dc5 |
| 06debb3636 |
| 0b2b05d44e |
| 8753d4c751 |
| 224c4ecca2 |
| 0f89cde246 |
| 85d45cf0ef |
| d211f3127d |
| 4e4dc381ea |
| ecf151158d |
| 4f989271c5 |
| 219033be0d |
| 663ff612ba |
| 603efef28e |
| b77653f841 |
Cargo.lock (generated, 903 changed lines): file diff suppressed because it is too large.

Cargo.toml (22 changed lines)
```diff
@@ -2,14 +2,16 @@
 resolver = "2"
 
 members = [
-  "orcx",
-  "orchid-std",
-  "orchid-host",
-  "orchid-extension",
-  "orchid-base",
-  "orchid-api",
-  "orchid-api-derive",
-  "orchid-api-traits",
-  "stdio-perftest",
-  "xtask", "async-fn-stream",
+  "orcx",
+  "orchid-std",
+  "orchid-host",
+  "orchid-extension",
+  "orchid-base",
+  "orchid-api",
+  "orchid-api-derive",
+  "orchid-api-traits",
+  "stdio-perftest",
+  "xtask",
+  "async-fn-stream",
+  "unsync-pipe",
 ]
```
LICENCE (new file, 11 lines)
@@ -0,0 +1,11 @@

THIS SOFTWARE IS PROVIDED WITHOUT WARRANTY

The code in this repository is free for noncommercial use, including derivative works and inclusion in other software if those are also free for noncommercial use. Commercial use, or inclusion in any derivative works licensed for commercial use is forbidden under this general licence.

Identifying marks stored in the repository are restricted for use with an unmodified copy of this software. If you distribute modified versions of this software, you must either replace these identifying marks or modify them in a way that clearly indicates that what you are distributing is a derivative work and not this official version. You must also replace any contact information in such a way that your derivative work does not suggest that we may be contacted about issues. Your derivative work may use the original identifying marks and contact information to identify this project as its basis, while emphasizing that the authors of the original project are neither in control of, nor liable for the derivative work.

Identifying marks include the Orchid logo, the ribbon image in the readme, and the names "Orchid", "Orchidlang" unless they are part of a technical interface.

Contact information includes email addresses, links to the source code and issue tracker.

Words listed as identifying marks are explicitly not considered as such when they appear in technical interfaces or APIs. For example, shell commands, identifiers within Orchid or Rust code, and names in package registries are not considered identifying marks.
LICENSE (deleted, 674 lines)
@@ -1,674 +0,0 @@
(Removed: the full verbatim text of the GNU General Public License, version 3, 29 June 2007, 674 lines.)
README.md (18 changed lines)
````diff
@@ -7,7 +7,7 @@ An experimental lazy, pure functional programming language designed to be embedd
 
 ## Usage
 
-The standalone interpreter can be built as the binary target from this package. The language tutorial and standard library documentation is at [www.lbfalvy.com/orchid-reference](https://lbfalvy.github.io/orchid-reference/). Embedder guide and Rust API documentation are coming soon.
+Updated language tutorial, standard library documentation, embedder guide and Rust API documentation coming soon.
 
 ## Design
 
@@ -19,10 +19,10 @@ Namespaces are inspired by Rust modules and ES6. Every file and directory is imp
 
 ## Try it out
 
-The project uses the nighly rust toolchain. Go to one of the folders within `examples` and run
+The project uses both the stable and nightly rust toolchain. Run the examples with
 
 ```sh
-cargo run --release
+cargo orcx -- exec --proj ./examples/hello-world "src::main::main"
 ```
 
 you can try modifying the examples, but error reporting for the time being is pretty terrible.
@@ -35,12 +35,14 @@ Orchids and mangrove trees form complex ecosystems; The flowers persuade the tre
 
 All contributions are welcome. For the time being, use the issue tracker to discuss ideas.
 
-## Forks
+Unless we agree on different terms, by contributing to this software you declare that you have created or otherwise have the right to license your contribution, agree to license it publicly under the general noncommercial licence included in this repository, and grant me (the owner of the project) a permanent, unrestricted license to use, modify, distribute and relicense your contribution. You retain ownership of your intellectual property to ensure that the copyleft protections cementing the noncommercial availability of the code are preserved.
 
-The code in this repository is available under the GNU GPLv3, but identifying marks stored in the repository are restricted for use with an unmodified copy of this software. If you distribute modified versions of this software, you must either replace these identifying marks or modify them in a way that clearly indicates that what you are distributing is a derivative work and not this official vversion. You must also replace any contact information in such a way that your derivative work does not suggest that we may be contacted about issues. Your derivative work may use the original identifying marks and contact information to identify this project as its basis, while emphasizing that the authors of the original project are neither in control of, nor liable for the derivative work.
+## About the license
 
-Identifying marks include the Orchid logo, the ribbon image above, and the names "Orchid", "Orchidlang" unless they are part of a technical interface.
+This software is free for noncommercial use. If you would like to use it for commercial purposes, or distribute your derivative work under a license that permits commercial use, contact me for a separate license. These licences are provided on a case-by-case basis with any limitations and compensation we agree on.
 
-Contact information includes email addresses, links to the source code and issue tracker.
+I generally appreciate the ethos of free software, and particularly the patterns used in copyleft to cement the guarantees of the licence. However, I don't think commercial entities fit that ethos, and I think they should be addressed separately rather than attempting to ignore the inherent unfairness towards contributors.
 
-Words listed as identifying marks are explicltly not considered as such when they appear in technical interfaces or APIs. For example, shell commands, identifiers within Orchid or Rust code, and names in package registries are not considered as identifying marks.
+My intent with the custom license included in this project is to enable the strong guarantees of copyleft towards noncommercial users, while leaving commercial users to engage with this project and its possible future ecosystem in a commercial way; if you intend to profit off my work, the barest cash flow should justify shooting me an email and agreeing on a simple temporary profit sharing deal until you figure out your business model, and the cash flow of a full scale business should more than justify dedicated attention to the software you rely on.
+
+The clause about identifying marks is intended to prevent another pitfall of open-source, wherein Linux distros borrow entire codebases, break them, and then distribute the result under the original author's name. If you would like to package Orchid, I'd be delighted if you would talk to me about making it official, but if you would rather operate independently, you should present your project as the rogue derivative work that it is rather than borrowing the original project's identity for something its owner has no control over.
````
```diff
@@ -7,4 +7,4 @@ edition = "2024"
 futures = { version = "0.3.31", features = ["std"], default-features = false }
 
 [dev-dependencies]
-test_executors = "0.3.5"
+test_executors = "0.4.1"
```
```diff
@@ -1,114 +1,44 @@
-use std::cell::Cell;
-use std::future::poll_fn;
 use std::marker::PhantomData;
-use std::pin::Pin;
-use std::ptr;
-use std::task::{Context, Poll};
 
-use futures::future::LocalBoxFuture;
-use futures::{FutureExt, Stream};
-
-type YieldSlot<'a, T> = &'a Cell<Option<T>>;
+use futures::channel::mpsc;
+use futures::stream::{PollNext, select_with_strategy};
+use futures::{FutureExt, SinkExt, Stream, StreamExt};
 
 /// Handle that allows you to emit values on a stream. If you drop
 /// this, the stream will end and you will not be polled again.
-pub struct StreamCtx<'a, T>(&'a Cell<Option<T>>, PhantomData<&'a ()>);
+pub struct StreamCtx<'a, T>(mpsc::Sender<T>, PhantomData<&'a ()>);
 impl<T> StreamCtx<'_, T> {
-  pub fn emit(&mut self, value: T) -> impl Future<Output = ()> {
-    assert!(self.0.replace(Some(value)).is_none(), "Leftover value in stream");
-    let mut state = Poll::Pending;
-    poll_fn(move |_| std::mem::replace(&mut state, Poll::Ready(())))
+  pub async fn emit(&mut self, value: T) {
+    (self.0.send(value).await)
+      .expect("Dropped a stream receiver without dropping the driving closure");
   }
 }
 
-enum FnOrFut<'a, T, O> {
-  Fn(Option<Box<dyn FnOnce(YieldSlot<'a, T>) -> LocalBoxFuture<'a, O> + 'a>>),
-  Fut(LocalBoxFuture<'a, O>),
-}
-
-struct AsyncFnStream<'a, T> {
-  driver: FnOrFut<'a, T, ()>,
-  output: Cell<Option<T>>,
-}
-impl<'a, T> Stream for AsyncFnStream<'a, T> {
-  type Item = T;
-  fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
-    unsafe {
-      let self_mut = self.get_unchecked_mut();
-      let fut = match &mut self_mut.driver {
-        FnOrFut::Fut(fut) => fut,
-        FnOrFut::Fn(f) => {
-          // safety: the cell is held inline in self, which is pinned.
-          let cell = ptr::from_ref(&self_mut.output).as_ref().unwrap();
-          let fut = f.take().unwrap()(cell);
-          self_mut.driver = FnOrFut::Fut(fut);
-          return Pin::new_unchecked(self_mut).poll_next(cx);
-        },
-      };
-      match fut.as_mut().poll(cx) {
-        Poll::Ready(()) => Poll::Ready(None),
-        Poll::Pending => match self_mut.output.replace(None) {
-          None => Poll::Pending,
-          Some(t) => Poll::Ready(Some(t)),
-        },
-      }
-    }
-  }
-}
-
-struct AsyncFnTryStream<'a, T, E> {
-  driver: FnOrFut<'a, T, Result<StreamCtx<'a, T>, E>>,
-  output: Cell<Option<T>>,
-}
-impl<'a, T, E> Stream for AsyncFnTryStream<'a, T, E> {
-  type Item = Result<T, E>;
-  fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
-    unsafe {
-      let self_mut = self.get_unchecked_mut();
-      let fut = match &mut self_mut.driver {
-        FnOrFut::Fut(fut) => fut,
-        FnOrFut::Fn(f) => {
-          // safety: the cell is held inline in self, which is pinned.
-          let cell = ptr::from_ref(&self_mut.output).as_ref().unwrap();
-          let fut = f.take().unwrap()(cell);
-          self_mut.driver = FnOrFut::Fut(fut);
-          return Pin::new_unchecked(self_mut).poll_next(cx);
-        },
-      };
-      match fut.as_mut().poll(cx) {
-        Poll::Ready(Ok(_)) => Poll::Ready(None),
-        Poll::Ready(Err(ex)) => Poll::Ready(Some(Err(ex))),
-        Poll::Pending => match self_mut.output.replace(None) {
-          None => Poll::Pending,
-          Some(t) => Poll::Ready(Some(Ok(t))),
-        },
-      }
-    }
-  }
-}
+fn left_strat(_: &mut ()) -> PollNext { PollNext::Left }
 
 /// Create a stream from an async function acting as a coroutine
 pub fn stream<'a, T: 'a>(
   f: impl for<'b> AsyncFnOnce(StreamCtx<'b, T>) + 'a,
 ) -> impl Stream<Item = T> + 'a {
-  AsyncFnStream {
-    output: Cell::new(None),
-    driver: FnOrFut::Fn(Some(Box::new(|t| {
-      async { f(StreamCtx(t, PhantomData)).await }.boxed_local()
-    }))),
-  }
+  let (send, recv) = mpsc::channel::<T>(1);
+  let fut = async { f(StreamCtx(send, PhantomData)).await };
+  // use options to ensure that the stream is driven to exhaustion
+  select_with_strategy(fut.into_stream().map(|()| None), recv.map(|t| Some(t)), left_strat)
+    .filter_map(async |opt| opt)
 }
 
 /// Create a stream of result from a fallible function.
 pub fn try_stream<'a, T: 'a, E: 'a>(
   f: impl for<'b> AsyncFnOnce(StreamCtx<'b, T>) -> Result<StreamCtx<'b, T>, E> + 'a,
 ) -> impl Stream<Item = Result<T, E>> + 'a {
-  AsyncFnTryStream {
-    output: Cell::new(None),
-    driver: FnOrFut::Fn(Some(Box::new(|t| {
-      async { f(StreamCtx(t, PhantomData)).await }.boxed_local()
-    }))),
-  }
+  let (send, recv) = mpsc::channel::<T>(1);
+  let fut = async { f(StreamCtx(send, PhantomData)).await };
+  select_with_strategy(
+    fut.into_stream().map(|res| if let Err(e) = res { Some(Err(e)) } else { None }),
+    recv.map(|t| Some(Ok(t))),
+    left_strat,
+  )
+  .filter_map(async |opt| opt)
 }
 
 #[cfg(test)]
```
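The new implementation above replaces the hand-rolled pinned-`Cell` state machine with a bounded `mpsc` channel that is polled alongside the driving future, so `StreamCtx::emit` simply awaits a channel send. A minimal usage sketch (not part of the diff), assuming the crate is consumed as `async_fn_stream` on a toolchain with async closures:

```rust
use futures::StreamExt;

// Hypothetical consumer of the channel-based `stream` helper above.
async fn collect_three() -> Vec<u32> {
  let s = async_fn_stream::stream(async |mut cx| {
    for i in 0..3 {
      // `emit` awaits the bounded channel, so the producer is paced by the consumer.
      cx.emit(i).await;
    }
  });
  s.collect().await
}
```

The `left_strat` helper always polls the driving future first, so the closure keeps making progress between emitted items.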
```diff
@@ -1,2 +1,6 @@
-let user = "dave"
-let main = println "Hello $user!" exit_status::success
+let my_tuple = option::some t[1, 2]
+
+let main = match my_tuple {
+  option::some t[ref head, ..] => head;
+  option::none => "foo";
+}
```
```diff
@@ -9,8 +9,8 @@ proc-macro = true
 # See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
 
 [dependencies]
-quote = "1.0.40"
-syn = { version = "2.0.106" }
+quote = "1.0.42"
+syn = { version = "2.0.112" }
 orchid-api-traits = { version = "0.1.0", path = "../orchid-api-traits" }
-proc-macro2 = "1.0.101"
+proc-macro2 = "1.0.104"
 itertools = "0.14.0"
```
```diff
@@ -14,8 +14,8 @@ pub fn derive(input: TokenStream) -> TokenStream {
     impl #impl_generics orchid_api_traits::Decode for #name #ty_generics #where_clause {
       async fn decode<R: orchid_api_traits::AsyncRead + ?Sized>(
         mut read: std::pin::Pin<&mut R>
-      ) -> Self {
-        #decode
+      ) -> std::io::Result<Self> {
+        Ok(#decode)
       }
     }
   };
@@ -30,7 +30,7 @@ fn decode_fields(fields: &syn::Fields) -> pm2::TokenStream {
     let syn::Field { ty, ident, .. } = &f;
     quote! {
       #ident : (Box::pin(< #ty as orchid_api_traits::Decode>::decode(read.as_mut()))
-        as std::pin::Pin<Box<dyn std::future::Future<Output = _>>>).await
+        as std::pin::Pin<Box<dyn std::future::Future<Output = std::io::Result<_>>>>).await?
     }
   });
   quote! { { #( #exprs, )* } }
@@ -40,7 +40,7 @@ fn decode_fields(fields: &syn::Fields) -> pm2::TokenStream {
     let ty = &field.ty;
     quote! {
       (Box::pin(< #ty as orchid_api_traits::Decode>::decode(read.as_mut()))
-        as std::pin::Pin<Box<dyn std::future::Future<Output = _>>>).await,
+        as std::pin::Pin<Box<dyn std::future::Future<Output = std::io::Result<_>>>>).await?,
     }
   });
   quote! { ( #( #exprs )* ) }
@@ -62,7 +62,7 @@ fn decode_body(data: &syn::Data) -> proc_macro2::TokenStream {
     quote! { #id => Self::#ident #fields, }
   });
   quote! {
-    match <u8 as orchid_api_traits::Decode>::decode(read.as_mut()).await {
+    match <u8 as orchid_api_traits::Decode>::decode(read.as_mut()).await? {
      #(#opts)*
      x => panic!("Unrecognized enum kind {x}")
    }
```
```diff
@@ -17,8 +17,9 @@ pub fn derive(input: TokenStream) -> TokenStream {
       async fn encode<W: orchid_api_traits::AsyncWrite + ?Sized>(
         &self,
         mut write: std::pin::Pin<&mut W>
-      ) {
-        #encode
+      ) -> std::io::Result<()> {
+        #encode;
+        Ok(())
       }
     }
   };
@@ -43,7 +44,7 @@ fn encode_body(data: &syn::Data) -> Option<pm2::TokenStream> {
     quote! {
       Self::#ident #dest => {
         (Box::pin((#i as u8).encode(write.as_mut()))
-          as std::pin::Pin<Box<dyn std::future::Future<Output = _>>>).await;
+          as std::pin::Pin<Box<dyn std::future::Future<Output = std::io::Result<()>>>>).await?;
         #body
       }
     }
@@ -61,7 +62,7 @@ fn encode_body(data: &syn::Data) -> Option<pm2::TokenStream> {
 fn encode_names<T: ToTokens>(names: impl Iterator<Item = T>) -> pm2::TokenStream {
   quote! { #(
     (Box::pin(#names .encode(write.as_mut()))
-      as std::pin::Pin<Box<dyn std::future::Future<Output = _>>>).await;
+      as std::pin::Pin<Box<dyn std::future::Future<Output = std::io::Result<()>>>>).await?;
   )* }
 }
```
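The two derive hunks above switch the generated `encode`/`decode` bodies from infallible futures to `std::io::Result`, threading errors out with `?` and an outer `Ok(...)`. A hypothetical example of deriving both traits for a message struct, assuming the macros are exported from `orchid-api-derive` under the names `Encode` and `Decode`:

```rust
use orchid_api_derive::{Decode, Encode};

// Hypothetical wire message; every field must itself implement the
// orchid-api-traits Encode and Decode traits.
#[derive(Clone, Debug, Encode, Decode)]
pub struct Greeting {
  pub version: u16,
  pub session: u64,
}
```

With the fallible signatures, decoding over a real transport surfaces I/O errors to the caller instead of panicking inside the generated code.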
```diff
@@ -6,8 +6,7 @@ edition = "2024"
 # See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
 
 [dependencies]
-async-fn-stream = { version = "0.1.0", path = "../async-fn-stream" }
 futures = { version = "0.3.31", features = ["std"], default-features = false }
 itertools = "0.14.0"
 never = "0.1.0"
-ordered-float = "5.0.0"
+ordered-float = "5.1.0"
```
@@ -1,33 +1,44 @@
|
||||
use std::collections::HashMap;
|
||||
use std::future::Future;
|
||||
use std::hash::Hash;
|
||||
use std::io;
|
||||
use std::num::NonZero;
|
||||
use std::ops::{Range, RangeInclusive};
|
||||
use std::pin::Pin;
|
||||
use std::rc::Rc;
|
||||
use std::sync::Arc;
|
||||
|
||||
use async_fn_stream::stream;
|
||||
use futures::{AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt, StreamExt};
|
||||
use futures::{AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt};
|
||||
use never::Never;
|
||||
use ordered_float::NotNan;
|
||||
|
||||
use crate::encode_enum;
|
||||
use crate::{decode_err, decode_err_for, encode_enum, spin_on};
|
||||
|
||||
pub trait Decode: 'static {
|
||||
pub trait Decode: 'static + Sized {
|
||||
/// Decode an instance from the beginning of the buffer. Return the decoded
|
||||
/// data and the remaining buffer.
|
||||
fn decode<R: AsyncRead + ?Sized>(read: Pin<&mut R>) -> impl Future<Output = Self> + '_;
|
||||
fn decode<R: AsyncRead + ?Sized>(
|
||||
read: Pin<&mut R>,
|
||||
) -> impl Future<Output = io::Result<Self>> + '_;
|
||||
fn decode_slice(slc: &mut &[u8]) -> Self {
|
||||
spin_on(Self::decode(Pin::new(slc) as Pin<&mut _>)).expect("Decode from slice cannot fail")
|
||||
}
|
||||
}
|
||||
pub trait Encode {
|
||||
/// Append an instance of the struct to the buffer
|
||||
fn encode<W: AsyncWrite + ?Sized>(&self, write: Pin<&mut W>) -> impl Future<Output = ()>;
|
||||
fn encode<W: AsyncWrite + ?Sized>(
|
||||
&self,
|
||||
write: Pin<&mut W>,
|
||||
) -> impl Future<Output = io::Result<()>>;
|
||||
fn encode_vec(&self, vec: &mut Vec<u8>) {
|
||||
spin_on(self.encode(Pin::new(vec) as Pin<&mut _>)).expect("Encode to vector cannot fail")
|
||||
}
|
||||
}
|
||||
pub trait Coding: Encode + Decode + Clone {
|
||||
fn get_decoder<T: 'static, F: Future<Output = T> + 'static>(
|
||||
map: impl Fn(Self) -> F + Clone + 'static,
|
||||
) -> impl AsyncFn(Pin<&mut dyn AsyncRead>) -> T {
|
||||
async move |r| map(Self::decode(r).await).await
|
||||
fn get_decoder<T: 'static>(
|
||||
map: impl AsyncFn(Self) -> T + Clone + 'static,
|
||||
) -> impl AsyncFn(Pin<&mut dyn AsyncRead>) -> io::Result<T> {
|
||||
async move |r| Ok(map(Self::decode(r).await?).await)
|
||||
}
|
||||
}
|
||||
impl<T: Encode + Decode + Clone> Coding for T {}
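// Illustrative sketch, not part of the changeset: a hand-written pair of impls
// for a hypothetical `Meters` newtype, using the fallible signatures above and
// the imports already at the top of this file. Errors propagate with `?`
// instead of unwrapping.
#[derive(Clone, Debug, PartialEq)]
struct Meters(u32);

impl Decode for Meters {
  async fn decode<R: AsyncRead + ?Sized>(read: Pin<&mut R>) -> io::Result<Self> {
    Ok(Meters(u32::decode(read).await?))
  }
}
impl Encode for Meters {
  async fn encode<W: AsyncWrite + ?Sized>(&self, write: Pin<&mut W>) -> io::Result<()> {
    self.0.encode(write).await
  }
}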
|
||||
@@ -35,15 +46,15 @@ impl<T: Encode + Decode + Clone> Coding for T {}
|
||||
macro_rules! num_impl {
|
||||
($number:ty) => {
|
||||
impl Decode for $number {
|
||||
async fn decode<R: AsyncRead + ?Sized>(mut read: Pin<&mut R>) -> Self {
|
||||
async fn decode<R: AsyncRead + ?Sized>(mut read: Pin<&mut R>) -> io::Result<Self> {
|
||||
let mut bytes = [0u8; (<$number>::BITS / 8) as usize];
|
||||
read.read_exact(&mut bytes).await.unwrap();
|
||||
<$number>::from_be_bytes(bytes)
|
||||
read.read_exact(&mut bytes).await?;
|
||||
Ok(<$number>::from_be_bytes(bytes))
|
||||
}
|
||||
}
|
||||
impl Encode for $number {
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, mut write: Pin<&mut W>) {
|
||||
write.write_all(&self.to_be_bytes()).await.expect("Could not write number")
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, mut write: Pin<&mut W>) -> io::Result<()> {
|
||||
write.write_all(&self.to_be_bytes()).await
|
||||
}
|
||||
}
|
||||
};
|
||||
@@ -62,12 +73,12 @@ num_impl!(i8);
|
||||
macro_rules! nonzero_impl {
|
||||
($name:ty) => {
|
||||
impl Decode for NonZero<$name> {
|
||||
async fn decode<R: AsyncRead + ?Sized>(read: Pin<&mut R>) -> Self {
|
||||
Self::new(<$name as Decode>::decode(read).await).unwrap()
|
||||
async fn decode<R: AsyncRead + ?Sized>(read: Pin<&mut R>) -> io::Result<Self> {
|
||||
Self::new(<$name as Decode>::decode(read).await?).ok_or_else(decode_err)
|
||||
}
|
||||
}
|
||||
impl Encode for NonZero<$name> {
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, write: Pin<&mut W>) {
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, write: Pin<&mut W>) -> io::Result<()> {
|
||||
self.get().encode(write).await
|
||||
}
|
||||
}
|
||||
@@ -86,22 +97,22 @@ nonzero_impl!(i64);
|
||||
nonzero_impl!(i128);
|
||||
|
||||
impl<T: Encode + ?Sized> Encode for &T {
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, write: Pin<&mut W>) {
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, write: Pin<&mut W>) -> io::Result<()> {
|
||||
(**self).encode(write).await
|
||||
}
|
||||
}
|
||||
macro_rules! float_impl {
|
||||
($t:ty, $size:expr) => {
|
||||
impl Decode for NotNan<$t> {
|
||||
async fn decode<R: AsyncRead + ?Sized>(mut read: Pin<&mut R>) -> Self {
|
||||
async fn decode<R: AsyncRead + ?Sized>(mut read: Pin<&mut R>) -> io::Result<Self> {
|
||||
let mut bytes = [0u8; $size];
|
||||
read.read_exact(&mut bytes).await.unwrap();
|
||||
NotNan::new(<$t>::from_be_bytes(bytes)).expect("Float was NaN")
|
||||
read.read_exact(&mut bytes).await?;
|
||||
NotNan::new(<$t>::from_be_bytes(bytes)).map_err(|_| decode_err())
|
||||
}
|
||||
}
|
||||
impl Encode for NotNan<$t> {
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, mut write: Pin<&mut W>) {
|
||||
write.write_all(&self.as_ref().to_be_bytes()).await.expect("Could not write number")
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, mut write: Pin<&mut W>) -> io::Result<()> {
|
||||
write.write_all(&self.as_ref().to_be_bytes()).await
|
||||
}
|
||||
}
|
||||
};
|
||||
@@ -111,78 +122,77 @@ float_impl!(f64, 8);
|
||||
float_impl!(f32, 4);
|
||||
|
||||
impl Decode for String {
|
||||
async fn decode<R: AsyncRead + ?Sized>(mut read: Pin<&mut R>) -> Self {
|
||||
let len = u64::decode(read.as_mut()).await.try_into().unwrap();
|
||||
async fn decode<R: AsyncRead + ?Sized>(mut read: Pin<&mut R>) -> io::Result<Self> {
|
||||
let len: usize = u64::decode(read.as_mut()).await?.try_into().map_err(decode_err_for)?;
|
||||
let mut data = vec![0u8; len];
|
||||
read.read_exact(&mut data).await.unwrap();
|
||||
std::str::from_utf8(&data).expect("String invalid UTF-8").to_owned()
|
||||
read.read_exact(&mut data).await?;
|
||||
Ok(std::str::from_utf8(&data).map_err(decode_err_for)?.to_owned())
|
||||
}
|
||||
}
|
||||
impl Encode for String {
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, mut write: Pin<&mut W>) {
|
||||
u64::try_from(self.len()).unwrap().encode(write.as_mut()).await;
|
||||
write.write_all(self.as_bytes()).await.unwrap()
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, mut write: Pin<&mut W>) -> io::Result<()> {
|
||||
u64::try_from(self.len()).map_err(decode_err_for)?.encode(write.as_mut()).await?;
|
||||
write.write_all(self.as_bytes()).await
|
||||
}
|
||||
}
|
||||
impl Encode for str {
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, mut write: Pin<&mut W>) {
|
||||
u64::try_from(self.len()).unwrap().encode(write.as_mut()).await;
|
||||
write.write_all(self.as_bytes()).await.unwrap()
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, mut write: Pin<&mut W>) -> io::Result<()> {
|
||||
u64::try_from(self.len()).map_err(decode_err_for)?.encode(write.as_mut()).await?;
|
||||
write.write_all(self.as_bytes()).await
|
||||
}
|
||||
}
|
||||
impl<T: Decode> Decode for Vec<T> {
|
||||
async fn decode<R: AsyncRead + ?Sized>(mut read: Pin<&mut R>) -> Self {
|
||||
let len = u64::decode(read.as_mut()).await.try_into().unwrap();
|
||||
stream(async |mut cx| {
|
||||
for _ in 0..len {
|
||||
cx.emit(T::decode(read.as_mut()).await).await
|
||||
}
|
||||
})
|
||||
.collect()
|
||||
.await
|
||||
async fn decode<R: AsyncRead + ?Sized>(mut read: Pin<&mut R>) -> io::Result<Self> {
|
||||
let len = u64::decode(read.as_mut()).await?;
|
||||
let mut values = Vec::with_capacity(len.try_into().map_err(decode_err_for)?);
|
||||
for _ in 0..len {
|
||||
values.push(T::decode(read.as_mut()).await?);
|
||||
}
|
||||
Ok(values)
|
||||
}
|
||||
}
|
||||
impl<T: Encode> Encode for Vec<T> {
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, write: Pin<&mut W>) {
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, write: Pin<&mut W>) -> io::Result<()> {
|
||||
self.as_slice().encode(write).await
|
||||
}
|
||||
}
|
||||
impl<T: Encode> Encode for [T] {
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, mut write: Pin<&mut W>) {
|
||||
u64::try_from(self.len()).unwrap().encode(write.as_mut()).await;
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, mut write: Pin<&mut W>) -> io::Result<()> {
|
||||
u64::try_from(self.len()).unwrap().encode(write.as_mut()).await?;
|
||||
for t in self.iter() {
|
||||
t.encode(write.as_mut()).await
|
||||
t.encode(write.as_mut()).await?
|
||||
}
|
||||
Ok(())
|
||||
}
|
||||
}
|
||||
impl<T: Decode> Decode for Option<T> {
|
||||
async fn decode<R: AsyncRead + ?Sized>(mut read: Pin<&mut R>) -> Self {
|
||||
match u8::decode(read.as_mut()).await {
|
||||
0 => None,
|
||||
1 => Some(T::decode(read).await),
|
||||
x => panic!("{x} is not a valid option value"),
|
||||
}
|
||||
async fn decode<R: AsyncRead + ?Sized>(mut read: Pin<&mut R>) -> io::Result<Self> {
|
||||
Ok(match bool::decode(read.as_mut()).await? {
|
||||
false => None,
|
||||
true => Some(T::decode(read).await?),
|
||||
})
|
||||
}
|
||||
}
|
||||
impl<T: Encode> Encode for Option<T> {
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, mut write: Pin<&mut W>) {
|
||||
let t = if let Some(t) = self { t } else { return 0u8.encode(write.as_mut()).await };
|
||||
1u8.encode(write.as_mut()).await;
|
||||
t.encode(write).await;
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, mut write: Pin<&mut W>) -> io::Result<()> {
|
||||
self.is_some().encode(write.as_mut()).await?;
|
||||
if let Some(t) = self {
|
||||
t.encode(write).await?
|
||||
}
|
||||
Ok(())
|
||||
}
|
||||
}
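// Illustrative sketch, not part of the changeset: with the bool tag above,
// Some(7u8) serializes as 0xff (see the bool impl further down) followed by
// the payload byte, and round-trips through the synchronous helpers.
#[test]
fn option_wire_format() {
  let mut bytes = Vec::new();
  Some(7u8).encode_vec(&mut bytes);
  assert_eq!(bytes, [0xff, 0x07]);
  assert_eq!(Option::<u8>::decode_slice(&mut &bytes[..]), Some(7));
}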
|
||||
impl<T: Decode, E: Decode> Decode for Result<T, E> {
|
||||
async fn decode<R: AsyncRead + ?Sized>(mut read: Pin<&mut R>) -> Self {
|
||||
match u8::decode(read.as_mut()).await {
|
||||
0 => Self::Ok(T::decode(read).await),
|
||||
1 => Self::Err(E::decode(read).await),
|
||||
x => panic!("Invalid Result tag {x}"),
|
||||
}
|
||||
async fn decode<R: AsyncRead + ?Sized>(mut read: Pin<&mut R>) -> io::Result<Self> {
|
||||
Ok(match bool::decode(read.as_mut()).await? {
|
||||
false => Self::Ok(T::decode(read).await?),
|
||||
true => Self::Err(E::decode(read).await?),
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
impl<T: Encode, E: Encode> Encode for Result<T, E> {
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, write: Pin<&mut W>) {
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, write: Pin<&mut W>) -> io::Result<()> {
|
||||
match self {
|
||||
Ok(t) => encode_enum(write, 0, |w| t.encode(w)).await,
|
||||
Err(e) => encode_enum(write, 1, |w| e.encode(w)).await,
|
||||
@@ -190,36 +200,37 @@ impl<T: Encode, E: Encode> Encode for Result<T, E> {
|
||||
}
|
||||
}
|
||||
impl<K: Decode + Eq + Hash, V: Decode> Decode for HashMap<K, V> {
|
||||
async fn decode<R: AsyncRead + ?Sized>(mut read: Pin<&mut R>) -> Self {
|
||||
let len = u64::decode(read.as_mut()).await.try_into().unwrap();
|
||||
stream(async |mut cx| {
|
||||
for _ in 0..len {
|
||||
cx.emit(<(K, V)>::decode(read.as_mut()).await).await
|
||||
}
|
||||
})
|
||||
.collect()
|
||||
.await
|
||||
async fn decode<R: AsyncRead + ?Sized>(mut read: Pin<&mut R>) -> io::Result<Self> {
|
||||
let len = u64::decode(read.as_mut()).await?;
|
||||
let mut map = HashMap::with_capacity(len.try_into().map_err(decode_err_for)?);
|
||||
for _ in 0..len {
|
||||
map.insert(K::decode(read.as_mut()).await?, V::decode(read.as_mut()).await?);
|
||||
}
|
||||
Ok(map)
|
||||
}
|
||||
}
|
||||
impl<K: Encode + Eq + Hash, V: Encode> Encode for HashMap<K, V> {
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, mut write: Pin<&mut W>) {
|
||||
u64::try_from(self.len()).unwrap().encode(write.as_mut()).await;
|
||||
for pair in self.iter() {
|
||||
pair.encode(write.as_mut()).await
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, mut write: Pin<&mut W>) -> io::Result<()> {
|
||||
u64::try_from(self.len()).unwrap().encode(write.as_mut()).await?;
|
||||
for (key, value) in self.iter() {
|
||||
key.encode(write.as_mut()).await?;
|
||||
value.encode(write.as_mut()).await?;
|
||||
}
|
||||
Ok(())
|
||||
}
|
||||
}
|
||||
macro_rules! tuple {
|
||||
(($($t:ident)*) ($($T:ident)*)) => {
|
||||
impl<$($T: Decode),*> Decode for ($($T,)*) {
|
||||
async fn decode<R: AsyncRead + ?Sized>(mut read: Pin<&mut R>) -> Self {
|
||||
($($T::decode(read.as_mut()).await,)*)
|
||||
async fn decode<R: AsyncRead + ?Sized>(mut read: Pin<&mut R>) -> io::Result<Self> {
|
||||
Ok(($($T::decode(read.as_mut()).await?,)*))
|
||||
}
|
||||
}
|
||||
impl<$($T: Encode),*> Encode for ($($T,)*) {
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, mut write: Pin<&mut W>) {
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, mut write: Pin<&mut W>) -> io::Result<()> {
|
||||
let ($($t,)*) = self;
|
||||
$( $t.encode(write.as_mut()).await; )*
|
||||
$( $t.encode(write.as_mut()).await?; )*
|
||||
Ok(())
|
||||
}
|
||||
}
|
||||
};
|
||||
@@ -243,63 +254,67 @@ tuple!((t u v x y z a b c d e f g h i) (T U V X Y Z A B C D E F G H I));
|
||||
tuple!((t u v x y z a b c d e f g h i j) (T U V X Y Z A B C D E F G H I J)); // 16
|
||||
|
||||
impl Decode for () {
|
||||
async fn decode<R: AsyncRead + ?Sized>(_: Pin<&mut R>) -> Self {}
|
||||
async fn decode<R: AsyncRead + ?Sized>(_: Pin<&mut R>) -> io::Result<Self> { Ok(()) }
|
||||
}
|
||||
impl Encode for () {
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, _: Pin<&mut W>) {}
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, _: Pin<&mut W>) -> io::Result<()> { Ok(()) }
|
||||
}
|
||||
impl Decode for Never {
|
||||
async fn decode<R: AsyncRead + ?Sized>(_: Pin<&mut R>) -> Self {
|
||||
async fn decode<R: AsyncRead + ?Sized>(_: Pin<&mut R>) -> io::Result<Self> {
|
||||
unreachable!("A value of Never cannot exist so it can't have been serialized");
|
||||
}
|
||||
}
|
||||
impl Encode for Never {
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, _: Pin<&mut W>) { match *self {} }
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, _: Pin<&mut W>) -> io::Result<()> {
|
||||
match *self {}
|
||||
}
|
||||
}
|
||||
impl Decode for bool {
|
||||
async fn decode<R: AsyncRead + ?Sized>(mut read: Pin<&mut R>) -> Self {
|
||||
async fn decode<R: AsyncRead + ?Sized>(mut read: Pin<&mut R>) -> io::Result<Self> {
|
||||
let mut buf = [0];
|
||||
read.read_exact(&mut buf).await.unwrap();
|
||||
buf[0] != 0
|
||||
read.read_exact(&mut buf).await?;
|
||||
Ok(buf[0] != 0)
|
||||
}
|
||||
}
|
||||
impl Encode for bool {
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, mut write: Pin<&mut W>) {
|
||||
write.write_all(&[if *self { 0xffu8 } else { 0u8 }]).await.unwrap()
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, mut write: Pin<&mut W>) -> io::Result<()> {
|
||||
write.write_all(&[if *self { 0xffu8 } else { 0u8 }]).await
|
||||
}
|
||||
}
|
||||
impl<T: Decode, const N: usize> Decode for [T; N] {
|
||||
async fn decode<R: AsyncRead + ?Sized>(mut read: Pin<&mut R>) -> Self {
|
||||
let v = stream(async |mut cx| {
|
||||
for _ in 0..N {
|
||||
cx.emit(T::decode(read.as_mut()).await).await
|
||||
}
|
||||
})
|
||||
.collect::<Vec<_>>()
|
||||
.await;
|
||||
v.try_into().unwrap_or_else(|_| unreachable!("The length of this stream is statically known"))
|
||||
async fn decode<R: AsyncRead + ?Sized>(mut read: Pin<&mut R>) -> io::Result<Self> {
|
||||
let mut v = Vec::with_capacity(N);
|
||||
for _ in 0..N {
|
||||
v.push(T::decode(read.as_mut()).await?);
|
||||
}
|
||||
match v.try_into() {
|
||||
Err(_) => unreachable!("The length of this stream is statically known"),
|
||||
Ok(arr) => Ok(arr),
|
||||
}
|
||||
}
|
||||
}
|
||||
impl<T: Encode, const N: usize> Encode for [T; N] {
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, mut write: Pin<&mut W>) {
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, mut write: Pin<&mut W>) -> io::Result<()> {
|
||||
for t in self.iter() {
|
||||
t.encode(write.as_mut()).await
|
||||
t.encode(write.as_mut()).await?
|
||||
}
|
||||
Ok(())
|
||||
}
|
||||
}
|
||||
|
||||
macro_rules! two_end_range {
|
||||
($this:ident, $name:tt, $op:tt, $start:expr, $end:expr) => {
|
||||
impl<T: Decode> Decode for $name<T> {
|
||||
async fn decode<R: AsyncRead + ?Sized>(mut read: Pin<&mut R>) -> Self {
|
||||
T::decode(read.as_mut()).await $op T::decode(read).await
|
||||
async fn decode<R: AsyncRead + ?Sized>(mut read: Pin<&mut R>) -> io::Result<Self> {
|
||||
Ok(T::decode(read.as_mut()).await? $op T::decode(read).await?)
|
||||
}
|
||||
}
|
||||
impl<T: Encode> Encode for $name<T> {
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, mut write: Pin<&mut W>) {
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, mut write: Pin<&mut W>) -> io::Result<()> {
|
||||
let $this = self;
|
||||
($start).encode(write.as_mut()).await;
|
||||
($end).encode(write).await;
|
||||
($start).encode(write.as_mut()).await?;
|
||||
($end).encode(write).await?;
|
||||
Ok(())
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -311,12 +326,12 @@ two_end_range!(x, RangeInclusive, ..=, x.start(), x.end());
|
||||
macro_rules! smart_ptr {
|
||||
($name:tt) => {
|
||||
impl<T: Decode> Decode for $name<T> {
|
||||
async fn decode<R: AsyncRead + ?Sized>(read: Pin<&mut R>) -> Self {
|
||||
$name::new(T::decode(read).await)
|
||||
async fn decode<R: AsyncRead + ?Sized>(read: Pin<&mut R>) -> io::Result<Self> {
|
||||
Ok($name::new(T::decode(read).await?))
|
||||
}
|
||||
}
|
||||
impl<T: Encode> Encode for $name<T> {
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, write: Pin<&mut W>) {
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, write: Pin<&mut W>) -> io::Result<()> {
|
||||
(**self).encode(write).await
|
||||
}
|
||||
}
|
||||
@@ -328,12 +343,12 @@ smart_ptr!(Rc);
|
||||
smart_ptr!(Box);
|
||||
|
||||
impl Decode for char {
|
||||
async fn decode<R: AsyncRead + ?Sized>(read: Pin<&mut R>) -> Self {
|
||||
char::from_u32(u32::decode(read).await).unwrap()
|
||||
async fn decode<R: AsyncRead + ?Sized>(read: Pin<&mut R>) -> io::Result<Self> {
|
||||
char::from_u32(u32::decode(read).await?).ok_or_else(decode_err)
|
||||
}
|
||||
}
|
||||
impl Encode for char {
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, write: Pin<&mut W>) {
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, write: Pin<&mut W>) -> io::Result<()> {
|
||||
(*self as u32).encode(write).await
|
||||
}
|
||||
}
|
||||
|
||||
@@ -1,24 +1,24 @@
|
||||
use std::future::Future;
|
||||
use std::pin::Pin;
|
||||
use std::error::Error;
|
||||
use std::io;
|
||||
use std::pin::{Pin, pin};
|
||||
use std::sync::Arc;
|
||||
use std::sync::atomic::{AtomicBool, Ordering};
|
||||
use std::task::{Context, Poll, Wake};
|
||||
|
||||
use futures::{AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt};
|
||||
use futures::{AsyncRead, AsyncReadExt, AsyncWrite};
|
||||
use itertools::{Chunk, Itertools};
|
||||
|
||||
use crate::Encode;
|
||||
|
||||
pub async fn encode_enum<'a, W: AsyncWrite + ?Sized, F: Future<Output = ()>>(
|
||||
pub async fn encode_enum<'a, W: AsyncWrite + ?Sized>(
|
||||
mut write: Pin<&'a mut W>,
|
||||
id: u8,
|
||||
f: impl FnOnce(Pin<&'a mut W>) -> F,
|
||||
) {
|
||||
id.encode(write.as_mut()).await;
|
||||
f: impl AsyncFnOnce(Pin<&'a mut W>) -> io::Result<()>,
|
||||
) -> io::Result<()> {
|
||||
id.encode(write.as_mut()).await?;
|
||||
f(write).await
|
||||
}
|
||||
|
||||
pub async fn write_exact<W: AsyncWrite + ?Sized>(mut write: Pin<&mut W>, bytes: &'static [u8]) {
|
||||
write.write_all(bytes).await.expect("Failed to write exact bytes")
|
||||
}
|
||||
|
||||
pub fn print_bytes(b: &[u8]) -> String {
|
||||
(b.iter().map(|b| format!("{b:02x}")))
|
||||
.chunks(4)
|
||||
@@ -27,16 +27,52 @@ pub fn print_bytes(b: &[u8]) -> String {
|
||||
.join(" ")
|
||||
}
|
||||
|
||||
pub async fn read_exact<R: AsyncRead + ?Sized>(mut read: Pin<&mut R>, bytes: &'static [u8]) {
|
||||
pub async fn read_exact<R: AsyncRead + ?Sized>(
|
||||
mut read: Pin<&mut R>,
|
||||
bytes: &'static [u8],
|
||||
) -> io::Result<()> {
|
||||
let mut data = vec![0u8; bytes.len()];
|
||||
read.read_exact(&mut data).await.expect("Failed to read bytes");
|
||||
if data != bytes {
|
||||
panic!("Wrong bytes!\nExpected: {}\nFound: {}", print_bytes(bytes), print_bytes(&data));
|
||||
read.read_exact(&mut data).await?;
|
||||
if data == bytes {
|
||||
Ok(())
|
||||
} else {
|
||||
let msg =
|
||||
format!("Wrong bytes!\nExpected: {}\nFound: {}", print_bytes(bytes), print_bytes(&data));
|
||||
Err(io::Error::new(io::ErrorKind::InvalidData, msg))
|
||||
}
|
||||
}
|
||||
|
||||
pub async fn enc_vec(enc: &impl Encode) -> Vec<u8> {
|
||||
pub fn enc_vec(enc: &impl Encode) -> Vec<u8> {
|
||||
let mut vec = Vec::new();
|
||||
enc.encode(Pin::new(&mut vec)).await;
|
||||
enc.encode_vec(&mut vec);
|
||||
vec
|
||||
}
|
||||
|
||||
/// Raises a bool flag when called
|
||||
struct FlagWaker(AtomicBool);
|
||||
impl Wake for FlagWaker {
|
||||
fn wake(self: Arc<Self>) { self.0.store(true, Ordering::Relaxed) }
|
||||
}
|
||||
|
||||
pub fn spin_on<F: Future>(fut: F) -> F::Output {
|
||||
let flag = AtomicBool::new(false);
|
||||
let flag_waker = Arc::new(FlagWaker(flag));
|
||||
let mut future = pin!(fut);
|
||||
loop {
|
||||
let waker = flag_waker.clone().into();
|
||||
let mut ctx = Context::from_waker(&waker);
|
||||
match future.as_mut().poll(&mut ctx) {
|
||||
// ideally the future should return synchronously
|
||||
Poll::Ready(res) => break res,
|
||||
// poorly written futures may yield and immediately wake
|
||||
Poll::Pending if flag_waker.0.load(Ordering::Relaxed) => (),
|
||||
// there is no external event to wait for, this has to be a deadlock
|
||||
Poll::Pending => panic!("Future inside spin_on cannot block"),
|
||||
};
|
||||
}
|
||||
}
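// Illustrative sketch, not part of the changeset: spin_on only suits futures
// that never genuinely suspend, such as encoding into an in-memory Vec<u8>,
// which is exactly how encode_vec and decode_slice use it.
#[test]
fn spin_on_sync_encode() {
  let mut buf = Vec::<u8>::new();
  // Writing to a Vec never returns Poll::Pending, so this completes at once.
  spin_on(42u32.encode(Pin::new(&mut buf))).unwrap();
  assert_eq!(buf, [0, 0, 0, 42]);
}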
|
||||
|
||||
pub fn decode_err() -> io::Error { io::Error::new(io::ErrorKind::InvalidData, "Unexpected zero") }
|
||||
pub fn decode_err_for(e: impl Error) -> io::Error {
|
||||
io::Error::new(io::ErrorKind::InvalidData, e.to_string())
|
||||
}
|
||||
|
||||
@@ -29,25 +29,21 @@ pub trait Extends: InHierarchy<IsRoot = TLFalse> + Into<Self::Parent> {
|
||||
pub trait UnderRootImpl<IsRoot: TLBool>: Sized {
|
||||
type __Root: UnderRoot<IsRoot = TLTrue, Root = Self::__Root>;
|
||||
fn __into_root(self) -> Self::__Root;
|
||||
fn __try_from_root(root: Self::__Root) -> Result<Self, Self::__Root>;
|
||||
}
|
||||
|
||||
pub trait UnderRoot: InHierarchy {
|
||||
type Root: UnderRoot<IsRoot = TLTrue, Root = Self::Root>;
|
||||
fn into_root(self) -> Self::Root;
|
||||
fn try_from_root(root: Self::Root) -> Result<Self, Self::Root>;
|
||||
}
|
||||
|
||||
impl<T: InHierarchy + UnderRootImpl<T::IsRoot>> UnderRoot for T {
|
||||
type Root = <Self as UnderRootImpl<<Self as InHierarchy>::IsRoot>>::__Root;
|
||||
fn into_root(self) -> Self::Root { self.__into_root() }
|
||||
fn try_from_root(root: Self::Root) -> Result<Self, Self::Root> { Self::__try_from_root(root) }
|
||||
}
|
||||
|
||||
impl<T: InHierarchy<IsRoot = TLTrue>> UnderRootImpl<TLTrue> for T {
|
||||
type __Root = Self;
|
||||
fn __into_root(self) -> Self::__Root { self }
|
||||
fn __try_from_root(root: Self::__Root) -> Result<Self, Self::__Root> { Ok(root) }
|
||||
}
|
||||
|
||||
impl<T: InHierarchy<IsRoot = TLFalse> + Extends> UnderRootImpl<TLFalse> for T {
|
||||
@@ -57,8 +53,4 @@ impl<T: InHierarchy<IsRoot = TLFalse> + Extends> UnderRootImpl<TLFalse> for T {
|
||||
fn __into_root(self) -> Self::__Root {
|
||||
<Self as Into<<Self as Extends>::Parent>>::into(self).into_root()
|
||||
}
|
||||
fn __try_from_root(root: Self::__Root) -> Result<Self, Self::__Root> {
|
||||
let parent = <Self as Extends>::Parent::try_from_root(root)?;
|
||||
parent.clone().try_into().map_err(|_| parent.into_root())
|
||||
}
|
||||
}
|
||||
|
||||
@@ -1,27 +1,30 @@
|
||||
use core::fmt;
|
||||
use std::future::Future;
|
||||
|
||||
use never::Never;
|
||||
|
||||
use super::coding::Coding;
|
||||
use crate::helpers::enc_vec;
|
||||
|
||||
pub trait Request: fmt::Debug + Coding + Sized + 'static {
|
||||
pub trait Request: fmt::Debug + Sized + 'static {
|
||||
type Response: fmt::Debug + Coding + 'static;
|
||||
}
|
||||
|
||||
pub async fn respond<R: Request>(_: &R, rep: R::Response) -> Vec<u8> { enc_vec(&rep).await }
|
||||
pub async fn respond_with<R: Request, F: Future<Output = R::Response>>(
|
||||
r: &R,
|
||||
f: impl FnOnce(&R) -> F,
|
||||
) -> Vec<u8> {
|
||||
respond(r, f(r).await).await
|
||||
}
|
||||
pub fn respond<R: Request>(_: &R, rep: R::Response) -> Vec<u8> { enc_vec(&rep) }
|
||||
|
||||
pub trait Channel: 'static {
|
||||
type Req: Coding + Sized + 'static;
|
||||
type Notif: Coding + Sized + 'static;
|
||||
}
|
||||
impl Channel for Never {
|
||||
type Notif = Never;
|
||||
type Req = Never;
|
||||
}
|
||||
|
||||
pub trait MsgSet: Sync + 'static {
|
||||
type In: Channel;
|
||||
type Out: Channel;
|
||||
}
|
||||
impl MsgSet for Never {
|
||||
type In = Never;
|
||||
type Out = Never;
|
||||
}
|
||||
|
||||
@@ -6,11 +6,12 @@ edition = "2024"
|
||||
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
|
||||
|
||||
[dependencies]
|
||||
ordered-float = "5.0.0"
|
||||
ordered-float = "5.1.0"
|
||||
orchid-api-traits = { version = "0.1.0", path = "../orchid-api-traits" }
|
||||
orchid-api-derive = { version = "0.1.0", path = "../orchid-api-derive" }
|
||||
futures = { version = "0.3.31", features = ["std"], default-features = false }
|
||||
itertools = "0.14.0"
|
||||
unsync-pipe = { version = "0.2.0", path = "../unsync-pipe" }
|
||||
|
||||
[dev-dependencies]
|
||||
test_executors = "0.3.2"
|
||||
test_executors = "0.4.1"
90
orchid-api/src/binary.rs
Normal file
@@ -0,0 +1,90 @@
|
||||
//! # Binary extension definition
|
||||
//!
|
||||
//! A binary extension is a DLL / shared object / dylib with a symbol called
|
||||
//! `orchid_extension_main` which accepts a single argument of type
|
||||
//! [ExtensionContext]. Once that is received, communication continues through
|
||||
//! the channel with the same protocol outlined in [crate::proto]
|
||||
|
||||
use unsync_pipe::{Reader, Writer};
|
||||
|
||||
/// !Send !Sync owned waker
|
||||
///
|
||||
/// This object is [Clone] for convenience but it has `drop` and no `clone` so
|
||||
/// interactions must reflect a single logical owner
|
||||
#[derive(Clone, Copy)]
|
||||
#[repr(C)]
|
||||
pub struct OwnedWakerVT {
|
||||
pub data: *const (),
|
||||
/// `self`
|
||||
pub drop: extern "C" fn(*const ()),
|
||||
/// `self`
|
||||
pub wake: extern "C" fn(*const ()),
|
||||
/// `&self`
|
||||
pub wake_ref: extern "C" fn(*const ()),
|
||||
}
|
||||
|
||||
/// !Send !Sync, equivalent to `&mut Context<'a>`, hence no `drop`.
|
||||
/// When received in [FutureVT::poll], it must not outlive the call.
|
||||
///
|
||||
/// You cannot directly wake using this waker, because such a trampoline would
|
||||
/// pass through the binary interface twice for no reason. An efficient
|
||||
/// implementation should implement that trampoline action internally, whereas
|
||||
/// an inefficient but compliant implementation can clone a fresh waker and use
|
||||
/// it up.
|
||||
#[derive(Clone, Copy)]
|
||||
#[repr(C)]
|
||||
pub struct FutureContextVT {
|
||||
pub data: *const (),
|
||||
/// `&self`
|
||||
pub waker: extern "C" fn(*const ()) -> OwnedWakerVT,
|
||||
}
|
||||
|
||||
/// ABI-stable `Poll<()>`
|
||||
#[derive(Clone, Copy)]
|
||||
#[repr(C)]
|
||||
pub enum UnitPoll {
|
||||
Pending,
|
||||
Ready,
|
||||
}
|
||||
|
||||
/// ABI-stable `Pin<Box<dyn Future<Output = ()>>>`
|
||||
///
|
||||
/// This object is [Clone] for convenience, but it has `drop` and no `clone` so
|
||||
/// interactions must reflect a single logical owner
|
||||
#[derive(Clone, Copy)]
|
||||
#[repr(C)]
|
||||
pub struct FutureVT {
|
||||
pub data: *const (),
|
||||
/// `self`
|
||||
pub drop: extern "C" fn(*const ()),
|
||||
/// `&mut self` Equivalent to [Future::poll]
|
||||
pub poll: extern "C" fn(*const (), FutureContextVT) -> UnitPoll,
|
||||
}
|
||||
|
||||
/// Handle for a runtime that allows its holder to spawn futures across dynamic
|
||||
/// library boundaries
|
||||
#[derive(Clone, Copy)]
|
||||
#[repr(C)]
|
||||
pub struct Spawner {
|
||||
pub data: *const (),
|
||||
/// `self`
|
||||
pub drop: extern "C" fn(*const ()),
|
||||
/// `&self` Add a future to this extension's task
|
||||
pub spawn: extern "C" fn(*const (), FutureVT),
|
||||
}
|
||||
|
||||
/// Extension context.
|
||||
///
|
||||
/// This struct is a plain old value; all of the contained values have a
|
||||
/// distinct `drop` member
|
||||
#[repr(C)]
|
||||
pub struct ExtensionContext {
|
||||
/// Spawns tasks associated with this extension
|
||||
pub spawner: Spawner,
|
||||
/// serialized [crate::HostExtChannel]
|
||||
pub input: Reader,
|
||||
/// serialized [crate::ExtHostChannel]
|
||||
pub output: Writer,
|
||||
/// UTF-8 log stream directly to log service.
|
||||
pub log: Writer,
|
||||
}
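// Illustrative sketch, not part of the changeset: how an extension entry point
// might hand its main task to the host. The function name and shape are
// assumptions; building the FutureVT is left to a helper such as orchid-base's
// future_to_vt. Only the vtable calls mirror the definitions above.
fn run_extension(ctx: ExtensionContext, main_task: FutureVT) {
  // `spawn` is documented as `&self`, so the spawner stays usable afterwards.
  (ctx.spawner.spawn)(ctx.spawner.data, main_task);
  // `input`, `output` and `log` would then be wired into the proto channel.
}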
|
||||
@@ -43,17 +43,6 @@ pub struct Acquire(pub SysId, pub ExprTicket);
|
||||
#[extends(ExprNotif, ExtHostNotif)]
|
||||
pub struct Release(pub SysId, pub ExprTicket);
|
||||
|
||||
/// Decrement the reference count for one system and increment it for another,
|
||||
/// to indicate passing an owned reference. Equivalent to [Acquire] followed by
|
||||
/// [Release].
|
||||
#[derive(Clone, Copy, Debug, Hash, PartialEq, Eq, Coding, Hierarchy)]
|
||||
#[extends(ExprNotif, ExtHostNotif)]
|
||||
pub struct Move {
|
||||
pub dec: SysId,
|
||||
pub inc: SysId,
|
||||
pub expr: ExprTicket,
|
||||
}
|
||||
|
||||
/// A description of a new expression. It is used as the return value of
|
||||
/// [crate::atom::Call] or [crate::atom::CallRef], or a constant in the
|
||||
/// [crate::tree::Tree].
|
||||
@@ -67,8 +56,9 @@ pub enum ExpressionKind {
|
||||
/// template
|
||||
Arg(u64),
|
||||
/// Insert the specified host-expression in the template here. When the clause
|
||||
/// is used in the const tree, this variant is forbidden.
|
||||
Slot { tk: ExprTicket, by_value: bool },
|
||||
/// is used in the const tree, this variant is forbidden. The ticket held
|
||||
/// within is always owning. To avoid a leak, it must be deserialized.
|
||||
Slot(ExprTicket),
|
||||
/// The lhs must be fully processed before the rhs can be processed.
|
||||
/// Equivalent to Haskell's function of the same name
|
||||
Seq(Box<Expression>, Box<Expression>),
|
||||
@@ -115,11 +105,12 @@ impl Request for Inspect {
|
||||
type Response = Inspected;
|
||||
}
|
||||
|
||||
#[derive(Clone, Copy, Debug, Hash, PartialEq, Eq, Coding, Hierarchy)]
|
||||
#[derive(Clone, Debug, Coding, Hierarchy)]
|
||||
#[extends(ExtHostReq)]
|
||||
#[extendable]
|
||||
pub enum ExprReq {
|
||||
Inspect(Inspect),
|
||||
Create(Create),
|
||||
}
|
||||
|
||||
#[derive(Clone, Copy, Debug, Hash, PartialEq, Eq, Coding, Hierarchy)]
|
||||
@@ -128,5 +119,11 @@ pub enum ExprReq {
|
||||
pub enum ExprNotif {
|
||||
Acquire(Acquire),
|
||||
Release(Release),
|
||||
Move(Move),
|
||||
}
|
||||
|
||||
#[derive(Clone, Debug, Coding, Hierarchy)]
|
||||
#[extends(ExprReq, ExtHostReq)]
|
||||
pub struct Create(pub Expression);
|
||||
impl Request for Create {
|
||||
type Response = ExprTicket;
|
||||
}
|
||||
|
||||
@@ -3,7 +3,7 @@ use std::num::NonZeroU64;
|
||||
use orchid_api_derive::{Coding, Hierarchy};
|
||||
use orchid_api_traits::Request;
|
||||
|
||||
use crate::{ExtHostReq, HostExtReq};
|
||||
use crate::{ExtHostNotif, ExtHostReq, HostExtReq};
|
||||
|
||||
/// Intern requests sent by the replica to the master. These requests are
|
||||
/// repeatable.
|
||||
@@ -71,18 +71,21 @@ pub struct TStr(pub NonZeroU64);
|
||||
pub struct TStrv(pub NonZeroU64);
|
||||
|
||||
/// A request to sweep the replica. The master will not be sweeped until all
|
||||
/// replicas respond, as it must retain everything the replicas retained
|
||||
/// replicas respond. For efficiency, replicas should make sure to send the
|
||||
/// [Sweeped] notif before returning.
|
||||
#[derive(Clone, Copy, Debug, Coding, Hierarchy)]
|
||||
#[extends(HostExtReq)]
|
||||
pub struct Sweep;
|
||||
impl Request for Sweep {
|
||||
type Response = Retained;
|
||||
type Response = ();
|
||||
}
|
||||
|
||||
/// List of keys in this replica that couldn't be sweeped because local
|
||||
/// datastructures reference their value.
|
||||
#[derive(Clone, Debug, Coding)]
|
||||
pub struct Retained {
|
||||
/// List of keys in this replica that were removed during a sweep. This may have
|
||||
/// been initiated via a [Sweep] request, but can also be triggered by the
|
||||
/// replica autonomously.
|
||||
#[derive(Clone, Debug, Coding, Hierarchy)]
|
||||
#[extends(ExtHostNotif)]
|
||||
pub struct Sweeped {
|
||||
pub strings: Vec<TStr>,
|
||||
pub vecs: Vec<TStrv>,
|
||||
}
|
||||
|
||||
@@ -1,3 +1,4 @@
|
||||
pub mod binary;
|
||||
mod lexer;
|
||||
pub use lexer::*;
|
||||
mod format;
|
||||
|
||||
@@ -17,6 +17,8 @@ pub enum Location {
|
||||
Gen(CodeGenInfo),
|
||||
/// Range and file
|
||||
SourceRange(SourceRange),
|
||||
/// Multiple locations
|
||||
Multi(Vec<Location>),
|
||||
}
|
||||
|
||||
#[derive(Clone, Debug, Coding)]
|
||||
|
||||
@@ -1,14 +1,30 @@
|
||||
use std::collections::HashMap;
|
||||
|
||||
use orchid_api_derive::{Coding, Hierarchy};
|
||||
|
||||
use crate::ExtHostNotif;
|
||||
use crate::{ExtHostNotif, TStr};
|
||||
|
||||
/// Describes what to do with a log stream.
|
||||
/// Log streams are unstructured utf8 text unless otherwise stated.
|
||||
#[derive(Clone, Debug, Coding, PartialEq, Eq, Hash)]
|
||||
pub enum LogStrategy {
|
||||
StdErr,
|
||||
File(String),
|
||||
/// Context-dependent default stream, often stderr
|
||||
Default,
|
||||
/// A file on the local filesystem
|
||||
File { path: String, append: bool },
|
||||
/// Discard any log output
|
||||
Discard,
|
||||
}
|
||||
|
||||
#[derive(Clone, Debug, Coding)]
|
||||
pub struct Logger {
|
||||
pub routing: HashMap<String, LogStrategy>,
|
||||
pub default: Option<LogStrategy>,
|
||||
}
|
||||
|
||||
#[derive(Clone, Debug, Coding, Hierarchy)]
|
||||
#[extends(ExtHostNotif)]
|
||||
pub struct Log(pub String);
|
||||
pub struct Log {
|
||||
pub category: TStr,
|
||||
pub message: String,
|
||||
}
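// Illustrative sketch, not part of the changeset: building the new routing
// config. The category names and file path are invented; only the struct and
// enum shapes come from this file.
fn example_logger() -> Logger {
  Logger {
    routing: HashMap::from([
      ("parser".to_string(), LogStrategy::File { path: "parser.log".to_string(), append: true }),
      ("ipc".to_string(), LogStrategy::Discard),
    ]),
    // Categories without an explicit route presumably fall back to this.
    default: Some(LogStrategy::Default),
  }
}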
|
||||
|
||||
@@ -22,51 +22,49 @@
|
||||
//! be preserved. Toolkits must ensure that the client code is able to observe
|
||||
//! the ordering of messages.
|
||||
|
||||
use std::io;
|
||||
use std::pin::Pin;
|
||||
|
||||
use futures::{AsyncRead, AsyncWrite};
|
||||
use futures::{AsyncRead, AsyncWrite, AsyncWriteExt};
|
||||
use orchid_api_derive::{Coding, Hierarchy};
|
||||
use orchid_api_traits::{Channel, Decode, Encode, MsgSet, Request, read_exact, write_exact};
|
||||
use orchid_api_traits::{Channel, Decode, Encode, MsgSet, Request, read_exact};
|
||||
|
||||
use crate::{atom, expr, interner, lexer, logging, parser, system, tree};
|
||||
use crate::{Sweeped, atom, expr, interner, lexer, logging, parser, system, tree};
|
||||
|
||||
static HOST_INTRO: &[u8] = b"Orchid host, binary API v0\n";
|
||||
#[derive(Clone, Debug)]
|
||||
pub struct HostHeader {
|
||||
pub log_strategy: logging::LogStrategy,
|
||||
pub msg_logs: logging::LogStrategy,
|
||||
pub logger: logging::Logger,
|
||||
}
|
||||
impl Decode for HostHeader {
|
||||
async fn decode<R: AsyncRead + ?Sized>(mut read: Pin<&mut R>) -> Self {
|
||||
read_exact(read.as_mut(), HOST_INTRO).await;
|
||||
Self {
|
||||
log_strategy: logging::LogStrategy::decode(read.as_mut()).await,
|
||||
msg_logs: logging::LogStrategy::decode(read.as_mut()).await,
|
||||
}
|
||||
async fn decode<R: AsyncRead + ?Sized>(mut read: Pin<&mut R>) -> io::Result<Self> {
|
||||
read_exact(read.as_mut(), HOST_INTRO).await?;
|
||||
Ok(Self { logger: logging::Logger::decode(read).await? })
|
||||
}
|
||||
}
|
||||
impl Encode for HostHeader {
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, mut write: Pin<&mut W>) {
|
||||
write_exact(write.as_mut(), HOST_INTRO).await;
|
||||
self.log_strategy.encode(write.as_mut()).await;
|
||||
self.msg_logs.encode(write.as_mut()).await
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, mut write: Pin<&mut W>) -> io::Result<()> {
|
||||
write.write_all(HOST_INTRO).await?;
|
||||
self.logger.encode(write.as_mut()).await
|
||||
}
|
||||
}
|
||||
|
||||
static EXT_INTRO: &[u8] = b"Orchid extension, binary API v0\n";
|
||||
#[derive(Clone, Debug)]
|
||||
pub struct ExtensionHeader {
|
||||
pub name: String,
|
||||
pub systems: Vec<system::SystemDecl>,
|
||||
}
|
||||
impl Decode for ExtensionHeader {
|
||||
async fn decode<R: AsyncRead + ?Sized>(mut read: Pin<&mut R>) -> Self {
|
||||
read_exact(read.as_mut(), EXT_INTRO).await;
|
||||
Self { name: String::decode(read.as_mut()).await, systems: Vec::decode(read).await }
|
||||
async fn decode<R: AsyncRead + ?Sized>(mut read: Pin<&mut R>) -> io::Result<Self> {
|
||||
read_exact(read.as_mut(), EXT_INTRO).await?;
|
||||
Ok(Self { name: String::decode(read.as_mut()).await?, systems: Vec::decode(read).await? })
|
||||
}
|
||||
}
|
||||
impl Encode for ExtensionHeader {
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, mut write: Pin<&mut W>) {
|
||||
write_exact(write.as_mut(), EXT_INTRO).await;
|
||||
self.name.encode(write.as_mut()).await;
|
||||
async fn encode<W: AsyncWrite + ?Sized>(&self, mut write: Pin<&mut W>) -> io::Result<()> {
|
||||
write.write_all(EXT_INTRO).await?;
|
||||
self.name.encode(write.as_mut()).await?;
|
||||
self.systems.encode(write).await
|
||||
}
|
||||
}
|
||||
@@ -99,6 +97,7 @@ pub enum ExtHostReq {
|
||||
pub enum ExtHostNotif {
|
||||
ExprNotif(expr::ExprNotif),
|
||||
Log(logging::Log),
|
||||
Sweeped(Sweeped),
|
||||
}
|
||||
|
||||
pub struct ExtHostChannel;
|
||||
@@ -155,22 +154,22 @@ impl MsgSet for HostMsgSet {
|
||||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use std::collections::HashMap;
|
||||
|
||||
use orchid_api_traits::enc_vec;
|
||||
use ordered_float::NotNan;
|
||||
use test_executors::spin_on;
|
||||
|
||||
use super::*;
|
||||
use crate::Logger;
|
||||
|
||||
#[test]
|
||||
fn host_header_enc() {
|
||||
spin_on(async {
|
||||
let hh = HostHeader {
|
||||
log_strategy: logging::LogStrategy::File("SomeFile".to_string()),
|
||||
msg_logs: logging::LogStrategy::File("SomeFile".to_string()),
|
||||
};
|
||||
let mut enc = &enc_vec(&hh).await[..];
|
||||
let hh = HostHeader { logger: Logger { routing: HashMap::new(), default: None } };
|
||||
let mut enc = &enc_vec(&hh)[..];
|
||||
eprintln!("Encoded to {enc:?}");
|
||||
HostHeader::decode(Pin::new(&mut enc)).await;
|
||||
HostHeader::decode(Pin::new(&mut enc)).await.unwrap();
|
||||
assert_eq!(enc, []);
|
||||
})
|
||||
}
|
||||
@@ -187,9 +186,9 @@ mod tests {
|
||||
priority: NotNan::new(1f64).unwrap(),
|
||||
}],
|
||||
};
|
||||
let mut enc = &enc_vec(&eh).await[..];
|
||||
let mut enc = &enc_vec(&eh)[..];
|
||||
eprintln!("Encoded to {enc:?}");
|
||||
ExtensionHeader::decode(Pin::new(&mut enc)).await;
|
||||
ExtensionHeader::decode(Pin::new(&mut enc)).await.unwrap();
|
||||
assert_eq!(enc, [])
|
||||
})
|
||||
}
|
||||
|
||||
@@ -1,7 +1,7 @@
|
||||
use std::collections::HashMap;
|
||||
use std::fmt;
|
||||
use std::num::NonZeroU64;
|
||||
use std::ops::Range;
|
||||
use std::rc::Rc;
|
||||
|
||||
use orchid_api_derive::{Coding, Hierarchy};
|
||||
use orchid_api_traits::Request;
|
||||
@@ -47,7 +47,7 @@ pub enum Token {
|
||||
/// NewExpr(Bottom) because it fails in dead branches too.
|
||||
Bottom(Vec<OrcError>),
|
||||
/// A comment
|
||||
Comment(Rc<String>),
|
||||
Comment(TStr),
|
||||
}
|
||||
|
||||
#[derive(Copy, Clone, Debug, Hash, PartialEq, Eq, Coding)]
|
||||
@@ -56,6 +56,15 @@ pub enum Paren {
|
||||
Square,
|
||||
Curly,
|
||||
}
|
||||
impl fmt::Display for Paren {
|
||||
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
|
||||
write!(f, "{}", match self {
|
||||
Self::Round => "()",
|
||||
Self::Curly => "{}",
|
||||
Self::Square => "[]",
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Copy, Clone, Debug, Hash, PartialEq, Eq, PartialOrd, Ord, Coding)]
|
||||
pub struct TreeId(pub NonZeroU64);
|
||||
|
||||
@@ -6,12 +6,14 @@ edition = "2024"
|
||||
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
|
||||
|
||||
[dependencies]
|
||||
unsync-pipe = { version = "0.2.0", path = "../unsync-pipe" }
|
||||
async-fn-stream = { version = "0.1.0", path = "../async-fn-stream" }
|
||||
async-once-cell = "0.5.4"
|
||||
bound = "0.6.0"
|
||||
derive_destructure = "1.0.0"
|
||||
dyn-clone = "1.0.20"
|
||||
futures = { version = "0.3.31", features = ["std"], default-features = false }
|
||||
hashbrown = "0.16.0"
|
||||
hashbrown = "0.16.1"
|
||||
itertools = "0.14.0"
|
||||
lazy_static = "1.5.0"
|
||||
never = "0.1.0"
|
||||
@@ -19,10 +21,13 @@ num-traits = "0.2.19"
|
||||
orchid-api = { version = "0.1.0", path = "../orchid-api" }
|
||||
orchid-api-derive = { version = "0.1.0", path = "../orchid-api-derive" }
|
||||
orchid-api-traits = { version = "0.1.0", path = "../orchid-api-traits" }
|
||||
ordered-float = "5.0.0"
|
||||
regex = "1.11.2"
|
||||
rust-embed = "8.7.2"
|
||||
some_executor = "0.6.1"
|
||||
ordered-float = "5.1.0"
|
||||
regex = "1.12.2"
|
||||
rust-embed = "8.9.0"
|
||||
substack = "1.1.1"
|
||||
test_executors = "0.3.5"
|
||||
trait-set = "0.3.0"
|
||||
task-local = "0.1.0"
|
||||
|
||||
[dev-dependencies]
|
||||
futures = "0.3.31"
|
||||
test_executors = "0.4.1"
118
orchid-base/src/binary.rs
Normal file
@@ -0,0 +1,118 @@
|
||||
use std::pin::Pin;
|
||||
use std::rc::Rc;
|
||||
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};
|
||||
|
||||
use orchid_api::binary::{FutureContextVT, FutureVT, OwnedWakerVT, UnitPoll};
|
||||
|
||||
type WideBox = Box<dyn Future<Output = ()>>;
|
||||
|
||||
static OWNED_VTABLE: RawWakerVTable = RawWakerVTable::new(
|
||||
|data| {
|
||||
let data = unsafe { Rc::<OwnedWakerVT>::from_raw(data as *const _) };
|
||||
let val = RawWaker::new(Rc::into_raw(data.clone()) as *const (), &OWNED_VTABLE);
|
||||
// Clone must create a duplicate of the Rc, so it has to be un-leaked, cloned,
|
||||
// then leaked again.
|
||||
let _ = Rc::into_raw(data);
|
||||
val
|
||||
},
|
||||
|data| {
|
||||
// Wake must awaken the task and then clean up the state, so the waker must be
|
||||
// un-leaked
|
||||
let data = unsafe { Rc::<OwnedWakerVT>::from_raw(data as *const _) };
|
||||
(data.wake)(data.data);
|
||||
},
|
||||
|data| {
|
||||
// Wake-by-ref must awaken the task while preserving the future, so the Rc is
|
||||
// untouched
|
||||
let data = unsafe { (data as *const OwnedWakerVT).as_ref() }.unwrap();
|
||||
(data.wake_ref)(data.data);
|
||||
},
|
||||
|data| {
|
||||
// Drop must clean up the state, so the waker must be un-leaked
|
||||
let data = unsafe { Rc::<OwnedWakerVT>::from_raw(data as *const _) };
|
||||
(data.drop)(data.data);
|
||||
},
|
||||
);
|
||||
|
||||
struct BorrowedWakerData<'a> {
|
||||
go_around: &'a mut bool,
|
||||
cx: FutureContextVT,
|
||||
}
|
||||
static BORROWED_VTABLE: RawWakerVTable = RawWakerVTable::new(
|
||||
|data| {
|
||||
let data = unsafe { (data as *mut BorrowedWakerData).as_mut() }.unwrap();
|
||||
let owned_data = Rc::<OwnedWakerVT>::new((data.cx.waker)(data.cx.data));
|
||||
RawWaker::new(Rc::into_raw(owned_data) as *const (), &OWNED_VTABLE)
|
||||
},
|
||||
|data| *unsafe { (data as *mut BorrowedWakerData).as_mut() }.unwrap().go_around = true,
|
||||
|data| *unsafe { (data as *mut BorrowedWakerData).as_mut() }.unwrap().go_around = true,
|
||||
|_data| {},
|
||||
);
|
||||
|
||||
/// Convert a future to a binary-compatible format that can be sent across
|
||||
/// dynamic library boundaries
|
||||
pub fn future_to_vt<Fut: Future<Output = ()> + 'static>(fut: Fut) -> FutureVT {
|
||||
let wide_box = Box::new(fut) as WideBox;
|
||||
let data = Box::into_raw(Box::new(wide_box));
|
||||
extern "C" fn drop(raw: *const ()) {
|
||||
std::mem::drop(unsafe { Box::<WideBox>::from_raw(raw as *mut _) })
|
||||
}
|
||||
extern "C" fn poll(raw: *const (), cx: FutureContextVT) -> UnitPoll {
|
||||
let mut this = unsafe { Pin::new_unchecked(&mut **(raw as *mut WideBox).as_mut().unwrap()) };
|
||||
loop {
|
||||
let mut go_around = false;
|
||||
let borrowed_waker = unsafe {
|
||||
Waker::from_raw(RawWaker::new(
|
||||
&mut BorrowedWakerData { go_around: &mut go_around, cx } as *mut _ as *const (),
|
||||
&BORROWED_VTABLE,
|
||||
))
|
||||
};
|
||||
let mut ctx = Context::from_waker(&borrowed_waker);
|
||||
let result = this.as_mut().poll(&mut ctx);
|
||||
if matches!(result, Poll::Ready(())) {
|
||||
break UnitPoll::Ready;
|
||||
}
|
||||
if !go_around {
|
||||
break UnitPoll::Pending;
|
||||
}
|
||||
}
|
||||
}
|
||||
FutureVT { data: data as *const _, drop, poll }
|
||||
}
|
||||
|
||||
struct VirtualFuture {
|
||||
vt: FutureVT,
|
||||
}
|
||||
impl Unpin for VirtualFuture {}
|
||||
impl Future for VirtualFuture {
|
||||
type Output = ();
|
||||
fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
|
||||
extern "C" fn waker(raw: *const ()) -> OwnedWakerVT {
|
||||
let waker = unsafe { (raw as *mut Context).as_mut() }.unwrap().waker().clone();
|
||||
let data = Box::into_raw(Box::<Waker>::new(waker)) as *const ();
|
||||
return OwnedWakerVT { data, drop, wake, wake_ref };
|
||||
extern "C" fn drop(raw: *const ()) {
|
||||
std::mem::drop(unsafe { Box::<Waker>::from_raw(raw as *mut Waker) })
|
||||
}
|
||||
extern "C" fn wake(raw: *const ()) {
|
||||
unsafe { Box::<Waker>::from_raw(raw as *mut Waker) }.wake();
|
||||
}
|
||||
extern "C" fn wake_ref(raw: *const ()) {
|
||||
unsafe { (raw as *mut Waker).as_mut() }.unwrap().wake_by_ref();
|
||||
}
|
||||
}
|
||||
let cx = FutureContextVT { data: cx as *mut Context as *const (), waker };
|
||||
let result = (self.vt.poll)(self.vt.data, cx);
|
||||
match result {
|
||||
UnitPoll::Pending => Poll::Pending,
|
||||
UnitPoll::Ready => Poll::Ready(()),
|
||||
}
|
||||
}
|
||||
}
|
||||
impl Drop for VirtualFuture {
|
||||
fn drop(&mut self) { (self.vt.drop)(self.vt.data) }
|
||||
}
|
||||
|
||||
/// Receive a future sent across dynamic library boundaries and convert it into
|
||||
/// an owned object
|
||||
pub fn vt_to_future(vt: FutureVT) -> impl Future<Output = ()> { VirtualFuture { vt } }
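// Illustrative sketch, not part of the changeset: a same-process round trip.
// No dylib boundary is crossed; test_executors::spin_on stands in for whatever
// executor ends up driving the reconstructed future.
#[test]
fn vt_round_trip() {
  let vt = future_to_vt(async {
    // work that the receiving side's executor would normally drive
  });
  // VirtualFuture forwards poll and drop through the vtable, so this behaves
  // like the original boxed future.
  test_executors::spin_on(vt_to_future(vt));
}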
|
||||
@@ -1,34 +0,0 @@
|
||||
use std::ops::Deref;
|
||||
use std::rc::Rc;
|
||||
|
||||
use futures::future::LocalBoxFuture;
|
||||
|
||||
use crate::api;
|
||||
|
||||
pub type Spawner = Rc<dyn Fn(LocalBoxFuture<'static, ()>)>;
|
||||
|
||||
/// The 3 primary contact points with an extension are
|
||||
/// - send a message
|
||||
/// - wait for a message to arrive
|
||||
/// - wait for the extension to stop after exit (this is the implicit Drop)
|
||||
///
|
||||
/// There are no ordering guarantees about these
|
||||
pub trait ExtPort {
|
||||
#[must_use]
|
||||
fn send<'a>(&'a self, msg: &'a [u8]) -> LocalBoxFuture<'a, ()>;
|
||||
#[must_use]
|
||||
fn recv(&self) -> LocalBoxFuture<'_, Option<Vec<u8>>>;
|
||||
}
|
||||
|
||||
pub struct ExtInit {
|
||||
pub header: api::ExtensionHeader,
|
||||
pub port: Box<dyn ExtPort>,
|
||||
}
|
||||
impl ExtInit {
|
||||
pub async fn send(&self, msg: &[u8]) { self.port.send(msg).await }
|
||||
pub async fn recv(&self) -> Option<Vec<u8>> { self.port.recv().await }
|
||||
}
|
||||
impl Deref for ExtInit {
|
||||
type Target = api::ExtensionHeader;
|
||||
fn deref(&self) -> &Self::Target { &self.header }
|
||||
}
|
||||
@@ -2,13 +2,16 @@ use std::cell::RefCell;
|
||||
use std::ffi::OsStr;
|
||||
use std::fmt;
|
||||
use std::ops::Add;
|
||||
use std::rc::Rc;
|
||||
use std::sync::Arc;
|
||||
|
||||
use futures::FutureExt;
|
||||
use futures::future::join_all;
|
||||
use itertools::Itertools;
|
||||
use task_local::task_local;
|
||||
|
||||
use crate::api;
|
||||
use crate::interner::{Interner, Tok};
|
||||
use crate::interner::{IStr, es, is};
|
||||
use crate::location::Pos;
|
||||
|
||||
/// A point of interest in resolving the error, such as the point where
|
||||
@@ -24,10 +27,10 @@ impl ErrPos {
|
||||
pub fn new(msg: &str, position: Pos) -> Self {
|
||||
Self { message: Some(Arc::new(msg.to_string())), position }
|
||||
}
|
||||
async fn from_api(api: &api::ErrLocation, i: &Interner) -> Self {
|
||||
async fn from_api(api: &api::ErrLocation) -> Self {
|
||||
Self {
|
||||
message: Some(api.message.clone()).filter(|s| !s.is_empty()),
|
||||
position: Pos::from_api(&api.location, i).await,
|
||||
position: Pos::from_api(&api.location).await,
|
||||
}
|
||||
}
|
||||
fn to_api(&self) -> api::ErrLocation {
|
||||
@@ -51,7 +54,7 @@ impl fmt::Display for ErrPos {
|
||||
|
||||
#[derive(Clone, Debug)]
|
||||
pub struct OrcErr {
|
||||
pub description: Tok<String>,
|
||||
pub description: IStr,
|
||||
pub message: Arc<String>,
|
||||
pub positions: Vec<ErrPos>,
|
||||
}
|
||||
@@ -63,16 +66,16 @@ impl OrcErr {
|
||||
locations: self.positions.iter().map(ErrPos::to_api).collect(),
|
||||
}
|
||||
}
|
||||
async fn from_api(api: &api::OrcError, i: &Interner) -> Self {
|
||||
async fn from_api(api: &api::OrcError) -> Self {
|
||||
Self {
|
||||
description: Tok::from_api(api.description, i).await,
|
||||
description: es(api.description).await,
|
||||
message: api.message.clone(),
|
||||
positions: join_all(api.locations.iter().map(|e| ErrPos::from_api(e, i))).await,
|
||||
positions: join_all(api.locations.iter().map(ErrPos::from_api)).await,
|
||||
}
|
||||
}
|
||||
}
|
||||
impl PartialEq<Tok<String>> for OrcErr {
|
||||
fn eq(&self, other: &Tok<String>) -> bool { self.description == *other }
|
||||
impl PartialEq<IStr> for OrcErr {
|
||||
fn eq(&self, other: &IStr) -> bool { self.description == *other }
|
||||
}
|
||||
impl From<OrcErr> for Vec<OrcErr> {
|
||||
fn from(value: OrcErr) -> Self { vec![value] }
|
||||
@@ -122,12 +125,10 @@ impl OrcErrv {
|
||||
self.0.iter().flat_map(|e| e.positions.iter().cloned())
|
||||
}
|
||||
pub fn to_api(&self) -> Vec<api::OrcError> { self.0.iter().map(OrcErr::to_api).collect() }
|
||||
pub async fn from_api<'a>(
|
||||
api: impl IntoIterator<Item = &'a api::OrcError>,
|
||||
i: &Interner,
|
||||
) -> Self {
|
||||
Self(join_all(api.into_iter().map(|e| OrcErr::from_api(e, i))).await)
|
||||
pub async fn from_api<'a>(api: impl IntoIterator<Item = &'a api::OrcError>) -> Self {
|
||||
Self(join_all(api.into_iter().map(OrcErr::from_api)).await)
|
||||
}
|
||||
pub fn iter(&self) -> impl Iterator<Item = OrcErr> + '_ { self.0.iter().cloned() }
|
||||
}
|
||||
impl From<OrcErr> for OrcErrv {
|
||||
fn from(value: OrcErr) -> Self { Self(vec![value]) }
|
||||
@@ -191,12 +192,12 @@ macro_rules! join_ok {
|
||||
(@VALUES) => { Ok(()) };
|
||||
}
|
||||
|
||||
pub fn mk_errv_floating(description: Tok<String>, message: impl AsRef<str>) -> OrcErrv {
|
||||
pub fn mk_errv_floating(description: IStr, message: impl AsRef<str>) -> OrcErrv {
|
||||
mk_errv::<Pos>(description, message, [])
|
||||
}
|
||||
|
||||
pub fn mk_errv<I: Into<ErrPos>>(
|
||||
description: Tok<String>,
|
||||
description: IStr,
|
||||
message: impl AsRef<str>,
|
||||
posv: impl IntoIterator<Item = I>,
|
||||
) -> OrcErrv {
|
||||
@@ -210,45 +211,71 @@ pub fn mk_errv<I: Into<ErrPos>>(
|
||||
|
||||
pub async fn async_io_err<I: Into<ErrPos>>(
|
||||
err: std::io::Error,
|
||||
i: &Interner,
|
||||
posv: impl IntoIterator<Item = I>,
|
||||
) -> OrcErrv {
|
||||
mk_errv(i.i(&err.kind().to_string()).await, err.to_string(), posv)
|
||||
mk_errv(is(&err.kind().to_string()).await, err.to_string(), posv)
|
||||
}
|
||||
|
||||
pub async fn os_str_to_string<'a, I: Into<ErrPos>>(
|
||||
str: &'a OsStr,
|
||||
i: &Interner,
|
||||
pub async fn os_str_to_string<I: Into<ErrPos>>(
|
||||
str: &OsStr,
|
||||
posv: impl IntoIterator<Item = I>,
|
||||
) -> OrcRes<&'a str> {
|
||||
) -> OrcRes<&str> {
|
||||
match str.to_str() {
|
||||
Some(str) => Ok(str),
|
||||
None => Err(mk_errv(
|
||||
i.i("Non-unicode string").await,
|
||||
is("Non-unicode string").await,
|
||||
format!("{str:?} is not representable as unicode"),
|
||||
posv,
|
||||
)),
|
||||
}
|
||||
}
|
||||
|
||||
pub struct Reporter {
|
||||
errors: RefCell<Vec<OrcErr>>,
|
||||
#[derive(Clone, Default)]
|
||||
struct Reporter {
|
||||
errors: Rc<RefCell<Vec<OrcErr>>>,
|
||||
}
|
||||
|
||||
impl Reporter {
|
||||
pub fn report(&self, e: impl Into<OrcErrv>) { self.errors.borrow_mut().extend(e.into()) }
|
||||
pub fn new() -> Self { Self { errors: RefCell::new(vec![]) } }
|
||||
pub fn errv(self) -> Option<OrcErrv> { OrcErrv::new(self.errors.into_inner()).ok() }
|
||||
pub fn merge<T>(self, res: OrcRes<T>) -> OrcRes<T> {
|
||||
match (res, self.errv()) {
|
||||
(res, None) => res,
|
||||
(Ok(_), Some(errv)) => Err(errv),
|
||||
(Err(e), Some(errv)) => Err(e + errv),
|
||||
}
|
||||
task_local! {
|
||||
static REPORTER: Reporter;
|
||||
}
|
||||
|
||||
/// Run the future with a new reporter, and return all errors reported within.
|
||||
///
|
||||
/// If your future returns [OrcRes], see [try_with_reporter]
|
||||
pub async fn with_reporter<T>(fut: impl Future<Output = T>) -> OrcRes<T> {
|
||||
try_with_reporter(fut.map(Ok)).await
|
||||
}
|
||||
|
||||
/// Run the future with a new reporter, and return all errors either returned or
|
||||
/// reported by it
|
||||
///
|
||||
/// If your future may report errors but always returns an approximate value,
|
||||
/// see [with_reporter]
|
||||
pub async fn try_with_reporter<T>(fut: impl Future<Output = OrcRes<T>>) -> OrcRes<T> {
|
||||
let rep = Reporter::default();
|
||||
let res = REPORTER.scope(rep.clone(), fut).await;
|
||||
let errors = rep.errors.take();
|
||||
match (res, &errors[..]) {
|
||||
(Ok(t), []) => Ok(t),
|
||||
(Ok(_), [_, ..]) => Err(OrcErrv::new(errors).unwrap()),
|
||||
(Err(e), _) => Err(e.extended(errors)),
|
||||
}
|
||||
pub fn is_empty(&self) -> bool { self.errors.borrow().is_empty() }
|
||||
}
|
||||
|
||||
impl Default for Reporter {
|
||||
fn default() -> Self { Self::new() }
|
||||
pub async fn is_erroring() -> bool {
|
||||
(REPORTER.try_with(|r| !r.errors.borrow().is_empty()))
|
||||
.expect("Sidechannel errors must be caught by a reporter")
|
||||
}
|
||||
|
||||
/// Report an error that is fatal and prevents a correct output, but
|
||||
/// still allows the current task to continue and produce an approximate output.
|
||||
/// This can be used for collecting several independent errors in one pass
/// instead of aborting at the first failure.
|
||||
pub fn report(e: impl Into<OrcErrv>) {
|
||||
let errv = e.into();
|
||||
REPORTER.try_with(|r| r.errors.borrow_mut().extend(errv.clone())).unwrap_or_else(|_| {
|
||||
panic!(
|
||||
"Unhandled error! Sidechannel errors must be caught by an enclosing call to with_reporter.\n\
|
||||
Error: {errv}"
|
||||
)
|
||||
})
|
||||
}
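// Illustrative sketch, not part of the changeset: a pass that records every
// problem through the task-local reporter while still producing a count. The
// category string is invented; everything else uses the items defined above.
async fn count_nonempty(items: &[&str]) -> OrcRes<usize> {
  with_reporter(async {
    let mut ok = 0;
    for item in items {
      if item.is_empty() {
        // Recorded in the enclosing reporter; execution continues.
        report(mk_errv_floating(is("example.empty_item").await, "empty item"));
      } else {
        ok += 1;
      }
    }
    ok
  })
  .await
}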
|
||||
|
||||
@@ -1,15 +1,17 @@
|
||||
use std::borrow::Borrow;
|
||||
use std::cmp::Ordering;
|
||||
use std::convert::Infallible;
|
||||
use std::future::Future;
|
||||
use std::iter;
|
||||
use std::marker::PhantomData;
|
||||
use std::rc::Rc;
|
||||
use std::str::FromStr;
|
||||
|
||||
use itertools::Itertools;
|
||||
use futures::future::join_all;
|
||||
use itertools::{Itertools, chain};
|
||||
use never::Never;
|
||||
use regex::Regex;
|
||||
|
||||
use crate::interner::Interner;
|
||||
use crate::{api, match_mapping};
|
||||
|
||||
#[derive(Clone, Debug, Hash, PartialEq, Eq)]
|
||||
@@ -45,12 +47,14 @@ impl FmtUnit {
|
||||
}
|
||||
}
|
||||
pub fn sequence(
|
||||
head: &str,
|
||||
delim: &str,
|
||||
tail: &str,
|
||||
seq_bnd: Option<bool>,
|
||||
seq: impl IntoIterator<Item = FmtUnit>,
|
||||
) -> Self {
|
||||
let items = seq.into_iter().collect_vec();
|
||||
FmtUnit::new(Variants::sequence(items.len(), delim, seq_bnd), items)
|
||||
Variants::default().sequence(items.len(), head, delim, tail, seq_bnd).units_own(items)
|
||||
}
|
||||
}
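// Illustrative sketch, not part of the changeset: rendering a short list with
// the new head/delim/tail form. Plain strings convert into FmtUnit, as the
// variants_parse_test below also relies on.
fn bracketed_list() -> String {
  let unit = FmtUnit::sequence("[", ", ", "]", Some(true), ["a".into(), "b".into(), "c".into()]);
  // With the first (bounded) variant chosen, this should print as "[a, b, c]".
  take_first(&unit, true)
}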
|
||||
impl<T> From<T> for FmtUnit
|
||||
@@ -77,9 +81,12 @@ impl FmtElement {
|
||||
pub fn bounded(i: u32) -> Self { Self::sub(i, Some(true)) }
|
||||
pub fn unbounded(i: u32) -> Self { Self::sub(i, Some(false)) }
|
||||
pub fn last(i: u32) -> Self { Self::sub(i, None) }
|
||||
pub fn sequence(len: usize, bounded: Option<bool>) -> impl Iterator<Item = Self> {
|
||||
let len32: u32 = len.try_into().unwrap();
|
||||
(0..len32 - 1).map(FmtElement::unbounded).chain([FmtElement::sub(len32 - 1, bounded)])
|
||||
pub fn sequence(len: usize, bounded: Option<bool>) -> Vec<Self> {
|
||||
match len.try_into().unwrap() {
|
||||
0u32 => vec![],
|
||||
1u32 => vec![FmtElement::sub(0, bounded)],
|
||||
n => (0..n - 1).map(FmtElement::unbounded).chain([FmtElement::sub(n - 1, bounded)]).collect(),
|
||||
}
|
||||
}
|
||||
pub fn from_api(api: &api::FormattingElement) -> Self {
|
||||
match_mapping!(api, api::FormattingElement => FmtElement {
|
||||
@@ -105,10 +112,38 @@ pub struct Variant {
|
||||
|
||||
#[test]
|
||||
fn variants_parse_test() {
|
||||
let vars = Variants::default().bounded("({0})");
|
||||
println!("final: {vars:?}")
|
||||
let vars = Rc::new(Variants::default().bounded("({{{0}}})"));
|
||||
let expected_vars = Rc::new(Variants(vec![Variant {
|
||||
bounded: true,
|
||||
elements: vec![
|
||||
FmtElement::String(Rc::new("({".to_string())),
|
||||
FmtElement::Sub { bounded: Some(false), slot: 0 },
|
||||
FmtElement::String(Rc::new("})".to_string())),
|
||||
],
|
||||
}]));
|
||||
assert_eq!(vars.as_ref(), expected_vars.as_ref());
|
||||
let unit = vars.units(["1".into()]);
|
||||
assert_eq!(unit, FmtUnit {
|
||||
subs: vec![FmtUnit {
|
||||
subs: vec![],
|
||||
variants: Rc::new(Variants(vec![Variant {
|
||||
bounded: true,
|
||||
elements: vec![FmtElement::String(Rc::new("1".to_string()))]
|
||||
}]))
|
||||
}],
|
||||
variants: expected_vars
|
||||
});
|
||||
let str = take_first(&unit, true);
|
||||
assert_eq!(str, "({1})");
|
||||
}
|
||||
|
||||
/// Represents a collection of formatting strings for the same set of parameters
|
||||
/// from which the formatter can choose within their associated constraints.
|
||||
///
|
||||
/// - {0b} can be replaced by any variant of the parameter.
|
||||
/// - {0} can only be replaced by a bounded variant of the parameter
|
||||
/// - {0l} causes the current end restriction to be applied to the parameter.
|
||||
/// This is to be used if the parameter is at the very end of the variant.
|
||||
#[derive(Clone, Debug, Hash, PartialEq, Eq, Default)]
|
||||
pub struct Variants(pub Vec<Variant>);
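An illustrative registration combining the placeholder forms described above; the format strings are invented for this sketch and do not come from the codebase:

// A hypothetical lambda-like node with two print options:
// a parenthesized form valid anywhere ({0b}: any variant of the body is allowed)
// and an open-tailed form for end-of-line positions ({0l}: the body inherits the end restriction).
let lambda_fmt = Variants::default()
  .bounded("(\\x. {0b})")
  .unbounded("\\x. {0l}");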
|
||||
impl Variants {
|
||||
@@ -183,20 +218,40 @@ impl Variants {
|
||||
fn add(&mut self, bounded: bool, s: &'_ str) {
|
||||
self.0.push(Variant { bounded, elements: Self::parse(s) })
|
||||
}
|
||||
// This option is available in all positions
|
||||
/// This option is available in all positions.
|
||||
/// See [Variants] for a description of the format strings
|
||||
pub fn bounded(mut self, s: &'_ str) -> Self {
|
||||
self.add(true, s);
|
||||
self
|
||||
}
|
||||
// This option is only available in positions immediately preceding the end of
|
||||
// the sequence or a parenthesized subsequence.
|
||||
/// This option is only available in positions immediately preceding the end
|
||||
/// of the sequence or a parenthesized subsequence.
|
||||
/// See [Variants] for a description of the format strings
|
||||
pub fn unbounded(mut self, s: &'_ str) -> Self {
|
||||
self.add(false, s);
|
||||
self
|
||||
}
|
||||
pub fn sequence(len: usize, delim: &str, seq_bnd: Option<bool>) -> Rc<Self> {
|
||||
let seq = Itertools::intersperse(FmtElement::sequence(len, seq_bnd), FmtElement::str(delim));
|
||||
Rc::new(Variants(vec![Variant { bounded: true, elements: seq.collect_vec() }]))
|
||||
pub fn sequence(
|
||||
mut self,
|
||||
len: usize,
|
||||
head: &str,
|
||||
delim: &str,
|
||||
tail: &str,
|
||||
seq_bnd: Option<bool>,
|
||||
) -> Self {
|
||||
let seq = chain!(
|
||||
[FmtElement::str(head)],
|
||||
Itertools::intersperse(
|
||||
FmtElement::sequence(len, seq_bnd).into_iter(),
|
||||
FmtElement::str(delim),
|
||||
),
|
||||
[FmtElement::str(tail)],
|
||||
);
|
||||
self.0.push(Variant { bounded: true, elements: seq.collect_vec() });
|
||||
self
|
||||
}
|
||||
pub fn units_own(self, subs: impl IntoIterator<Item = FmtUnit>) -> FmtUnit {
|
||||
FmtUnit::new(Rc::new(self), subs)
|
||||
}
|
||||
pub fn units(self: &Rc<Self>, subs: impl IntoIterator<Item = FmtUnit>) -> FmtUnit {
|
||||
FmtUnit::new(self.clone(), subs)
|
||||
@@ -245,16 +300,16 @@ pub fn take_first(unit: &FmtUnit, bounded: bool) -> String {
|
||||
fill_slots(&first.elements, &unit.subs, 0, bounded)
|
||||
}
|
||||
|
||||
pub async fn take_first_fmt(v: &(impl Format + ?Sized), i: &Interner) -> String {
|
||||
take_first(&v.print(&FmtCtxImpl { i }).await, false)
|
||||
pub async fn take_first_fmt(v: &(impl Format + ?Sized)) -> String {
|
||||
take_first(&v.print(&FmtCtxImpl { _foo: PhantomData }).await, false)
|
||||
}
|
||||
|
||||
#[derive(Default)]
|
||||
pub struct FmtCtxImpl<'a> {
|
||||
pub i: &'a Interner,
|
||||
_foo: PhantomData<&'a ()>,
|
||||
}
|
||||
|
||||
pub trait FmtCtx {
|
||||
fn i(&self) -> &Interner;
|
||||
// fn print_as(&self, p: &(impl Format + ?Sized)) -> impl Future<Output =
|
||||
// String> where Self: Sized {
|
||||
// async {
|
||||
@@ -264,9 +319,7 @@ pub trait FmtCtx {
|
||||
// }
|
||||
// }
|
||||
}
|
||||
impl FmtCtx for FmtCtxImpl<'_> {
|
||||
fn i(&self) -> &Interner { self.i }
|
||||
}
|
||||
impl FmtCtx for FmtCtxImpl<'_> {}
|
||||
|
||||
pub trait Format {
|
||||
#[must_use]
|
||||
@@ -277,4 +330,10 @@ impl Format for Never {
|
||||
}
|
||||
|
||||
/// Format with default strategy. Currently equal to [take_first_fmt]
|
||||
pub async fn fmt(v: &(impl Format + ?Sized), i: &Interner) -> String { take_first_fmt(v, i).await }
|
||||
pub async fn fmt(v: &(impl Format + ?Sized)) -> String { take_first_fmt(v).await }
|
||||
/// Format a sequence with default strategy. Currently equal to [take_first_fmt]
|
||||
pub async fn fmt_v<F: Format + ?Sized>(
|
||||
v: impl IntoIterator<Item: Borrow<F>>,
|
||||
) -> impl Iterator<Item = String> {
|
||||
join_all(v.into_iter().map(|f| async move { take_first_fmt(f.borrow()).await })).await.into_iter()
|
||||
}
|
||||
|
||||
@@ -1,310 +1,382 @@
|
||||
use std::borrow::Borrow;
|
||||
use std::fmt::{Debug, Display};
|
||||
use std::future::Future;
|
||||
use std::hash::BuildHasher as _;
|
||||
use std::num::NonZeroU64;
|
||||
use std::hash::Hash;
|
||||
use std::ops::Deref;
|
||||
use std::rc::Rc;
|
||||
use std::sync::atomic;
|
||||
use std::{fmt, hash};
|
||||
|
||||
use futures::lock::Mutex;
|
||||
use hashbrown::{HashMap, HashSet};
|
||||
use itertools::Itertools as _;
|
||||
use orchid_api_traits::Request;
|
||||
use futures::future::LocalBoxFuture;
|
||||
use task_local::task_local;
|
||||
|
||||
use crate::api;
|
||||
use crate::reqnot::{DynRequester, Requester};
|
||||
|
||||
/// Clippy crashes while verifying `Tok: Sized` without this and I cba to create
|
||||
/// a minimal example
|
||||
#[derive(Clone)]
|
||||
struct ForceSized<T>(T);
|
||||
pub trait IStrHandle: AsRef<str> {
|
||||
fn rc(&self) -> Rc<String>;
|
||||
}
|
||||
pub trait IStrvHandle: AsRef<[IStr]> {
|
||||
fn rc(&self) -> Rc<Vec<IStr>>;
|
||||
}
|
||||
|
||||
#[derive(Clone)]
|
||||
pub struct Tok<T: Interned> {
|
||||
data: Rc<T>,
|
||||
marker: ForceSized<T::Marker>,
|
||||
pub struct IStr(pub api::TStr, pub Rc<dyn IStrHandle>);
|
||||
impl IStr {
|
||||
/// Obtain a unique ID for this interned data.
|
||||
///
|
||||
/// NOTICE: the ID is guaranteed to be the same for any interned instance of
|
||||
/// the same value only as long as at least one instance exists. If a value is
|
||||
/// no longer interned, the interner is free to forget about it.
|
||||
pub fn to_api(&self) -> api::TStr { self.0 }
|
||||
pub fn rc(&self) -> Rc<String> { self.1.rc() }
|
||||
}
|
||||
impl<T: Interned> Tok<T> {
|
||||
pub fn new(data: Rc<T>, marker: T::Marker) -> Self { Self { data, marker: ForceSized(marker) } }
|
||||
pub fn to_api(&self) -> T::Marker { self.marker.0 }
|
||||
pub async fn from_api<M>(marker: M, i: &Interner) -> Self
|
||||
where M: InternMarker<Interned = T> {
|
||||
i.ex(marker).await
|
||||
}
|
||||
pub fn rc(&self) -> Rc<T> { self.data.clone() }
|
||||
impl Deref for IStr {
|
||||
type Target = str;
|
||||
fn deref(&self) -> &Self::Target { self.1.as_ref().as_ref() }
|
||||
}
|
||||
impl<T: Interned> Deref for Tok<T> {
|
||||
type Target = T;
|
||||
|
||||
fn deref(&self) -> &Self::Target { self.data.as_ref() }
|
||||
impl Eq for IStr {}
|
||||
impl PartialEq for IStr {
|
||||
fn eq(&self, other: &Self) -> bool { self.0 == other.0 }
|
||||
}
|
||||
impl<T: Interned> Ord for Tok<T> {
|
||||
fn cmp(&self, other: &Self) -> std::cmp::Ordering { self.to_api().cmp(&other.to_api()) }
|
||||
impl Hash for IStr {
|
||||
fn hash<H: hash::Hasher>(&self, state: &mut H) { self.0.hash(state) }
|
||||
}
|
||||
impl<T: Interned> PartialOrd for Tok<T> {
|
||||
fn partial_cmp(&self, other: &Self) -> Option<std::cmp::Ordering> { Some(self.cmp(other)) }
|
||||
impl Display for IStr {
|
||||
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { write!(f, "{}", self.deref()) }
|
||||
}
|
||||
impl<T: Interned> Eq for Tok<T> {}
|
||||
impl<T: Interned> PartialEq for Tok<T> {
|
||||
fn eq(&self, other: &Self) -> bool { self.cmp(other).is_eq() }
|
||||
impl Debug for IStr {
|
||||
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { write!(f, "IStr({self}") }
|
||||
}
|
||||
impl<T: Interned> hash::Hash for Tok<T> {
|
||||
fn hash<H: hash::Hasher>(&self, state: &mut H) { self.to_api().hash(state) }
|
||||
#[derive(Clone)]
|
||||
pub struct IStrv(pub api::TStrv, pub Rc<dyn IStrvHandle>);
|
||||
impl IStrv {
|
||||
/// Obtain a unique ID for this interned data.
|
||||
///
|
||||
/// NOTICE: the ID is guaranteed to be the same for any interned instance of
|
||||
/// the same value only as long as at least one instance exists. If a value is
|
||||
/// no longer interned, the interner is free to forget about it.
|
||||
pub fn to_api(&self) -> api::TStrv { self.0 }
|
||||
pub fn rc(&self) -> Rc<Vec<IStr>> { self.1.rc() }
|
||||
}
|
||||
impl<T: Interned + fmt::Display> fmt::Display for Tok<T> {
|
||||
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
|
||||
write!(f, "{}", &*self.data)
|
||||
}
|
||||
impl Deref for IStrv {
|
||||
type Target = [IStr];
|
||||
fn deref(&self) -> &Self::Target { self.1.as_ref().as_ref() }
|
||||
}
|
||||
impl<T: Interned + fmt::Debug> fmt::Debug for Tok<T> {
|
||||
impl Eq for IStrv {}
|
||||
impl PartialEq for IStrv {
|
||||
fn eq(&self, other: &Self) -> bool { self.0 == other.0 }
|
||||
}
|
||||
impl Hash for IStrv {
|
||||
fn hash<H: hash::Hasher>(&self, state: &mut H) { self.0.0.hash(state) }
|
||||
}
|
||||
impl Display for IStrv {
|
||||
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
|
||||
write!(f, "Token({} -> {:?})", self.to_api().get_id(), self.data.as_ref())
|
||||
}
|
||||
}
|
||||
|
||||
pub trait Interned: Eq + hash::Hash + Clone + fmt::Debug + Internable<Interned = Self> {
|
||||
type Marker: InternMarker<Interned = Self> + Sized;
|
||||
fn intern(
|
||||
self: Rc<Self>,
|
||||
req: &(impl DynRequester<Transfer = api::IntReq> + ?Sized),
|
||||
) -> impl Future<Output = Self::Marker>;
|
||||
fn bimap(interner: &mut TypedInterners) -> &mut Bimap<Self>;
|
||||
}
|
||||
|
||||
pub trait Internable: fmt::Debug {
|
||||
type Interned: Interned;
|
||||
fn get_owned(&self) -> Rc<Self::Interned>;
|
||||
}
|
||||
|
||||
pub trait InternMarker: Copy + PartialEq + Eq + PartialOrd + Ord + hash::Hash + Sized {
|
||||
type Interned: Interned<Marker = Self>;
|
||||
/// Only called on replicas
|
||||
fn resolve(self, i: &Interner) -> impl Future<Output = Tok<Self::Interned>>;
|
||||
fn get_id(self) -> NonZeroU64;
|
||||
fn from_id(id: NonZeroU64) -> Self;
|
||||
}
|
||||
|
||||
impl Interned for String {
|
||||
type Marker = api::TStr;
|
||||
async fn intern(
|
||||
self: Rc<Self>,
|
||||
req: &(impl DynRequester<Transfer = api::IntReq> + ?Sized),
|
||||
) -> Self::Marker {
|
||||
req.request(api::InternStr(self.to_string())).await
|
||||
}
|
||||
fn bimap(interners: &mut TypedInterners) -> &mut Bimap<Self> { &mut interners.strings }
|
||||
}
|
||||
impl InternMarker for api::TStr {
|
||||
type Interned = String;
|
||||
async fn resolve(self, i: &Interner) -> Tok<Self::Interned> {
|
||||
Tok::new(Rc::new(i.0.master.as_ref().unwrap().request(api::ExternStr(self)).await), self)
|
||||
}
|
||||
fn get_id(self) -> NonZeroU64 { self.0 }
|
||||
fn from_id(id: NonZeroU64) -> Self { Self(id) }
|
||||
}
|
||||
impl Internable for str {
|
||||
type Interned = String;
|
||||
fn get_owned(&self) -> Rc<Self::Interned> { Rc::new(self.to_string()) }
|
||||
}
|
||||
impl Internable for String {
|
||||
type Interned = String;
|
||||
fn get_owned(&self) -> Rc<Self::Interned> { Rc::new(self.to_string()) }
|
||||
}
|
||||
|
||||
impl Interned for Vec<Tok<String>> {
|
||||
type Marker = api::TStrv;
|
||||
async fn intern(
|
||||
self: Rc<Self>,
|
||||
req: &(impl DynRequester<Transfer = api::IntReq> + ?Sized),
|
||||
) -> Self::Marker {
|
||||
req.request(api::InternStrv(self.iter().map(|t| t.to_api()).collect())).await
|
||||
}
|
||||
fn bimap(interners: &mut TypedInterners) -> &mut Bimap<Self> { &mut interners.vecs }
|
||||
}
|
||||
impl InternMarker for api::TStrv {
|
||||
type Interned = Vec<Tok<String>>;
|
||||
async fn resolve(self, i: &Interner) -> Tok<Self::Interned> {
|
||||
let rep = i.0.master.as_ref().unwrap().request(api::ExternStrv(self)).await;
|
||||
let data = futures::future::join_all(rep.into_iter().map(|m| i.ex(m))).await;
|
||||
Tok::new(Rc::new(data), self)
|
||||
}
|
||||
fn get_id(self) -> NonZeroU64 { self.0 }
|
||||
fn from_id(id: NonZeroU64) -> Self { Self(id) }
|
||||
}
|
||||
impl Internable for [Tok<String>] {
|
||||
type Interned = Vec<Tok<String>>;
|
||||
fn get_owned(&self) -> Rc<Self::Interned> { Rc::new(self.to_vec()) }
|
||||
}
|
||||
impl<const N: usize> Internable for [Tok<String>; N] {
|
||||
type Interned = Vec<Tok<String>>;
|
||||
fn get_owned(&self) -> Rc<Self::Interned> { Rc::new(self.to_vec()) }
|
||||
}
|
||||
impl Internable for Vec<Tok<String>> {
|
||||
type Interned = Vec<Tok<String>>;
|
||||
fn get_owned(&self) -> Rc<Self::Interned> { Rc::new(self.to_vec()) }
|
||||
}
|
||||
// impl Internable for Vec<api::TStr> {
|
||||
// type Interned = Vec<Tok<String>>;
|
||||
// fn get_owned(&self) -> Arc<Self::Interned> {
|
||||
// Arc::new(self.iter().map(|ts| deintern(*ts)).collect())
|
||||
// }
|
||||
// }
|
||||
// impl Internable for [api::TStr] {
|
||||
// type Interned = Vec<Tok<String>>;
|
||||
// fn get_owned(&self) -> Arc<Self::Interned> {
|
||||
// Arc::new(self.iter().map(|ts| deintern(*ts)).collect())
|
||||
// }
|
||||
// }
|
||||
|
||||
/// The number of references held to any token by the interner.
|
||||
const BASE_RC: usize = 3;
|
||||
|
||||
#[test]
|
||||
fn base_rc_correct() {
|
||||
let tok = Tok::new(Rc::new("foo".to_string()), api::TStr(1.try_into().unwrap()));
|
||||
let mut bimap = Bimap::default();
|
||||
bimap.insert(tok.clone());
|
||||
assert_eq!(Rc::strong_count(&tok.data), BASE_RC + 1, "the bimap plus the current instance");
|
||||
}
|
||||
|
||||
pub struct Bimap<T: Interned> {
|
||||
intern: HashMap<Rc<T>, Tok<T>>,
|
||||
by_id: HashMap<T::Marker, Tok<T>>,
|
||||
}
|
||||
impl<T: Interned> Bimap<T> {
|
||||
pub fn insert(&mut self, token: Tok<T>) {
|
||||
self.intern.insert(token.data.clone(), token.clone());
|
||||
self.by_id.insert(token.to_api(), token);
|
||||
}
|
||||
|
||||
pub fn by_marker(&self, marker: T::Marker) -> Option<Tok<T>> { self.by_id.get(&marker).cloned() }
|
||||
|
||||
pub fn by_value<Q: Eq + hash::Hash>(&self, q: &Q) -> Option<Tok<T>>
|
||||
where T: Borrow<Q> {
|
||||
(self.intern.raw_entry())
|
||||
.from_hash(self.intern.hasher().hash_one(q), |k| k.as_ref().borrow() == q)
|
||||
.map(|p| p.1.clone())
|
||||
}
|
||||
|
||||
pub fn sweep_replica(&mut self) -> Vec<T::Marker> {
|
||||
(self.intern)
|
||||
.extract_if(|k, _| Rc::strong_count(k) == BASE_RC)
|
||||
.map(|(_, v)| {
|
||||
self.by_id.remove(&v.to_api());
|
||||
v.to_api()
|
||||
})
|
||||
.collect()
|
||||
}
|
||||
|
||||
pub fn sweep_master(&mut self, retained: HashSet<T::Marker>) {
|
||||
self.intern.retain(|k, v| BASE_RC < Rc::strong_count(k) || retained.contains(&v.to_api()))
|
||||
}
|
||||
}
|
||||
|
||||
impl<T: Interned> Default for Bimap<T> {
|
||||
fn default() -> Self { Self { by_id: HashMap::new(), intern: HashMap::new() } }
|
||||
}
|
||||
|
||||
pub trait UpComm {
|
||||
fn up<R: Request>(&self, req: R) -> R::Response;
|
||||
}
|
||||
|
||||
#[derive(Default)]
|
||||
pub struct TypedInterners {
|
||||
strings: Bimap<String>,
|
||||
vecs: Bimap<Vec<Tok<String>>>,
|
||||
}
|
||||
|
||||
#[derive(Default)]
|
||||
pub struct InternerData {
|
||||
interners: Mutex<TypedInterners>,
|
||||
master: Option<Box<dyn DynRequester<Transfer = api::IntReq>>>,
|
||||
}
|
||||
#[derive(Clone, Default)]
|
||||
pub struct Interner(Rc<InternerData>);
|
||||
impl Interner {
|
||||
pub fn new_master() -> Self { Self::default() }
|
||||
pub fn new_replica(req: impl DynRequester<Transfer = api::IntReq> + 'static) -> Self {
|
||||
Self(Rc::new(InternerData { master: Some(Box::new(req)), interners: Mutex::default() }))
|
||||
}
|
||||
/// Intern some data; query its identifier if not known locally
|
||||
pub async fn i<T: Interned>(&self, t: &(impl Internable<Interned = T> + ?Sized)) -> Tok<T> {
|
||||
let data = t.get_owned();
|
||||
let mut g = self.0.interners.lock().await;
|
||||
let typed = T::bimap(&mut g);
|
||||
if let Some(tok) = typed.by_value(&data) {
|
||||
return tok;
|
||||
let mut iter = self.deref().iter();
|
||||
match iter.next() {
|
||||
None => return Ok(()),
|
||||
Some(s) => write!(f, "{s}")?,
|
||||
}
|
||||
let marker = match &self.0.master {
|
||||
Some(c) => data.clone().intern(&**c).await,
|
||||
None =>
|
||||
T::Marker::from_id(NonZeroU64::new(ID.fetch_add(1, atomic::Ordering::Relaxed)).unwrap()),
|
||||
};
|
||||
let tok = Tok::new(data, marker);
|
||||
T::bimap(&mut g).insert(tok.clone());
|
||||
tok
|
||||
}
|
||||
/// Extern an identifier; query the data it represents if not known locally
|
||||
pub async fn ex<M: InternMarker>(&self, marker: M) -> Tok<M::Interned> {
|
||||
if let Some(tok) = M::Interned::bimap(&mut *self.0.interners.lock().await).by_marker(marker) {
|
||||
return tok;
|
||||
for s in iter {
|
||||
write!(f, "::{s}")?
|
||||
}
|
||||
assert!(self.0.master.is_some(), "ID not in local interner and this is master");
|
||||
let token = marker.resolve(self).await;
|
||||
M::Interned::bimap(&mut *self.0.interners.lock().await).insert(token.clone());
|
||||
token
|
||||
}
|
||||
pub async fn sweep_replica(&self) -> api::Retained {
|
||||
assert!(self.0.master.is_some(), "Not a replica");
|
||||
let mut g = self.0.interners.lock().await;
|
||||
api::Retained { strings: g.strings.sweep_replica(), vecs: g.vecs.sweep_replica() }
|
||||
}
|
||||
pub async fn sweep_master(&self, retained: api::Retained) {
|
||||
assert!(self.0.master.is_none(), "Not master");
|
||||
let mut g = self.0.interners.lock().await;
|
||||
g.strings.sweep_master(retained.strings.into_iter().collect());
|
||||
g.vecs.sweep_master(retained.vecs.into_iter().collect());
|
||||
Ok(())
|
||||
}
|
||||
}
|
||||
impl fmt::Debug for Interner {
|
||||
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
|
||||
write!(f, "Interner{{ replica: {} }}", self.0.master.is_none())
|
||||
}
|
||||
impl Debug for IStrv {
|
||||
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { write!(f, "IStrv({self})") }
|
||||
}
|
||||
|
||||
static ID: atomic::AtomicU64 = atomic::AtomicU64::new(1);
|
||||
|
||||
pub fn merge_retained(into: &mut api::Retained, from: &api::Retained) {
|
||||
into.strings = into.strings.iter().chain(&from.strings).copied().unique().collect();
|
||||
into.vecs = into.vecs.iter().chain(&from.vecs).copied().unique().collect();
|
||||
pub trait InternerSrv {
|
||||
fn is<'a>(&'a self, v: &'a str) -> LocalBoxFuture<'a, IStr>;
|
||||
fn es(&self, t: api::TStr) -> LocalBoxFuture<'_, IStr>;
|
||||
fn iv<'a>(&'a self, v: &'a [IStr]) -> LocalBoxFuture<'a, IStrv>;
|
||||
fn ev(&self, t: api::TStrv) -> LocalBoxFuture<'_, IStrv>;
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
mod test {
|
||||
use std::num::NonZero;
|
||||
use std::pin::Pin;
|
||||
task_local! {
|
||||
static INTERNER: Rc<dyn InternerSrv>;
|
||||
}
|
||||
|
||||
use orchid_api_traits::{Decode, enc_vec};
|
||||
use test_executors::spin_on;
|
||||
pub async fn with_interner<F: Future>(val: Rc<dyn InternerSrv>, fut: F) -> F::Output {
|
||||
INTERNER.scope(val, fut).await
|
||||
}
|
||||
|
||||
use super::*;
|
||||
fn get_interner() -> Rc<dyn InternerSrv> {
|
||||
INTERNER.try_with(|i| i.clone()).expect("Interner not initialized")
|
||||
}
|
||||
|
||||
pub async fn is(v: &str) -> IStr { get_interner().is(v).await }
|
||||
pub async fn iv(v: &[IStr]) -> IStrv { get_interner().iv(v).await }
|
||||
pub async fn es(v: api::TStr) -> IStr { get_interner().es(v).await }
|
||||
pub async fn ev(v: api::TStrv) -> IStrv { get_interner().ev(v).await }
|
||||
|
||||
pub mod local_interner {
|
||||
use std::borrow::Borrow;
|
||||
use std::cell::RefCell;
|
||||
use std::fmt::Debug;
|
||||
use std::future;
|
||||
use std::hash::{BuildHasher, Hash};
|
||||
use std::num::NonZeroU64;
|
||||
use std::rc::{Rc, Weak};
|
||||
|
||||
use futures::future::LocalBoxFuture;
|
||||
use hashbrown::hash_table::{Entry, OccupiedEntry, VacantEntry};
|
||||
use hashbrown::{DefaultHashBuilder, HashTable};
|
||||
use orchid_api_traits::Coding;
|
||||
|
||||
use super::{IStr, IStrHandle, IStrv, IStrvHandle, InternerSrv};
|
||||
use crate::api;
|
||||
|
||||
#[test]
|
||||
fn test_i() {
|
||||
let i = Interner::new_master();
|
||||
let _: Tok<String> = spin_on(i.i("foo"));
|
||||
let _: Tok<Vec<Tok<String>>> = spin_on(i.i(&[spin_on(i.i("bar")), spin_on(i.i("baz"))]));
|
||||
/// Associated types and methods for parallel concepts between scalar and
|
||||
/// vector interning
|
||||
pub trait InternableCard: 'static + Sized + Default + Debug {
|
||||
/// API representation of an interner key
|
||||
type Token: Clone + Copy + Debug + Hash + Eq + PartialOrd + Ord + Coding + 'static;
|
||||
/// Owned version of interned value physically held by `'static` interner
|
||||
/// and token
|
||||
type Data: 'static + Borrow<Self::Borrow> + Eq + Hash + Debug;
|
||||
/// Borrowed version of interned value placed in intern queries to avoid a
|
||||
/// copy
|
||||
type Borrow: ToOwned<Owned = Self::Data> + ?Sized + Eq + Hash + Debug;
|
||||
/// Smart object handed out by the interner for storage and comparison in
|
||||
/// third party code. [IStr] or [IStrv]
|
||||
type Interned: Clone + Debug;
|
||||
/// Create smart object from token for fast comparison and a handle for
|
||||
/// everything else incl. virtual drop
|
||||
fn new_interned(token: Self::Token, handle: Rc<Handle<Self>>) -> Self::Interned;
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_coding() {
|
||||
spin_on(async {
|
||||
let coded = api::TStr(NonZero::new(3u64).unwrap());
|
||||
let mut enc = &enc_vec(&coded).await[..];
|
||||
api::TStr::decode(Pin::new(&mut enc)).await;
|
||||
assert_eq!(enc, [], "Did not consume all of {enc:?}")
|
||||
})
|
||||
#[derive(Default, Debug)]
|
||||
pub struct StrBranch;
|
||||
impl InternableCard for StrBranch {
|
||||
type Data = String;
|
||||
type Token = api::TStr;
|
||||
type Borrow = str;
|
||||
type Interned = IStr;
|
||||
fn new_interned(t: Self::Token, h: Rc<Handle<Self>>) -> Self::Interned { IStr(t, h) }
|
||||
}
|
||||
|
||||
#[derive(Default, Debug)]
|
||||
pub struct StrvBranch;
|
||||
impl InternableCard for StrvBranch {
|
||||
type Data = Vec<IStr>;
|
||||
type Token = api::TStrv;
|
||||
type Borrow = [IStr];
|
||||
type Interned = IStrv;
|
||||
fn new_interned(t: Self::Token, h: Rc<Handle<Self>>) -> Self::Interned { IStrv(t, h) }
|
||||
}
|
||||
|
||||
/// Pairs interned data with its internment key
|
||||
#[derive(Debug)]
|
||||
struct Data<B: InternableCard> {
|
||||
token: B::Token,
|
||||
data: Rc<B::Data>,
|
||||
}
|
||||
impl<B: InternableCard> Clone for Data<B> {
|
||||
fn clone(&self) -> Self { Self { token: self.token, data: self.data.clone() } }
|
||||
}
|
||||
|
||||
/// Implementor for the trait objects held by [IStr] and [IStrv]
|
||||
pub struct Handle<B: InternableCard> {
|
||||
data: Data<B>,
|
||||
parent: Weak<RefCell<IntData<B>>>,
|
||||
}
|
||||
impl IStrHandle for Handle<StrBranch> {
|
||||
fn rc(&self) -> Rc<String> { self.data.data.clone() }
|
||||
}
|
||||
impl AsRef<str> for Handle<StrBranch> {
|
||||
fn as_ref(&self) -> &str { self.data.data.as_ref().as_ref() }
|
||||
}
|
||||
impl IStrvHandle for Handle<StrvBranch> {
|
||||
fn rc(&self) -> Rc<Vec<IStr>> { self.data.data.clone() }
|
||||
}
|
||||
impl AsRef<[IStr]> for Handle<StrvBranch> {
|
||||
fn as_ref(&self) -> &[IStr] { self.data.data.as_ref().as_ref() }
|
||||
}
|
||||
impl<B: InternableCard> Drop for Handle<B> {
|
||||
fn drop(&mut self) {
|
||||
let Some(parent) = self.parent.upgrade() else { return };
|
||||
if let Entry::Occupied(ent) =
|
||||
parent.borrow_mut().entry_by_data(self.data.data.as_ref().borrow())
|
||||
{
|
||||
ent.remove();
|
||||
}
|
||||
if let Entry::Occupied(ent) = parent.borrow_mut().entry_by_tok(self.data.token) {
|
||||
ent.remove();
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/// Information retained about an interned token indexed both by key and
|
||||
/// value.
|
||||
struct Rec<B: InternableCard> {
|
||||
/// This reference is weak, but the [Drop] handler of [Handle] removes all
|
||||
/// [Rec]s from the interner so it is guaranteed to be live.
|
||||
handle: Weak<Handle<B>>,
|
||||
/// Keys for indexing from either table
|
||||
data: Data<B>,
|
||||
}
|
||||
|
||||
/// Read data from an occupied entry in an interner. The equivalent insert
|
||||
/// command is [insert]
|
||||
fn read<B: InternableCard>(entry: OccupiedEntry<'_, Rec<B>>) -> B::Interned {
|
||||
let hand = entry.get().handle.upgrade().expect("Found entry but handle already dropped");
|
||||
B::new_interned(entry.get().data.token, hand)
|
||||
}
|
||||
|
||||
/// Insert some data into an entry borrowed from this same interner.
|
||||
/// The equivalent read command is [read]
|
||||
fn insert<B: InternableCard>(entry: VacantEntry<'_, Rec<B>>, handle: Rc<Handle<B>>) {
|
||||
entry.insert(Rec { data: handle.data.clone(), handle: Rc::downgrade(&handle) });
|
||||
}
|
||||
|
||||
#[derive(Default)]
|
||||
struct IntData<B: InternableCard> {
|
||||
by_tok: HashTable<Rec<B>>,
|
||||
by_data: HashTable<Rec<B>>,
|
||||
hasher: DefaultHashBuilder,
|
||||
}
|
||||
impl<B: InternableCard> IntData<B> {
|
||||
fn entry_by_data(&mut self, query: &B::Borrow) -> Entry<'_, Rec<B>> {
|
||||
self.by_data.entry(
|
||||
self.hasher.hash_one(query),
|
||||
|rec| rec.data.data.as_ref().borrow() == query,
|
||||
|rec| self.hasher.hash_one(rec.data.data.as_ref().borrow()),
|
||||
)
|
||||
}
|
||||
fn entry_by_tok(&mut self, token: B::Token) -> Entry<'_, Rec<B>> {
|
||||
self.by_tok.entry(
|
||||
self.hasher.hash_one(token),
|
||||
|rec| rec.data.token == token,
|
||||
|rec| self.hasher.hash_one(rec.data.token),
|
||||
)
|
||||
}
|
||||
}
|
||||
|
||||
/// Failing intern command that can be recovered if the value is found
|
||||
/// elsewhere
|
||||
pub struct InternError<'a, B: InternableCard> {
|
||||
int: &'a Int<B>,
|
||||
query: &'a B::Borrow,
|
||||
}
|
||||
impl<B: InternableCard> InternError<'_, B> {
|
||||
/// If a racing write populates the entry, the continuation returns that
|
||||
/// value and discards its argument
|
||||
pub fn set_if_empty(self, token: B::Token) -> B::Interned {
|
||||
let mut int_data = self.int.0.borrow_mut();
|
||||
match int_data.entry_by_data(self.query) {
|
||||
Entry::Occupied(ent) => read(ent),
|
||||
Entry::Vacant(ent) => {
|
||||
let hand = self.int.mk_handle(Data { token, data: Rc::new(self.query.to_owned()) });
|
||||
insert(ent, hand.clone());
|
||||
let Entry::Vacant(other_ent) = int_data.entry_by_tok(token) else {
|
||||
panic!("Data and key tables out of sync")
|
||||
};
|
||||
insert(other_ent, hand.clone());
|
||||
B::new_interned(token, hand)
|
||||
},
|
||||
}
|
||||
}
|
||||
}
|
||||
impl<B: InternableCard> Debug for InternError<'_, B> {
|
||||
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
|
||||
f.debug_tuple("InternEntry").field(&self.query).finish()
|
||||
}
|
||||
}
|
||||
|
||||
/// Failing extern command that can be recovered if the value is found
|
||||
/// elsewhere
|
||||
pub struct ExternError<'a, B: InternableCard> {
|
||||
int: &'a Int<B>,
|
||||
token: B::Token,
|
||||
}
|
||||
impl<B: InternableCard> ExternError<'_, B> {
|
||||
/// If a racing write populates the entry, the continuation returns that
|
||||
/// value and discards its argument
|
||||
pub fn set_if_empty(&self, data: Rc<B::Data>) -> B::Interned {
|
||||
let mut int_data = self.int.0.borrow_mut();
|
||||
match int_data.entry_by_tok(self.token) {
|
||||
Entry::Occupied(ent) => read(ent),
|
||||
Entry::Vacant(ent) => {
|
||||
let hand = self.int.mk_handle(Data { token: self.token, data: data.clone() });
|
||||
insert(ent, hand.clone());
|
||||
let Entry::Vacant(other_ent) = int_data.entry_by_data(data.as_ref().borrow()) else {
|
||||
panic!("Data and key tables out of sync")
|
||||
};
|
||||
insert(other_ent, hand.clone());
|
||||
B::new_interned(self.token, hand)
|
||||
},
|
||||
}
|
||||
}
|
||||
}
|
||||
impl<B: InternableCard> Debug for ExternError<'_, B> {
|
||||
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
|
||||
f.debug_tuple("ExternEntry").field(&self.token).finish()
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Default)]
|
||||
pub struct Int<B: InternableCard>(Rc<RefCell<IntData<B>>>);
|
||||
impl<B: InternableCard> Int<B> {
|
||||
fn mk_handle(&self, data: Data<B>) -> Rc<Handle<B>> {
|
||||
Rc::new(Handle { data: data.clone(), parent: Rc::downgrade(&self.0.clone()) })
|
||||
}
|
||||
|
||||
/// Look up by value, or yield to figure out its ID from elsewhere
|
||||
pub fn i<'a>(&'a self, query: &'a B::Borrow) -> Result<B::Interned, InternError<'a, B>> {
|
||||
if let Entry::Occupied(val) = self.0.borrow_mut().entry_by_data(query) {
|
||||
return Ok(read(val));
|
||||
}
|
||||
Err(InternError { int: self, query })
|
||||
}
|
||||
|
||||
/// Look up by key or yield to figure out its value from elsewhere
|
||||
pub fn e(&self, token: B::Token) -> Result<B::Interned, ExternError<'_, B>> {
|
||||
if let Entry::Occupied(ent) = self.0.borrow_mut().entry_by_tok(token) {
|
||||
return Ok(read(ent));
|
||||
}
|
||||
Err(ExternError { int: self, token })
|
||||
}
|
||||
}
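A sketch of the miss/recover flow for the low-level table, with the token chosen by hand rather than through `with_new_id`:

// Intern "foo": the first lookup misses, so assign a fresh token ourselves.
let strings: Int<StrBranch> = Int::default();
let foo = match strings.i("foo") {
  Ok(found) => found,
  Err(miss) => miss.set_if_empty(api::TStr(NonZeroU64::new(1).unwrap())),
};
// Both directions now resolve to the same handle, compared by token.
assert_eq!(strings.i("foo").unwrap(), foo);
assert_eq!(strings.e(foo.0).unwrap(), foo);
assert_eq!(&*foo, "foo");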
|
||||
|
||||
thread_local! {
|
||||
static NEXT_ID: RefCell<u64> = 0.into();
|
||||
}
|
||||
|
||||
fn with_new_id<T>(fun: impl FnOnce(NonZeroU64) -> T) -> T {
|
||||
fun(
|
||||
NonZeroU64::new(NEXT_ID.with_borrow_mut(|id| {
|
||||
*id += 1;
|
||||
*id
|
||||
}))
|
||||
.unwrap(),
|
||||
)
|
||||
}
|
||||
|
||||
#[derive(Default)]
|
||||
struct LocalInterner {
|
||||
str: Int<StrBranch>,
|
||||
strv: Int<StrvBranch>,
|
||||
}
|
||||
impl InternerSrv for LocalInterner {
|
||||
fn is<'a>(&'a self, v: &'a str) -> LocalBoxFuture<'a, IStr> {
|
||||
match self.str.i(v) {
|
||||
Ok(int) => Box::pin(future::ready(int)),
|
||||
Err(e) => with_new_id(|id| Box::pin(future::ready(e.set_if_empty(api::TStr(id))))),
|
||||
}
|
||||
}
|
||||
fn es(&self, t: api::TStr) -> LocalBoxFuture<'_, IStr> {
|
||||
Box::pin(future::ready(self.str.e(t).expect("Unrecognized token cannot be externed")))
|
||||
}
|
||||
fn iv<'a>(&'a self, v: &'a [IStr]) -> LocalBoxFuture<'a, IStrv> {
|
||||
match self.strv.i(v) {
|
||||
Ok(int) => Box::pin(future::ready(int)),
|
||||
Err(e) => with_new_id(|id| Box::pin(future::ready(e.set_if_empty(api::TStrv(id))))),
|
||||
}
|
||||
}
|
||||
fn ev(&self, t: orchid_api::TStrv) -> LocalBoxFuture<'_, IStrv> {
|
||||
Box::pin(future::ready(self.strv.e(t).expect("Unrecognized token cannot be externed")))
|
||||
}
|
||||
}
|
||||
|
||||
/// Creates a basic thread-local interner for testing and root role.
|
||||
pub fn local_interner() -> Rc<dyn InternerSrv> { Rc::<LocalInterner>::default() }
|
||||
}
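Putting the pieces together, roughly how a test or the host side would scope the task-local interner; `spin_on` is assumed from `test_executors` as in the tests elsewhere in this file:

spin_on(with_interner(local_interner(), async {
  let foo = is("foo").await;
  let bar = is("bar").await;
  let path = iv(&[foo.clone(), bar]).await;
  assert_eq!(&*foo, "foo");
  assert_eq!(path.len(), 2); // IStrv derefs to [IStr]
  assert_eq!(es(foo.to_api()).await, foo); // token -> string round trip
}));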
|
||||
|
||||
@@ -1,9 +1,9 @@
|
||||
pub use async_once_cell;
|
||||
use orchid_api as api;
|
||||
|
||||
pub mod binary;
|
||||
pub mod box_cow;
|
||||
pub mod boxed_iter;
|
||||
pub mod builtin;
|
||||
pub mod char_filter;
|
||||
pub mod clone;
|
||||
pub mod combine;
|
||||
@@ -14,6 +14,7 @@ pub mod id_store;
|
||||
pub mod interner;
|
||||
pub mod iter_utils;
|
||||
pub mod join;
|
||||
mod localset;
|
||||
pub mod location;
|
||||
pub mod logging;
|
||||
mod match_mapping;
|
||||
@@ -25,6 +26,7 @@ pub mod pure_seq;
|
||||
pub mod reqnot;
|
||||
pub mod sequence;
|
||||
pub mod side;
|
||||
pub mod stash;
|
||||
mod tl_cache;
|
||||
pub mod tokens;
|
||||
pub mod tree;
|
||||
|
||||
48
orchid-base/src/localset.rs
Normal file
@@ -0,0 +1,48 @@
|
||||
use std::collections::VecDeque;
|
||||
use std::pin::Pin;
|
||||
use std::task::Poll;
|
||||
|
||||
use futures::StreamExt;
|
||||
use futures::channel::mpsc::{UnboundedReceiver, UnboundedSender, unbounded};
|
||||
use futures::future::LocalBoxFuture;
|
||||
|
||||
pub struct LocalSet<'a, E> {
|
||||
receiver: UnboundedReceiver<LocalBoxFuture<'a, Result<(), E>>>,
|
||||
pending: VecDeque<LocalBoxFuture<'a, Result<(), E>>>,
|
||||
}
|
||||
impl<'a, E> LocalSet<'a, E> {
|
||||
pub fn new() -> (UnboundedSender<LocalBoxFuture<'a, Result<(), E>>>, Self) {
|
||||
let (sender, receiver) = unbounded();
|
||||
(sender, Self { receiver, pending: VecDeque::new() })
|
||||
}
|
||||
}
|
||||
impl<E> Future for LocalSet<'_, E> {
|
||||
type Output = Result<(), E>;
|
||||
fn poll(self: Pin<&mut Self>, cx: &mut std::task::Context<'_>) -> Poll<Self::Output> {
|
||||
let this = self.get_mut();
|
||||
let mut any_pending = false;
|
||||
loop {
|
||||
match this.receiver.poll_next_unpin(cx) {
|
||||
Poll::Pending => {
|
||||
any_pending = true;
|
||||
break;
|
||||
},
|
||||
Poll::Ready(None) => break,
|
||||
Poll::Ready(Some(fut)) => this.pending.push_back(fut),
|
||||
}
|
||||
}
|
||||
let count = this.pending.len();
|
||||
for _ in 0..count {
|
||||
let mut req = this.pending.pop_front().unwrap();
|
||||
match req.as_mut().poll(cx) {
|
||||
Poll::Ready(Ok(())) => (),
|
||||
Poll::Ready(Err(e)) => return Poll::Ready(Err(e)),
|
||||
Poll::Pending => {
|
||||
any_pending = true;
|
||||
this.pending.push_back(req)
|
||||
},
|
||||
}
|
||||
}
|
||||
if any_pending { Poll::Pending } else { Poll::Ready(Ok(())) }
|
||||
}
|
||||
}
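A sketch of driving the set: push boxed futures through the sender, drop the sender so the channel can close, then await the set (here with `futures::executor::block_on`):

use futures::executor::block_on;

let (tx, set) = LocalSet::new();
tx.unbounded_send(Box::pin(async { Ok::<(), std::io::Error>(()) })).unwrap();
tx.unbounded_send(Box::pin(async { Ok::<(), std::io::Error>(()) })).unwrap();
// Without this the receiver never yields Ready(None) and the set cannot resolve.
drop(tx);
block_on(set).unwrap();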
|
||||
@@ -2,17 +2,18 @@
|
||||
|
||||
use std::fmt;
|
||||
use std::hash::Hash;
|
||||
use std::ops::Range;
|
||||
use std::ops::{Add, AddAssign, Range};
|
||||
|
||||
use futures::future::join_all;
|
||||
use trait_set::trait_set;
|
||||
|
||||
use crate::error::ErrPos;
|
||||
use crate::interner::{Interner, Tok};
|
||||
use crate::interner::{IStr, es, is};
|
||||
use crate::name::Sym;
|
||||
use crate::{api, match_mapping, sym};
|
||||
|
||||
trait_set! {
|
||||
pub trait GetSrc = FnMut(&Sym) -> Tok<String>;
|
||||
pub trait GetSrc = FnMut(&Sym) -> IStr;
|
||||
}
|
||||
|
||||
#[derive(Debug, Clone, PartialEq, Eq)]
|
||||
@@ -25,6 +26,7 @@ pub enum Pos {
|
||||
Gen(CodeGenInfo),
|
||||
/// Range and file
|
||||
SrcRange(SrcRange),
|
||||
Multi(Vec<Pos>),
|
||||
}
|
||||
impl Pos {
|
||||
pub fn pretty_print(&self, get_src: &mut impl GetSrc) -> String {
|
||||
@@ -35,18 +37,20 @@ impl Pos {
|
||||
other => format!("{other:?}"),
|
||||
}
|
||||
}
|
||||
pub async fn from_api(api: &api::Location, i: &Interner) -> Self {
|
||||
pub async fn from_api(api: &api::Location) -> Self {
|
||||
match_mapping!(api, api::Location => Pos {
|
||||
None, Inherit, SlotTarget,
|
||||
Gen(cgi => CodeGenInfo::from_api(cgi, i).await),
|
||||
Gen(cgi => CodeGenInfo::from_api(cgi).await),
|
||||
Multi(v => join_all(v.iter().map(Pos::from_api)).await)
|
||||
} {
|
||||
api::Location::SourceRange(sr) => Self::SrcRange(SrcRange::from_api(sr, i).await)
|
||||
api::Location::SourceRange(sr) => Self::SrcRange(SrcRange::from_api(sr).await)
|
||||
})
|
||||
}
|
||||
pub fn to_api(&self) -> api::Location {
|
||||
match_mapping!(self, Pos => api::Location {
|
||||
None, Inherit, SlotTarget,
|
||||
Gen(cgi.to_api()),
|
||||
Multi(v => v.iter().map(|pos| pos.to_api()).collect()),
|
||||
} {
|
||||
Self::SrcRange(sr) => api::Location::SourceRange(sr.to_api()),
|
||||
})
|
||||
@@ -60,9 +64,36 @@ impl fmt::Display for Pos {
|
||||
Pos::None => f.write_str("N/A"),
|
||||
Pos::Gen(g) => write!(f, "{g}"),
|
||||
Pos::SrcRange(sr) => write!(f, "{sr}"),
|
||||
Pos::Multi(posv) => {
|
||||
write!(f, "{}", posv[0])?;
|
||||
for pos in &posv[1..] {
|
||||
write!(f, "+{}", pos)?;
|
||||
}
|
||||
Ok(())
|
||||
},
|
||||
}
|
||||
}
|
||||
}
|
||||
impl Add for Pos {
|
||||
type Output = Pos;
|
||||
fn add(self, rhs: Self) -> Self::Output {
|
||||
match (self, rhs) {
|
||||
(Pos::Multi(l), Pos::Multi(r)) => Pos::Multi(l.into_iter().chain(r).collect()),
|
||||
(Pos::None, any) => any,
|
||||
(any, Pos::None) => any,
|
||||
(Pos::Multi(v), single) => Pos::Multi(v.into_iter().chain([single]).collect()),
|
||||
(single, Pos::Multi(v)) => Pos::Multi([single].into_iter().chain(v).collect()),
|
||||
(l, r) => Pos::Multi(vec![l, r]),
|
||||
}
|
||||
}
|
||||
}
|
||||
impl AddAssign for Pos {
|
||||
fn add_assign(&mut self, rhs: Self) {
|
||||
let mut tmp = Pos::None;
|
||||
std::mem::swap(&mut tmp, self);
|
||||
*self = tmp + rhs;
|
||||
}
|
||||
}
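A few identities the new `Add`/`AddAssign` impls are intended to satisfy, sketched with the unit variants named in `from_api` above:

// Pos::None is the identity; other leaves accumulate into a flattened Multi.
assert_eq!(Pos::None + Pos::Inherit, Pos::Inherit);
assert_eq!(Pos::Inherit + Pos::SlotTarget, Pos::Multi(vec![Pos::Inherit, Pos::SlotTarget]));
let mut acc = Pos::Inherit;
acc += Pos::SlotTarget;
acc += Pos::None; // no-op
assert_eq!(acc, Pos::Multi(vec![Pos::Inherit, Pos::SlotTarget]));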
|
||||
|
||||
/// Exact source code location. Includes where the code was loaded from, what
|
||||
/// the original source code was, and a byte range.
|
||||
@@ -77,7 +108,7 @@ impl SrcRange {
|
||||
}
|
||||
/// Create a dud [SrcRange] for testing. Its value is unspecified and
|
||||
/// volatile.
|
||||
pub async fn mock(i: &Interner) -> Self { Self { range: 0..1, path: sym!(test; i).await } }
|
||||
pub async fn mock() -> Self { Self { range: 0..1, path: sym!(test) } }
|
||||
/// Path the source text was loaded from
|
||||
pub fn path(&self) -> Sym { self.path.clone() }
|
||||
/// Byte range
|
||||
@@ -102,8 +133,8 @@ impl SrcRange {
|
||||
}
|
||||
}
|
||||
pub fn zw(path: Sym, pos: u32) -> Self { Self { path, range: pos..pos } }
|
||||
pub async fn from_api(api: &api::SourceRange, i: &Interner) -> Self {
|
||||
Self { path: Sym::from_api(api.path, i).await, range: api.range.clone() }
|
||||
pub async fn from_api(api: &api::SourceRange) -> Self {
|
||||
Self { path: Sym::from_api(api.path).await, range: api.range.clone() }
|
||||
}
|
||||
pub fn to_api(&self) -> api::SourceRange {
|
||||
api::SourceRange { path: self.path.to_api(), range: self.range.clone() }
|
||||
@@ -131,24 +162,19 @@ pub struct CodeGenInfo {
|
||||
/// formatted like a Rust namespace
|
||||
pub generator: Sym,
|
||||
/// Unformatted user message with relevant circumstances and parameters
|
||||
pub details: Tok<String>,
|
||||
pub details: IStr,
|
||||
}
|
||||
impl CodeGenInfo {
|
||||
/// A codegen marker with no user message and parameters
|
||||
pub async fn new_short(generator: Sym, i: &Interner) -> Self {
|
||||
Self { generator, details: i.i("").await }
|
||||
}
|
||||
pub async fn new_short(generator: Sym) -> Self { Self { generator, details: is("").await } }
|
||||
/// A codegen marker with a user message or parameters
|
||||
pub async fn new_details(generator: Sym, details: impl AsRef<str>, i: &Interner) -> Self {
|
||||
Self { generator, details: i.i(details.as_ref()).await }
|
||||
pub async fn new_details(generator: Sym, details: impl AsRef<str>) -> Self {
|
||||
Self { generator, details: is(details.as_ref()).await }
|
||||
}
|
||||
/// Syntactic location
|
||||
pub fn pos(&self) -> Pos { Pos::Gen(self.clone()) }
|
||||
pub async fn from_api(api: &api::CodeGenInfo, i: &Interner) -> Self {
|
||||
Self {
|
||||
generator: Sym::from_api(api.generator, i).await,
|
||||
details: Tok::from_api(api.details, i).await,
|
||||
}
|
||||
pub async fn from_api(api: &api::CodeGenInfo) -> Self {
|
||||
Self { generator: Sym::from_api(api.generator).await, details: es(api.details).await }
|
||||
}
|
||||
pub fn to_api(&self) -> api::CodeGenInfo {
|
||||
api::CodeGenInfo { generator: self.generator.to_api(), details: self.details.to_api() }
|
||||
|
||||
@@ -1,35 +1,74 @@
|
||||
use std::any::Any;
|
||||
use std::cell::RefCell;
|
||||
use std::fmt::Arguments;
|
||||
use std::fs::File;
|
||||
use std::io::{Write, stderr};
|
||||
use std::io::Write;
|
||||
use std::rc::Rc;
|
||||
|
||||
pub use api::LogStrategy;
|
||||
use itertools::Itertools;
|
||||
use futures::future::LocalBoxFuture;
|
||||
use task_local::task_local;
|
||||
|
||||
use crate::api;
|
||||
|
||||
#[derive(Clone)]
|
||||
pub struct Logger(api::LogStrategy);
|
||||
impl Logger {
|
||||
pub fn new(strat: api::LogStrategy) -> Self { Self(strat) }
|
||||
pub fn log(&self, msg: impl AsRef<str>) { writeln!(self, "{}", msg.as_ref()) }
|
||||
pub fn strat(&self) -> api::LogStrategy { self.0.clone() }
|
||||
pub fn log_buf(&self, event: impl AsRef<str>, buf: &[u8]) {
|
||||
if std::env::var("ORCHID_LOG_BUFFERS").is_ok_and(|v| !v.is_empty()) {
|
||||
writeln!(self, "{}: [{}]", event.as_ref(), buf.iter().map(|b| format!("{b:02x}")).join(" "))
|
||||
task_local! {
|
||||
static DEFAULT_WRITER: RefCell<Box<dyn Write>>
|
||||
}
|
||||
|
||||
/// Set the stream used for [api::LogStrategy::Default]. If not set,
|
||||
/// [std::io::stderr] will be used.
|
||||
pub async fn with_default_stream<F: Future>(stderr: impl Write + 'static, fut: F) -> F::Output {
|
||||
DEFAULT_WRITER.scope(RefCell::new(Box::new(stderr)), fut).await
|
||||
}
|
||||
|
||||
pub trait LogWriter {
|
||||
fn write_fmt<'a>(&'a self, fmt: Arguments<'a>) -> LocalBoxFuture<'a, ()>;
|
||||
}
|
||||
|
||||
pub trait Logger: Any {
|
||||
fn writer(&self, category: &str) -> Rc<dyn LogWriter>;
|
||||
fn strat(&self, category: &str) -> api::LogStrategy;
|
||||
fn is_active(&self, category: &str) -> bool {
|
||||
!matches!(self.strat(category), api::LogStrategy::Discard)
|
||||
}
|
||||
}
|
||||
|
||||
task_local! {
|
||||
static LOGGER: Rc<dyn Logger>;
|
||||
}
|
||||
|
||||
pub async fn with_logger<F: Future>(logger: impl Logger + 'static, fut: F) -> F::Output {
|
||||
LOGGER.scope(Rc::new(logger), fut).await
|
||||
}
|
||||
|
||||
pub fn log(category: &str) -> Rc<dyn LogWriter> {
|
||||
LOGGER.try_with(|l| l.writer(category)).expect("Logger not set!")
|
||||
}
|
||||
|
||||
pub fn get_logger() -> Rc<dyn Logger> { LOGGER.try_with(|l| l.clone()).expect("Logger not set!") }
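A sketch of how a task might be scoped with a logger and write through a category writer; `TestLogger` from the test module below serves as the sink, and `spin_on` is assumed as in the other test code:

spin_on(with_logger(test::TestLogger::new(async |line: String| eprintln!("log: {line}")), async {
  if get_logger().is_active("codec") {
    log("codec").write_fmt(format_args!("stream opened")).await;
  }
}));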
|
||||
|
||||
pub mod test {
|
||||
use std::fmt::Arguments;
|
||||
use std::rc::Rc;
|
||||
|
||||
use futures::future::LocalBoxFuture;
|
||||
|
||||
use crate::clone;
|
||||
use crate::logging::{LogWriter, Logger};
|
||||
|
||||
#[derive(Clone)]
|
||||
pub struct TestLogger(Rc<dyn Fn(String) -> LocalBoxFuture<'static, ()>>);
|
||||
impl LogWriter for TestLogger {
|
||||
fn write_fmt<'a>(&'a self, fmt: Arguments<'a>) -> LocalBoxFuture<'a, ()> {
|
||||
(self.0)(fmt.to_string())
|
||||
}
|
||||
}
|
||||
pub fn write_fmt(&self, fmt: Arguments) {
|
||||
match &self.0 {
|
||||
api::LogStrategy::Discard => (),
|
||||
api::LogStrategy::StdErr => {
|
||||
stderr().write_fmt(fmt).expect("Could not write to stderr!");
|
||||
stderr().flush().expect("Could not flush stderr")
|
||||
},
|
||||
api::LogStrategy::File(f) => {
|
||||
let mut file = (File::options().write(true).create(true).truncate(true).open(f))
|
||||
.expect("Could not open logfile");
|
||||
file.write_fmt(fmt).expect("Could not write to logfile");
|
||||
},
|
||||
impl Logger for TestLogger {
|
||||
fn strat(&self, _category: &str) -> orchid_api::LogStrategy { orchid_api::LogStrategy::Default }
|
||||
fn writer(&self, _category: &str) -> std::rc::Rc<dyn LogWriter> { Rc::new(self.clone()) }
|
||||
}
|
||||
impl TestLogger {
|
||||
pub fn new(f: impl AsyncFn(String) + 'static) -> Self {
|
||||
let f = Rc::new(f);
|
||||
Self(Rc::new(move |s| clone!(f; Box::pin(async move { f(s).await }))))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@@ -6,7 +6,8 @@ use orchid_api_traits::{Decode, Encode};
|
||||
|
||||
pub async fn send_msg(mut write: Pin<&mut impl AsyncWrite>, msg: &[u8]) -> io::Result<()> {
|
||||
let mut len_buf = vec![];
|
||||
u32::try_from(msg.len()).unwrap().encode(Pin::new(&mut len_buf)).await;
|
||||
let len_prefix = u32::try_from(msg.len()).expect("Message over 4GB not permitted on channel");
|
||||
len_prefix.encode_vec(&mut len_buf);
|
||||
write.write_all(&len_buf).await?;
|
||||
write.write_all(msg).await?;
|
||||
write.flush().await
|
||||
@@ -15,7 +16,7 @@ pub async fn send_msg(mut write: Pin<&mut impl AsyncWrite>, msg: &[u8]) -> io::R
|
||||
pub async fn recv_msg(mut read: Pin<&mut impl AsyncRead>) -> io::Result<Vec<u8>> {
|
||||
let mut len_buf = [0u8; (u32::BITS / 8) as usize];
|
||||
read.read_exact(&mut len_buf).await?;
|
||||
let len = u32::decode(Pin::new(&mut &len_buf[..])).await;
|
||||
let len = u32::decode(Pin::new(&mut &len_buf[..])).await?;
|
||||
let mut msg = vec![0u8; len as usize];
|
||||
read.read_exact(&mut msg).await?;
|
||||
Ok(msg)
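A round-trip sketch for the length-prefixed framing, using `futures::io::Cursor` over an in-memory buffer; everything other than `send_msg`/`recv_msg` is assumed to be the standard `futures` API:

use std::pin::Pin;

use futures::executor::block_on;
use futures::io::Cursor;

block_on(async {
  let mut buf = Cursor::new(Vec::<u8>::new());
  // Frame the payload: 4-byte length prefix followed by the message bytes.
  send_msg(Pin::new(&mut buf), b"hello").await.unwrap();
  buf.set_position(0);
  let echoed = recv_msg(Pin::new(&mut buf)).await.unwrap();
  assert_eq!(echoed, b"hello".to_vec());
});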
|
||||
|
||||
@@ -12,65 +12,60 @@ use itertools::Itertools;
|
||||
use trait_set::trait_set;
|
||||
|
||||
use crate::api;
|
||||
use crate::interner::{InternMarker, Interner, Tok};
|
||||
use crate::interner::{IStr, IStrv, es, ev, is, iv};
|
||||
|
||||
trait_set! {
|
||||
/// Traits that all name iterators should implement
|
||||
pub trait NameIter = Iterator<Item = Tok<String>> + DoubleEndedIterator + ExactSizeIterator;
|
||||
pub trait NameIter = Iterator<Item = IStr> + DoubleEndedIterator + ExactSizeIterator;
|
||||
}
|
||||
|
||||
/// A token path which may be empty. [VName] is the non-empty version
|
||||
#[derive(Clone, Default, Hash, PartialEq, Eq)]
|
||||
pub struct VPath(Vec<Tok<String>>);
|
||||
pub struct VPath(Vec<IStr>);
|
||||
impl VPath {
|
||||
/// Collect segments into a vector
|
||||
pub fn new(items: impl IntoIterator<Item = Tok<String>>) -> Self {
|
||||
Self(items.into_iter().collect())
|
||||
}
|
||||
pub fn new(items: impl IntoIterator<Item = IStr>) -> Self { Self(items.into_iter().collect()) }
|
||||
/// Number of path segments
|
||||
pub fn len(&self) -> usize { self.0.len() }
|
||||
/// Whether there are any path segments. In other words, whether this is a
|
||||
/// valid name
|
||||
pub fn is_empty(&self) -> bool { self.len() == 0 }
|
||||
/// Prepend some tokens to the path
|
||||
pub fn prefix(self, items: impl IntoIterator<Item = Tok<String>>) -> Self {
|
||||
pub fn prefix(self, items: impl IntoIterator<Item = IStr>) -> Self {
|
||||
Self(items.into_iter().chain(self.0).collect())
|
||||
}
|
||||
/// Append some tokens to the path
|
||||
pub fn suffix(self, items: impl IntoIterator<Item = Tok<String>>) -> Self {
|
||||
pub fn suffix(self, items: impl IntoIterator<Item = IStr>) -> Self {
|
||||
Self(self.0.into_iter().chain(items).collect())
|
||||
}
|
||||
/// Partition the string by `::` namespace separators
|
||||
pub async fn parse(s: &str, i: &Interner) -> Self {
|
||||
Self(if s.is_empty() { vec![] } else { join_all(s.split("::").map(|s| i.i(s))).await })
|
||||
pub async fn parse(s: &str) -> Self {
|
||||
Self(if s.is_empty() { vec![] } else { join_all(s.split("::").map(is)).await })
|
||||
}
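A parsing sketch, run inside an interner scope as in the interner tests (`with_interner`, `local_interner` and `spin_on` assumed to be imported; `VName`'s `Display` is assumed to join segments with `::` like `VPath`'s):

spin_on(with_interner(local_interner(), async {
  let path = VPath::parse("std::io").await;
  assert_eq!(path.len(), 2);
  let name = path.name_with_suffix(is("print").await);
  assert_eq!(name.to_string(), "std::io::print");
}));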
|
||||
/// Walk over the segments
|
||||
pub fn str_iter(&self) -> impl Iterator<Item = &'_ str> {
|
||||
Box::new(self.0.iter().map(|s| s.as_str()))
|
||||
}
|
||||
pub fn str_iter(&self) -> impl Iterator<Item = &'_ str> { Box::new(self.0.iter().map(|s| &**s)) }
|
||||
/// Try to convert into non-empty version
|
||||
pub fn into_name(self) -> Result<VName, EmptyNameError> { VName::new(self.0) }
|
||||
/// Add a token to the path. Since now we know that it can't be empty, turn it
|
||||
/// into a name.
|
||||
pub fn name_with_suffix(self, name: Tok<String>) -> VName {
|
||||
pub fn name_with_suffix(self, name: IStr) -> VName {
|
||||
VName(self.into_iter().chain([name]).collect())
|
||||
}
|
||||
/// Add a token to the beginning of the path. Since now we know that it can't be
|
||||
/// empty, turn it into a name.
|
||||
pub fn name_with_prefix(self, name: Tok<String>) -> VName {
|
||||
pub fn name_with_prefix(self, name: IStr) -> VName {
|
||||
VName([name].into_iter().chain(self).collect())
|
||||
}
|
||||
|
||||
/// Convert a fs path to a vpath
|
||||
pub async fn from_path(path: &Path, ext: &str, i: &Interner) -> Option<(Self, bool)> {
|
||||
async fn to_vpath(p: &Path, i: &Interner) -> Option<VPath> {
|
||||
let tok_opt_v =
|
||||
join_all(p.iter().map(|c| OptionFuture::from(c.to_str().map(|s| i.i(s))))).await;
|
||||
pub async fn from_path(path: &Path, ext: &str) -> Option<(Self, bool)> {
|
||||
async fn to_vpath(p: &Path) -> Option<VPath> {
|
||||
let tok_opt_v = join_all(p.iter().map(|c| OptionFuture::from(c.to_str().map(is)))).await;
|
||||
tok_opt_v.into_iter().collect::<Option<_>>().map(VPath)
|
||||
}
|
||||
match path.extension().map(|s| s.to_str()) {
|
||||
Some(Some(s)) if s == ext => Some((to_vpath(&path.with_extension(""), i).await?, true)),
|
||||
None => Some((to_vpath(path, i).await?, false)),
|
||||
Some(Some(s)) if s == ext => Some((to_vpath(&path.with_extension("")).await?, true)),
|
||||
None => Some((to_vpath(path).await?, false)),
|
||||
Some(_) => None,
|
||||
}
|
||||
}
|
||||
@@ -83,30 +78,28 @@ impl fmt::Display for VPath {
|
||||
write!(f, "{}", self.str_iter().join("::"))
|
||||
}
|
||||
}
|
||||
impl FromIterator<Tok<String>> for VPath {
|
||||
fn from_iter<T: IntoIterator<Item = Tok<String>>>(iter: T) -> Self {
|
||||
Self(iter.into_iter().collect())
|
||||
}
|
||||
impl FromIterator<IStr> for VPath {
|
||||
fn from_iter<T: IntoIterator<Item = IStr>>(iter: T) -> Self { Self(iter.into_iter().collect()) }
|
||||
}
|
||||
impl IntoIterator for VPath {
|
||||
type Item = Tok<String>;
|
||||
type Item = IStr;
|
||||
type IntoIter = vec::IntoIter<Self::Item>;
|
||||
fn into_iter(self) -> Self::IntoIter { self.0.into_iter() }
|
||||
}
|
||||
impl Borrow<[Tok<String>]> for VPath {
|
||||
fn borrow(&self) -> &[Tok<String>] { &self.0[..] }
|
||||
impl Borrow<[IStr]> for VPath {
|
||||
fn borrow(&self) -> &[IStr] { &self.0[..] }
|
||||
}
|
||||
impl Deref for VPath {
|
||||
type Target = [Tok<String>];
|
||||
type Target = [IStr];
|
||||
fn deref(&self) -> &Self::Target { self.borrow() }
|
||||
}
|
||||
|
||||
impl<T> Index<T> for VPath
|
||||
where [Tok<String>]: Index<T>
|
||||
where [IStr]: Index<T>
|
||||
{
|
||||
type Output = <[Tok<String>] as Index<T>>::Output;
|
||||
type Output = <[IStr] as Index<T>>::Output;
|
||||
|
||||
fn index(&self, index: T) -> &Self::Output { &Borrow::<[Tok<String>]>::borrow(self)[index] }
|
||||
fn index(&self, index: T) -> &Self::Output { &Borrow::<[IStr]>::borrow(self)[index] }
|
||||
}
|
||||
|
||||
/// A mutable representation of a namespaced identifier of at least one segment.
|
||||
@@ -116,50 +109,43 @@ where [Tok<String>]: Index<T>
|
||||
/// See also [Sym] for the immutable representation, and [VPath] for possibly
|
||||
/// empty values
|
||||
#[derive(Clone, Hash, PartialEq, Eq)]
|
||||
pub struct VName(Vec<Tok<String>>);
|
||||
pub struct VName(Vec<IStr>);
|
||||
impl VName {
|
||||
/// Assert that the sequence isn't empty and wrap it in [VName] to represent
|
||||
/// this invariant
|
||||
pub fn new(items: impl IntoIterator<Item = Tok<String>>) -> Result<Self, EmptyNameError> {
|
||||
pub fn new(items: impl IntoIterator<Item = IStr>) -> Result<Self, EmptyNameError> {
|
||||
let data: Vec<_> = items.into_iter().collect();
|
||||
if data.is_empty() { Err(EmptyNameError) } else { Ok(Self(data)) }
|
||||
}
|
||||
pub async fn deintern(
|
||||
name: impl IntoIterator<Item = api::TStr>,
|
||||
i: &Interner,
|
||||
) -> Result<Self, EmptyNameError> {
|
||||
Self::new(join_all(name.into_iter().map(|m| Tok::from_api(m, i))).await)
|
||||
pub async fn deintern(name: impl IntoIterator<Item = api::TStr>) -> Result<Self, EmptyNameError> {
|
||||
Self::new(join_all(name.into_iter().map(es)).await)
|
||||
}
|
||||
/// Unwrap the enclosed vector
|
||||
pub fn into_vec(self) -> Vec<Tok<String>> { self.0 }
|
||||
pub fn into_vec(self) -> Vec<IStr> { self.0 }
|
||||
/// Get a reference to the enclosed vector
|
||||
pub fn vec(&self) -> &Vec<Tok<String>> { &self.0 }
|
||||
pub fn vec(&self) -> &Vec<IStr> { &self.0 }
|
||||
/// Mutable access to the underlying vector. To ensure correct results, this
|
||||
/// must never be empty.
|
||||
pub fn vec_mut(&mut self) -> &mut Vec<Tok<String>> { &mut self.0 }
|
||||
pub fn vec_mut(&mut self) -> &mut Vec<IStr> { &mut self.0 }
|
||||
/// Intern the name and return a [Sym]
|
||||
pub async fn to_sym(&self, i: &Interner) -> Sym { Sym(i.i(&self.0[..]).await) }
|
||||
pub async fn to_sym(&self) -> Sym { Sym(iv(&self.0[..]).await) }
|
||||
/// If this name has only one segment, return it
|
||||
pub fn as_root(&self) -> Option<Tok<String>> { self.0.iter().exactly_one().ok().cloned() }
|
||||
pub fn as_root(&self) -> Option<IStr> { self.0.iter().exactly_one().ok().cloned() }
|
||||
/// Prepend the segments to this name
|
||||
#[must_use = "This is a pure function"]
|
||||
pub fn prefix(self, items: impl IntoIterator<Item = Tok<String>>) -> Self {
|
||||
pub fn prefix(self, items: impl IntoIterator<Item = IStr>) -> Self {
|
||||
Self(items.into_iter().chain(self.0).collect())
|
||||
}
|
||||
/// Append the segments to this name
|
||||
#[must_use = "This is a pure function"]
|
||||
pub fn suffix(self, items: impl IntoIterator<Item = Tok<String>>) -> Self {
|
||||
pub fn suffix(self, items: impl IntoIterator<Item = IStr>) -> Self {
|
||||
Self(self.0.into_iter().chain(items).collect())
|
||||
}
|
||||
/// Read a `::` separated namespaced name
|
||||
pub async fn parse(s: &str, i: &Interner) -> Result<Self, EmptyNameError> {
|
||||
Self::new(VPath::parse(s, i).await)
|
||||
}
|
||||
pub async fn literal(s: &'static str, i: &Interner) -> Self {
|
||||
Self::parse(s, i).await.expect("empty literal !?")
|
||||
}
|
||||
pub async fn parse(s: &str) -> Result<Self, EmptyNameError> { Self::new(VPath::parse(s).await) }
|
||||
pub async fn literal(s: &'static str) -> Self { Self::parse(s).await.expect("empty literal !?") }
|
||||
/// Obtain an iterator over the segments of the name
|
||||
pub fn iter(&self) -> impl Iterator<Item = Tok<String>> + '_ { self.0.iter().cloned() }
|
||||
pub fn iter(&self) -> impl Iterator<Item = IStr> + '_ { self.0.iter().cloned() }
|
||||
}
|
||||
impl fmt::Debug for VName {
|
||||
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { write!(f, "VName({self})") }
|
||||
@@ -170,22 +156,22 @@ impl fmt::Display for VName {
|
||||
}
|
||||
}
|
||||
impl IntoIterator for VName {
|
||||
type Item = Tok<String>;
|
||||
type Item = IStr;
|
||||
type IntoIter = vec::IntoIter<Self::Item>;
|
||||
fn into_iter(self) -> Self::IntoIter { self.0.into_iter() }
|
||||
}
|
||||
impl<T> Index<T> for VName
|
||||
where [Tok<String>]: Index<T>
|
||||
where [IStr]: Index<T>
|
||||
{
|
||||
type Output = <[Tok<String>] as Index<T>>::Output;
|
||||
type Output = <[IStr] as Index<T>>::Output;
|
||||
|
||||
fn index(&self, index: T) -> &Self::Output { &self.deref()[index] }
|
||||
}
|
||||
impl Borrow<[Tok<String>]> for VName {
|
||||
fn borrow(&self) -> &[Tok<String>] { self.0.borrow() }
|
||||
impl Borrow<[IStr]> for VName {
|
||||
fn borrow(&self) -> &[IStr] { self.0.borrow() }
|
||||
}
|
||||
impl Deref for VName {
|
||||
type Target = [Tok<String>];
|
||||
type Target = [IStr];
|
||||
fn deref(&self) -> &Self::Target { self.borrow() }
|
||||
}
|
||||
|
||||
@@ -193,11 +179,9 @@ impl Deref for VName {
|
||||
/// empty sequence
|
||||
#[derive(Debug, Copy, Clone, Default, Hash, PartialEq, Eq, PartialOrd, Ord)]
|
||||
pub struct EmptyNameError;
|
||||
impl TryFrom<&[Tok<String>]> for VName {
|
||||
impl TryFrom<&[IStr]> for VName {
|
||||
type Error = EmptyNameError;
|
||||
fn try_from(value: &[Tok<String>]) -> Result<Self, Self::Error> {
|
||||
Self::new(value.iter().cloned())
|
||||
}
|
||||
fn try_from(value: &[IStr]) -> Result<Self, Self::Error> { Self::new(value.iter().cloned()) }
|
||||
}
|
||||
|
||||
/// An interned representation of a namespaced identifier.
|
||||
@@ -206,37 +190,34 @@ impl TryFrom<&[Tok<String>]> for VName {
|
||||
///
|
||||
/// See also [VName]
|
||||
#[derive(Clone, Hash, PartialEq, Eq)]
|
||||
pub struct Sym(Tok<Vec<Tok<String>>>);
|
||||
pub struct Sym(IStrv);
|
||||
impl Sym {
|
||||
/// Assert that the sequence isn't empty, intern it and wrap it in a [Sym] to
|
||||
/// represent this invariant
|
||||
pub async fn new(
|
||||
v: impl IntoIterator<Item = Tok<String>>,
|
||||
i: &Interner,
|
||||
) -> Result<Self, EmptyNameError> {
|
||||
pub async fn new(v: impl IntoIterator<Item = IStr>) -> Result<Self, EmptyNameError> {
|
||||
let items = v.into_iter().collect_vec();
|
||||
Self::from_tok(i.i(&items).await)
|
||||
Self::from_tok(iv(&items).await)
|
||||
}
|
||||
/// Read a `::` separated namespaced name.
|
||||
pub async fn parse(s: &str, i: &Interner) -> Result<Self, EmptyNameError> {
|
||||
Ok(Sym(i.i(&VName::parse(s, i).await?.into_vec()).await))
|
||||
pub async fn parse(s: &str) -> Result<Self, EmptyNameError> {
|
||||
Ok(Sym(iv(&VName::parse(s).await?.into_vec()).await))
|
||||
}
|
||||
/// Assert that a token isn't empty, and wrap it in a [Sym]
|
||||
pub fn from_tok(t: Tok<Vec<Tok<String>>>) -> Result<Self, EmptyNameError> {
|
||||
pub fn from_tok(t: IStrv) -> Result<Self, EmptyNameError> {
|
||||
if t.is_empty() { Err(EmptyNameError) } else { Ok(Self(t)) }
|
||||
}
|
||||
/// Grab the interner token
|
||||
pub fn tok(&self) -> Tok<Vec<Tok<String>>> { self.0.clone() }
|
||||
pub fn tok(&self) -> IStrv { self.0.clone() }
|
||||
/// Get a number unique to this name suitable for arbitrary ordering.
|
||||
pub fn id(&self) -> NonZeroU64 { self.0.to_api().get_id() }
|
||||
pub fn id(&self) -> NonZeroU64 { self.0.to_api().0 }
|
||||
/// Extern the sym for editing
|
||||
pub fn to_vname(&self) -> VName { VName(self[..].to_vec()) }
|
||||
pub async fn from_api(marker: api::TStrv, i: &Interner) -> Sym {
|
||||
Self::from_tok(Tok::from_api(marker, i).await).expect("Empty sequence found for serialized Sym")
|
||||
pub async fn from_api(marker: api::TStrv) -> Sym {
|
||||
Self::from_tok(ev(marker).await).expect("Empty sequence found for serialized Sym")
|
||||
}
|
||||
pub fn to_api(&self) -> api::TStrv { self.tok().to_api() }
|
||||
pub async fn suffix(&self, tokv: impl IntoIterator<Item = Tok<String>>, i: &Interner) -> Sym {
|
||||
Self::new(self.0.iter().cloned().chain(tokv), i).await.unwrap()
|
||||
pub async fn suffix(&self, tokv: impl IntoIterator<Item = IStr>) -> Sym {
|
||||
Self::new(self.0.iter().cloned().chain(tokv)).await.unwrap()
|
||||
}
|
||||
}
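// Illustrative sketch (not part of the diff): constructing and extending a `Sym`,
// assuming the interner helper `is` and the `NameLike` trait below are in scope.
async fn sym_usage_example() -> Result<(), EmptyNameError> {
  // Parse a `::`-separated path; every segment is interned on the way in.
  let print = Sym::parse("std::io::print").await?;
  assert_eq!(&*print.last_seg(), "print");
  // Extend an existing name without re-parsing the whole path.
  let io = Sym::parse("std::io").await?;
  assert_eq!(io.suffix([is("print").await]).await, print);
  Ok(())
}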
|
||||
impl fmt::Debug for Sym {
|
||||
@@ -248,17 +229,17 @@ impl fmt::Display for Sym {
|
||||
}
|
||||
}
|
||||
impl<T> Index<T> for Sym
|
||||
where [Tok<String>]: Index<T>
|
||||
where [IStr]: Index<T>
|
||||
{
|
||||
type Output = <[Tok<String>] as Index<T>>::Output;
|
||||
type Output = <[IStr] as Index<T>>::Output;
|
||||
|
||||
fn index(&self, index: T) -> &Self::Output { &self.deref()[index] }
|
||||
}
|
||||
impl Borrow<[Tok<String>]> for Sym {
|
||||
fn borrow(&self) -> &[Tok<String>] { &self.0[..] }
|
||||
impl Borrow<[IStr]> for Sym {
|
||||
fn borrow(&self) -> &[IStr] { &self.0[..] }
|
||||
}
|
||||
impl Deref for Sym {
|
||||
type Target = [Tok<String>];
|
||||
type Target = [IStr];
|
||||
fn deref(&self) -> &Self::Target { self.borrow() }
|
||||
}
|
||||
|
||||
@@ -266,16 +247,14 @@ impl Deref for Sym {
|
||||
/// handled together in datastructures. The names can never be empty
|
||||
#[allow(clippy::len_without_is_empty)] // never empty
|
||||
pub trait NameLike:
|
||||
'static + Clone + Eq + Hash + fmt::Debug + fmt::Display + Borrow<[Tok<String>]>
|
||||
'static + Clone + Eq + Hash + fmt::Debug + fmt::Display + Borrow<[IStr]>
|
||||
{
|
||||
/// Convert into held slice
|
||||
fn as_slice(&self) -> &[Tok<String>] { Borrow::<[Tok<String>]>::borrow(self) }
|
||||
fn as_slice(&self) -> &[IStr] { Borrow::<[IStr]>::borrow(self) }
|
||||
/// Get iterator over tokens
|
||||
fn segs(&self) -> impl NameIter + '_ { self.as_slice().iter().cloned() }
|
||||
/// Get iterator over string segments
|
||||
fn str_iter(&self) -> impl Iterator<Item = &'_ str> + '_ {
|
||||
self.as_slice().iter().map(|t| t.as_str())
|
||||
}
|
||||
fn str_iter(&self) -> impl Iterator<Item = &'_ str> + '_ { self.as_slice().iter().map(|t| &**t) }
|
||||
/// Fully resolve the name for printing
|
||||
#[must_use]
|
||||
fn to_strv(&self) -> Vec<String> { self.segs().map(|s| s.to_string()).collect() }
|
||||
@@ -286,19 +265,19 @@ pub trait NameLike:
|
||||
NonZeroUsize::try_from(self.segs().count()).expect("NameLike never empty")
|
||||
}
|
||||
/// Like slice's `split_first` except we know that it always returns Some
|
||||
fn split_first_seg(&self) -> (Tok<String>, &[Tok<String>]) {
|
||||
fn split_first_seg(&self) -> (IStr, &[IStr]) {
|
||||
let (head, tail) = self.as_slice().split_first().expect("NameLike never empty");
(head.clone(), tail)
|
||||
}
|
||||
/// Like slice's `split_last` except we know that it always returns Some
|
||||
fn split_last_seg(&self) -> (Tok<String>, &[Tok<String>]) {
|
||||
fn split_last_seg(&self) -> (IStr, &[IStr]) {
|
||||
let (foot, torso) = self.as_slice().split_last().expect("NameLike never empty");
|
||||
(foot.clone(), torso)
|
||||
}
|
||||
/// Get the first element
|
||||
fn first_seg(&self) -> Tok<String> { self.split_first_seg().0 }
|
||||
fn first_seg(&self) -> IStr { self.split_first_seg().0 }
|
||||
/// Get the last element
|
||||
fn last_seg(&self) -> Tok<String> { self.split_last_seg().0 }
|
||||
fn last_seg(&self) -> IStr { self.split_last_seg().0 }
|
||||
}
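// Minimal sketch (not part of the diff) of the `NameLike` segment helpers, assuming
// `split_first_seg` returns the leading segment as its documentation states.
async fn namelike_example() {
  let name = VName::parse("std::io::print").await.unwrap();
  let (head, tail) = name.split_first_seg();
  assert_eq!(&*head, "std");
  assert_eq!(tail, &name[1..]);
  let (last, init) = name.split_last_seg();
  assert_eq!(&*last, "print");
  assert_eq!(init.len(), 2);
  assert_eq!(name.str_iter().collect::<Vec<_>>(), ["std", "io", "print"]);
}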
|
||||
|
||||
impl NameLike for Sym {}
|
||||
@@ -311,17 +290,15 @@ impl NameLike for VName {}
|
||||
/// cloning the token.
|
||||
#[macro_export]
|
||||
macro_rules! sym {
|
||||
($seg1:tt $( :: $seg:tt)* ; $i:expr) => { async {
|
||||
($seg1:tt $( :: $seg:tt)*) => {
|
||||
$crate::name::Sym::from_tok(
|
||||
$i.i(&[
|
||||
$i.i(stringify!($seg1)).await
|
||||
$( , $i.i(stringify!($seg)).await )*
|
||||
$crate::interner::iv(&[
|
||||
$crate::interner::is(stringify!($seg1)).await
|
||||
$( , $crate::interner::is(stringify!($seg)).await )*
|
||||
])
|
||||
.await
|
||||
).unwrap()
|
||||
}
|
||||
};
|
||||
(@NAME $seg:tt) => {}
|
||||
}
|
||||
|
||||
/// Create a [VName] literal.
|
||||
@@ -329,12 +306,12 @@ macro_rules! sym {
|
||||
/// The components are interned much like in [sym].
|
||||
#[macro_export]
|
||||
macro_rules! vname {
|
||||
($seg1:tt $( :: $seg:tt)* ; $i:expr) => { async {
|
||||
($seg1:tt $( :: $seg:tt)*) => {
|
||||
$crate::name::VName::new([
|
||||
$i.i(stringify!($seg1)).await
|
||||
$( , $i.i(stringify!($seg)).await )*
|
||||
$crate::interner::is(stringify!($seg1)).await
|
||||
$( , $crate::interner::is(stringify!($seg)).await )*
|
||||
]).unwrap()
|
||||
} };
|
||||
};
|
||||
}
|
||||
|
||||
/// Create a [VPath] literal.
|
||||
@@ -342,54 +319,45 @@ macro_rules! vname {
|
||||
/// The components are interned much like in [sym].
|
||||
#[macro_export]
|
||||
macro_rules! vpath {
|
||||
($seg1:tt $( :: $seg:tt)+ ; $i:expr) => { async {
|
||||
($seg1:tt $( :: $seg:tt)*) => {
|
||||
$crate::name::VPath(vec![
|
||||
$i.i(stringify!($seg1)).await
|
||||
$( , $i.i(stringify!($seg)).await )+
|
||||
$crate::interner::is(stringify!($seg1)).await
|
||||
$( , $crate::interner::is(stringify!($seg)).await )*
|
||||
])
|
||||
} };
|
||||
};
|
||||
() => {
|
||||
$crate::name::VPath(vec![])
|
||||
}
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
mod test {
|
||||
pub mod test {
|
||||
use std::borrow::Borrow;
|
||||
|
||||
use test_executors::spin_on;
|
||||
|
||||
use super::{NameLike, Sym, VName};
|
||||
use crate::interner::{Interner, Tok};
|
||||
use crate::interner::{IStr, is};
|
||||
use crate::name::VPath;
|
||||
|
||||
#[test]
|
||||
fn recur() {
|
||||
spin_on(async {
|
||||
let i = Interner::new_master();
|
||||
let myname = vname!(foo::bar; i).await;
|
||||
let _borrowed_slice: &[Tok<String>] = myname.borrow();
|
||||
let _deref_pathslice: &[Tok<String>] = &myname;
|
||||
let _as_slice_out: &[Tok<String>] = myname.as_slice();
|
||||
})
|
||||
pub async fn recur() {
|
||||
let myname = vname!(foo::bar);
|
||||
let _borrowed_slice: &[IStr] = myname.borrow();
|
||||
let _deref_pathslice: &[IStr] = &myname;
|
||||
let _as_slice_out: &[IStr] = myname.as_slice();
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn literals() {
|
||||
spin_on(async {
|
||||
let i = Interner::new_master();
|
||||
assert_eq!(
|
||||
sym!(foo::bar::baz; i).await,
|
||||
Sym::new([i.i("foo").await, i.i("bar").await, i.i("baz").await], &i).await.unwrap()
|
||||
);
|
||||
assert_eq!(
|
||||
vname!(foo::bar::baz; i).await,
|
||||
VName::new([i.i("foo").await, i.i("bar").await, i.i("baz").await]).unwrap()
|
||||
);
|
||||
assert_eq!(
|
||||
vpath!(foo::bar::baz; i).await,
|
||||
VPath::new([i.i("foo").await, i.i("bar").await, i.i("baz").await])
|
||||
);
|
||||
})
|
||||
/// Tests that literals are correctly interned as equal
|
||||
pub async fn literals() {
|
||||
assert_eq!(
|
||||
sym!(foo::bar::baz),
|
||||
Sym::new([is("foo").await, is("bar").await, is("baz").await]).await.unwrap()
|
||||
);
|
||||
assert_eq!(
|
||||
vname!(foo::bar::baz),
|
||||
VName::new([is("foo").await, is("bar").await, is("baz").await]).unwrap()
|
||||
);
|
||||
assert_eq!(
|
||||
vpath!(foo::bar::baz),
|
||||
VPath::new([is("foo").await, is("bar").await, is("baz").await])
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -4,7 +4,7 @@ use std::ops::Range;
|
||||
use ordered_float::NotNan;
|
||||
|
||||
use crate::error::{OrcErrv, mk_errv};
|
||||
use crate::interner::Interner;
|
||||
use crate::interner::is;
|
||||
use crate::location::SrcRange;
|
||||
use crate::name::Sym;
|
||||
|
||||
@@ -55,14 +55,9 @@ pub struct NumError {
|
||||
pub kind: NumErrorKind,
|
||||
}
|
||||
|
||||
pub async fn num_to_errv(
|
||||
NumError { kind, range }: NumError,
|
||||
offset: u32,
|
||||
source: &Sym,
|
||||
i: &Interner,
|
||||
) -> OrcErrv {
|
||||
pub async fn num_to_errv(NumError { kind, range }: NumError, offset: u32, source: &Sym) -> OrcErrv {
|
||||
mk_errv(
|
||||
i.i("Failed to parse number").await,
|
||||
is("Failed to parse number").await,
|
||||
match kind {
|
||||
NumErrorKind::NaN => "NaN emerged during parsing",
|
||||
NumErrorKind::InvalidDigit => "non-digit character encountered",
|
||||
|
||||
@@ -7,28 +7,13 @@ use futures::future::join_all;
|
||||
use itertools::Itertools;
|
||||
|
||||
use crate::api;
|
||||
use crate::error::{OrcErrv, OrcRes, Reporter, mk_errv};
|
||||
use crate::error::{OrcErrv, OrcRes, mk_errv, report};
|
||||
use crate::format::{FmtCtx, FmtUnit, Format, fmt};
|
||||
use crate::interner::{Interner, Tok};
|
||||
use crate::interner::{IStr, es, is};
|
||||
use crate::location::SrcRange;
|
||||
use crate::name::{Sym, VName, VPath};
|
||||
use crate::tree::{ExprRepr, ExtraTok, Paren, TokTree, Token, ttv_fmt, ttv_range};
|
||||
|
||||
pub trait ParseCtx {
|
||||
#[must_use]
|
||||
fn i(&self) -> &Interner;
|
||||
#[must_use]
|
||||
fn rep(&self) -> &Reporter;
|
||||
}
|
||||
pub struct ParseCtxImpl<'a> {
|
||||
pub i: &'a Interner,
|
||||
pub r: &'a Reporter,
|
||||
}
|
||||
impl ParseCtx for ParseCtxImpl<'_> {
|
||||
fn i(&self) -> &Interner { self.i }
|
||||
fn rep(&self) -> &Reporter { self.r }
|
||||
}
|
||||
|
||||
pub fn name_start(c: char) -> bool { c.is_alphabetic() || c == '_' }
|
||||
pub fn name_char(c: char) -> bool { name_start(c) || c.is_numeric() }
|
||||
pub fn op_char(c: char) -> bool { !name_char(c) && !c.is_whitespace() && !"()[]{}\\".contains(c) }
|
||||
@@ -103,22 +88,22 @@ impl<A: ExprRepr, X: ExtraTok> Format for Snippet<'_, A, X> {
|
||||
|
||||
#[derive(Clone, Debug)]
|
||||
pub struct Comment {
|
||||
pub text: Tok<String>,
|
||||
pub text: IStr,
|
||||
pub sr: SrcRange,
|
||||
}
|
||||
impl Comment {
|
||||
// XXX: which of these four are actually used?
|
||||
pub async fn from_api(c: &api::Comment, src: Sym, i: &Interner) -> Self {
|
||||
Self { text: i.ex(c.text).await, sr: SrcRange::new(c.range.clone(), &src) }
|
||||
pub async fn from_api(c: &api::Comment, src: Sym) -> Self {
|
||||
Self { text: es(c.text).await, sr: SrcRange::new(c.range.clone(), &src) }
|
||||
}
|
||||
pub async fn from_tk(tk: &TokTree<impl ExprRepr, impl ExtraTok>, i: &Interner) -> Option<Self> {
|
||||
pub async fn from_tk(tk: &TokTree<impl ExprRepr, impl ExtraTok>) -> Option<Self> {
|
||||
match &tk.tok {
|
||||
Token::Comment(text) => Some(Self { text: i.i(&**text).await, sr: tk.sr.clone() }),
|
||||
Token::Comment(text) => Some(Self { text: text.clone(), sr: tk.sr.clone() }),
|
||||
_ => None,
|
||||
}
|
||||
}
|
||||
pub fn to_tk<R: ExprRepr, X: ExtraTok>(&self) -> TokTree<R, X> {
|
||||
TokTree { tok: Token::Comment(self.text.rc().clone()), sr: self.sr.clone() }
|
||||
TokTree { tok: Token::Comment(self.text.clone()), sr: self.sr.clone() }
|
||||
}
|
||||
pub fn to_api(&self) -> api::Comment {
|
||||
api::Comment { range: self.sr.range(), text: self.text.to_api() }
|
||||
@@ -130,7 +115,6 @@ impl fmt::Display for Comment {
|
||||
}
|
||||
|
||||
pub async fn line_items<'a, A: ExprRepr, X: ExtraTok>(
|
||||
ctx: &impl ParseCtx,
|
||||
snip: Snippet<'a, A, X>,
|
||||
) -> Vec<Parsed<'a, Vec<Comment>, A, X>> {
|
||||
let mut items = Vec::new();
|
||||
@@ -145,9 +129,10 @@ pub async fn line_items<'a, A: ExprRepr, X: ExtraTok>(
|
||||
None => comments.extend(line.cur),
|
||||
Some(i) => {
|
||||
let (cmts, tail) = line.split_at(i);
|
||||
let comments = join_all(comments.drain(..).chain(cmts.cur).map(|t| async {
|
||||
Comment::from_tk(t, ctx.i()).await.expect("All are comments checked above")
|
||||
}))
|
||||
let comments = join_all(
|
||||
(comments.drain(..).chain(cmts.cur))
|
||||
.map(|t| async { Comment::from_tk(t).await.expect("All are comments checked above") }),
|
||||
)
|
||||
.await;
|
||||
items.push(Parsed { output: comments, tail });
|
||||
},
|
||||
@@ -157,26 +142,21 @@ pub async fn line_items<'a, A: ExprRepr, X: ExtraTok>(
|
||||
}
|
||||
|
||||
pub async fn try_pop_no_fluff<'a, A: ExprRepr, X: ExtraTok>(
|
||||
ctx: &impl ParseCtx,
|
||||
snip: Snippet<'a, A, X>,
|
||||
) -> ParseRes<'a, &'a TokTree<A, X>, A, X> {
|
||||
match snip.skip_fluff().pop_front() {
|
||||
Some((output, tail)) => Ok(Parsed { output, tail }),
|
||||
None => Err(mk_errv(
|
||||
ctx.i().i("Unexpected end").await,
|
||||
"Line ends abruptly; more tokens were expected",
|
||||
[snip.sr()],
|
||||
)),
|
||||
None =>
|
||||
Err(mk_errv(is("Unexpected end").await, "Line ends abruptly; more tokens were expected", [
|
||||
snip.sr(),
|
||||
])),
|
||||
}
|
||||
}
|
||||
|
||||
pub async fn expect_end(
|
||||
ctx: &impl ParseCtx,
|
||||
snip: Snippet<'_, impl ExprRepr, impl ExtraTok>,
|
||||
) -> OrcRes<()> {
|
||||
pub async fn expect_end(snip: Snippet<'_, impl ExprRepr, impl ExtraTok>) -> OrcRes<()> {
|
||||
match snip.skip_fluff().get(0) {
|
||||
Some(surplus) => Err(mk_errv(
|
||||
ctx.i().i("Extra code after end of line").await,
|
||||
is("Extra code after end of line").await,
|
||||
"Code found after the end of the line",
|
||||
[surplus.sr.pos()],
|
||||
)),
|
||||
@@ -185,28 +165,26 @@ pub async fn expect_end(
|
||||
}
|
||||
|
||||
pub async fn expect_tok<'a, A: ExprRepr, X: ExtraTok>(
|
||||
ctx: &impl ParseCtx,
|
||||
snip: Snippet<'a, A, X>,
|
||||
tok: Tok<String>,
|
||||
tok: IStr,
|
||||
) -> ParseRes<'a, (), A, X> {
|
||||
let Parsed { output: head, tail } = try_pop_no_fluff(ctx, snip).await?;
|
||||
let Parsed { output: head, tail } = try_pop_no_fluff(snip).await?;
|
||||
match &head.tok {
|
||||
Token::Name(n) if *n == tok => Ok(Parsed { output: (), tail }),
|
||||
t => Err(mk_errv(
|
||||
ctx.i().i("Expected specific keyword").await,
|
||||
format!("Expected {tok} but found {:?}", fmt(t, ctx.i()).await),
|
||||
is("Expected specific keyword").await,
|
||||
format!("Expected {tok} but found {:?}", fmt(t).await),
|
||||
[head.sr()],
|
||||
)),
|
||||
}
|
||||
}
|
||||
|
||||
pub async fn token_errv<A: ExprRepr, X: ExtraTok>(
|
||||
ctx: &impl ParseCtx,
|
||||
tok: &TokTree<A, X>,
|
||||
description: &'static str,
|
||||
message: impl FnOnce(&str) -> String,
|
||||
) -> OrcErrv {
|
||||
mk_errv(ctx.i().i(description).await, message(&fmt(tok, ctx.i()).await), [tok.sr.pos()])
|
||||
mk_errv(is(description).await, message(&fmt(tok).await), [tok.sr.pos()])
|
||||
}
|
||||
|
||||
pub struct Parsed<'a, T, H: ExprRepr, X: ExtraTok> {
|
||||
@@ -217,33 +195,27 @@ pub struct Parsed<'a, T, H: ExprRepr, X: ExtraTok> {
|
||||
pub type ParseRes<'a, T, H, X> = OrcRes<Parsed<'a, T, H, X>>;
|
||||
|
||||
pub async fn parse_multiname<'a, A: ExprRepr, X: ExtraTok>(
|
||||
ctx: &impl ParseCtx,
|
||||
tail: Snippet<'a, A, X>,
|
||||
) -> ParseRes<'a, Vec<Import>, A, X> {
|
||||
let Some((tt, tail)) = tail.skip_fluff().pop_front() else {
|
||||
return Err(mk_errv(
|
||||
ctx.i().i("Expected token").await,
|
||||
is("Expected token").await,
|
||||
"Expected a name, a parenthesized list of names, or a globstar.",
|
||||
[tail.sr().pos()],
|
||||
));
|
||||
};
|
||||
let ret = rec(tt, ctx).await;
|
||||
let ret = rec(tt).await;
|
||||
#[allow(clippy::type_complexity)] // it's an internal function
|
||||
pub async fn rec<A: ExprRepr, X: ExtraTok>(
|
||||
tt: &TokTree<A, X>,
|
||||
ctx: &impl ParseCtx,
|
||||
) -> OrcRes<Vec<(Vec<Tok<String>>, Option<Tok<String>>, SrcRange)>> {
|
||||
) -> OrcRes<Vec<(Vec<IStr>, Option<IStr>, SrcRange)>> {
|
||||
let ttpos = tt.sr.pos();
|
||||
match &tt.tok {
|
||||
Token::NS(ns, body) => {
|
||||
if !ns.starts_with(name_start) {
|
||||
ctx.rep().report(mk_errv(
|
||||
ctx.i().i("Unexpected name prefix").await,
|
||||
"Only names can precede ::",
|
||||
[ttpos],
|
||||
))
|
||||
report(mk_errv(is("Unexpected name prefix").await, "Only names can precede ::", [ttpos]))
|
||||
};
|
||||
let out = Box::pin(rec(body, ctx)).await?;
|
||||
let out = Box::pin(rec(body)).await?;
|
||||
Ok(out.into_iter().update(|i| i.0.push(ns.clone())).collect_vec())
|
||||
},
|
||||
Token::Name(ntok) => {
|
||||
@@ -255,21 +227,19 @@ pub async fn parse_multiname<'a, A: ExprRepr, X: ExtraTok>(
|
||||
let mut o = Vec::new();
|
||||
let mut body = Snippet::new(tt, b);
|
||||
while let Some((output, tail)) = body.pop_front() {
|
||||
match rec(output, ctx).boxed_local().await {
|
||||
match rec(output).boxed_local().await {
|
||||
Ok(names) => o.extend(names),
|
||||
Err(e) => ctx.rep().report(e),
|
||||
Err(e) => report(e),
|
||||
}
|
||||
body = tail;
|
||||
}
|
||||
Ok(o)
|
||||
},
|
||||
t => {
|
||||
return Err(mk_errv(
|
||||
ctx.i().i("Unrecognized name end").await,
|
||||
format!("Names cannot end with {:?} tokens", fmt(t, ctx.i()).await),
|
||||
[ttpos],
|
||||
));
|
||||
},
|
||||
t => Err(mk_errv(
|
||||
is("Unrecognized name end").await,
|
||||
format!("Names cannot end with {:?} tokens", fmt(t).await),
|
||||
[ttpos],
|
||||
)),
|
||||
}
|
||||
}
|
||||
ret.map(|output| {
|
||||
@@ -285,7 +255,7 @@ pub async fn parse_multiname<'a, A: ExprRepr, X: ExtraTok>(
|
||||
#[derive(Debug, Clone)]
|
||||
pub struct Import {
|
||||
pub path: VPath,
|
||||
pub name: Option<Tok<String>>,
|
||||
pub name: Option<IStr>,
|
||||
pub sr: SrcRange,
|
||||
}
|
||||
impl Import {
|
||||
@@ -296,14 +266,14 @@ impl Import {
|
||||
None => self.path.into_name().expect("Import cannot be empty"),
|
||||
}
|
||||
}
|
||||
pub fn new(sr: SrcRange, path: VPath, name: Tok<String>) -> Self {
|
||||
pub fn new(sr: SrcRange, path: VPath, name: IStr) -> Self {
|
||||
Import { path, name: Some(name), sr }
|
||||
}
|
||||
pub fn new_glob(sr: SrcRange, path: VPath) -> Self { Import { path, name: None, sr } }
|
||||
}
|
||||
impl Display for Import {
|
||||
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
|
||||
write!(f, "{}::{}", self.path.iter().join("::"), self.name.as_ref().map_or("*", |t| t.as_str()))
|
||||
write!(f, "{}::{}", self.path.iter().join("::"), self.name.as_ref().map_or("*", |t| &**t))
|
||||
}
|
||||
}
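// Minimal sketch (not part of the diff): how `Import` values render, assuming some
// `SrcRange` value `sr` is available and the `vpath!` and `is` helpers are in scope.
async fn import_display_example(sr: SrcRange) {
  let named = Import::new(sr.clone(), vpath!(std::io), is("print").await);
  assert_eq!(named.to_string(), "std::io::print");
  let glob = Import::new_glob(sr, vpath!(std::io));
  assert_eq!(glob.to_string(), "std::io::*");
}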
|
||||
|
||||
|
||||
@@ -1,342 +1,573 @@
|
||||
use std::cell::RefCell;
|
||||
use std::future::Future;
|
||||
use std::marker::PhantomData;
|
||||
use std::mem;
|
||||
use std::ops::{BitAnd, Deref};
|
||||
use std::pin::Pin;
|
||||
use std::sync::Arc;
|
||||
use std::sync::atomic::{AtomicBool, Ordering};
|
||||
use std::pin::{Pin, pin};
|
||||
use std::rc::Rc;
|
||||
use std::{io, mem};
|
||||
|
||||
use async_fn_stream::try_stream;
|
||||
use bound::Bound;
|
||||
use derive_destructure::destructure;
|
||||
use dyn_clone::{DynClone, clone_box};
|
||||
use futures::channel::mpsc;
|
||||
use futures::channel::mpsc::{self, Receiver, Sender, channel};
|
||||
use futures::channel::oneshot;
|
||||
use futures::future::LocalBoxFuture;
|
||||
use futures::lock::Mutex;
|
||||
use futures::{SinkExt, StreamExt};
|
||||
use futures::lock::{Mutex, MutexGuard};
|
||||
use futures::{
|
||||
AsyncRead, AsyncWrite, AsyncWriteExt, FutureExt, SinkExt, Stream, StreamExt, stream_select,
|
||||
};
|
||||
use hashbrown::HashMap;
|
||||
use orchid_api_traits::{Channel, Coding, Decode, Encode, MsgSet, Request};
|
||||
use trait_set::trait_set;
|
||||
use orchid_api_traits::{Decode, Encode, Request, UnderRoot};
|
||||
|
||||
use crate::clone;
|
||||
use crate::logging::Logger;
|
||||
use crate::localset::LocalSet;
|
||||
|
||||
#[must_use = "Receipts indicate that a required action has been performed within a function. \
|
||||
Most likely this should be returned somewhere."]
|
||||
pub struct Receipt<'a>(PhantomData<&'a mut ()>);
|
||||
|
||||
trait_set! {
|
||||
pub trait SendFn<T: MsgSet> =
|
||||
for<'a> FnMut(&'a [u8], ReqNot<T>) -> LocalBoxFuture<'a, ()>
|
||||
+ DynClone + 'static;
|
||||
pub trait ReqFn<T: MsgSet> =
|
||||
for<'a> FnMut(RequestHandle<'a, T>, <T::In as Channel>::Req)
|
||||
-> LocalBoxFuture<'a, Receipt<'a>>
|
||||
+ DynClone + 'static;
|
||||
pub trait NotifFn<T: MsgSet> =
|
||||
FnMut(<T::In as Channel>::Notif, ReqNot<T>) -> LocalBoxFuture<'static, ()>
|
||||
+ DynClone + 'static;
|
||||
impl Receipt<'_> {
|
||||
/// Only call this function from a custom implementation of [RepWriter]
|
||||
pub fn _new() -> Self { Self(PhantomData) }
|
||||
}
|
||||
|
||||
fn get_id(message: &[u8]) -> (u64, &[u8]) {
|
||||
(u64::from_be_bytes(message[..8].to_vec().try_into().unwrap()), &message[8..])
|
||||
/// Write guard to outbound for the purpose of serializing a request. Only one
|
||||
/// can exist at a time. Dropping this object should panic.
|
||||
pub trait ReqWriter<'a> {
|
||||
/// Access to the underlying channel. This may be buffered.
|
||||
fn writer(&mut self) -> Pin<&mut dyn AsyncWrite>;
|
||||
/// Finalize the request, release the outbound channel, then queue for the
|
||||
/// reply on the inbound channel.
|
||||
fn send(self: Box<Self>) -> LocalBoxFuture<'a, io::Result<Box<dyn RepReader<'a> + 'a>>>;
|
||||
}
|
||||
|
||||
pub trait ReqHandlish {
|
||||
fn defer(&self, cb: impl Future<Output = ()> + 'static)
|
||||
where Self: Sized {
|
||||
self.defer_objsafe(Box::pin(cb));
|
||||
}
|
||||
fn defer_objsafe(&self, val: Pin<Box<dyn Future<Output = ()>>>);
|
||||
}
|
||||
impl ReqHandlish for &'_ dyn ReqHandlish {
|
||||
fn defer_objsafe(&self, val: Pin<Box<dyn Future<Output = ()>>>) { (**self).defer_objsafe(val) }
|
||||
/// Write guard to inbound for the purpose of deserializing a reply. While held,
|
||||
/// no inbound requests or other replies can be processed.
|
||||
///
|
||||
/// Dropping this object should panic even if [RepReader::finish] returns
|
||||
/// synchronously, because the API isn't cancellation-safe in general, so it is a
/// programmer error in all cases to drop an object related to it without proper
/// cleanup.
|
||||
pub trait RepReader<'a> {
|
||||
/// Access to the underlying channel. The length of the message is inferred
|
||||
/// from the number of bytes read so this must not be buffered.
|
||||
fn reader(&mut self) -> Pin<&mut dyn AsyncRead>;
|
||||
/// Finish reading the request
|
||||
fn finish(self: Box<Self>) -> LocalBoxFuture<'a, ()>;
|
||||
}
|
||||
|
||||
#[derive(destructure)]
|
||||
pub struct RequestHandle<'a, MS: MsgSet> {
|
||||
defer: RefCell<Vec<Pin<Box<dyn Future<Output = ()>>>>>,
|
||||
fulfilled: AtomicBool,
|
||||
id: u64,
|
||||
_reqlt: PhantomData<&'a mut ()>,
|
||||
parent: ReqNot<MS>,
|
||||
/// Write guard to outbound for the purpose of serializing a notification.
|
||||
///
|
||||
/// Dropping this object should panic for the same reason [RepReader] panics
|
||||
pub trait MsgWriter<'a> {
|
||||
/// Access to the underlying channel. This may be buffered.
|
||||
fn writer(&mut self) -> Pin<&mut dyn AsyncWrite>;
|
||||
/// Send the notification
|
||||
fn finish(self: Box<Self>) -> LocalBoxFuture<'a, io::Result<()>>;
|
||||
}
|
||||
impl<'a, MS: MsgSet + 'static> RequestHandle<'a, MS> {
|
||||
fn new(parent: ReqNot<MS>, id: u64) -> Self {
|
||||
Self { defer: RefCell::default(), fulfilled: false.into(), _reqlt: PhantomData, parent, id }
|
||||
}
|
||||
pub fn reqnot(&self) -> ReqNot<MS> { self.parent.clone() }
|
||||
pub async fn handle<U: Request>(&self, _: &U, rep: &U::Response) -> Receipt<'a> {
|
||||
self.respond(rep).await
|
||||
}
|
||||
pub fn will_handle_as<U: Request>(&self, _: &U) -> ReqTypToken<U> { ReqTypToken(PhantomData) }
|
||||
pub async fn handle_as<U: Request>(&self, _: ReqTypToken<U>, rep: &U::Response) -> Receipt<'a> {
|
||||
self.respond(rep).await
|
||||
}
|
||||
pub async fn respond(&self, response: &impl Encode) -> Receipt<'a> {
|
||||
assert!(!self.fulfilled.swap(true, Ordering::Relaxed), "Already responded to {}", self.id);
|
||||
let mut buf = (!self.id).to_be_bytes().to_vec();
|
||||
response.encode(Pin::new(&mut buf)).await;
|
||||
let mut send = clone_box(&*self.reqnot().0.lock().await.send);
|
||||
(send)(&buf, self.parent.clone()).await;
|
||||
let deferred = mem::take(&mut *self.defer.borrow_mut());
|
||||
for item in deferred {
|
||||
item.await
|
||||
}
|
||||
Receipt(PhantomData)
|
||||
}
|
||||
|
||||
/// For initiating outbound requests and notifications
|
||||
pub trait Client {
|
||||
fn start_request(&self) -> LocalBoxFuture<'_, io::Result<Box<dyn ReqWriter<'_> + '_>>>;
|
||||
fn start_notif(&self) -> LocalBoxFuture<'_, io::Result<Box<dyn MsgWriter<'_> + '_>>>;
|
||||
}
|
||||
impl<MS: MsgSet> ReqHandlish for RequestHandle<'_, MS> {
|
||||
fn defer_objsafe(&self, val: Pin<Box<dyn Future<Output = ()>>>) {
|
||||
self.defer.borrow_mut().push(val)
|
||||
|
||||
impl<T: Client + ?Sized> ClientExt for T {}
|
||||
/// Extension trait with convenience methods that handle outbound request and
|
||||
/// notif lifecycle and typing
|
||||
#[allow(async_fn_in_trait)]
|
||||
pub trait ClientExt: Client {
|
||||
async fn request<T: Request + UnderRoot<Root: Encode>>(&self, t: T) -> io::Result<T::Response> {
|
||||
let mut req = self.start_request().await?;
|
||||
t.into_root().encode(req.writer().as_mut()).await?;
|
||||
let mut rep = req.send().await?;
|
||||
let response = T::Response::decode(rep.reader()).await;
|
||||
rep.finish().await;
|
||||
response
|
||||
}
|
||||
}
|
||||
impl<MS: MsgSet> Drop for RequestHandle<'_, MS> {
|
||||
fn drop(&mut self) {
|
||||
let done = self.fulfilled.load(Ordering::Relaxed);
|
||||
debug_assert!(done, "Request {} dropped without response", self.id)
|
||||
async fn notify<T: UnderRoot<Root: Encode>>(&self, t: T) -> io::Result<()> {
|
||||
let mut notif = self.start_notif().await?;
|
||||
t.into_root().encode(notif.writer().as_mut()).await?;
|
||||
notif.finish().await?;
|
||||
Ok(())
|
||||
}
|
||||
}
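// Minimal usage sketch (not part of the diff). `Ping` is a hypothetical request type
// with `Response = u64` whose root message is encodable; the peer is assumed to
// answer with the payload plus one, as in the tests below.
async fn client_ext_example(client: &impl Client) -> io::Result<()> {
  let pong = client.request(Ping(7)).await?; // serialize, await the reply frame, decode
  assert_eq!(pong, 8);
  client.notify(Ping(0)).await // fire-and-forget: just encode and flush
}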
|
||||
|
||||
pub struct ReqTypToken<T>(PhantomData<T>);
|
||||
|
||||
pub struct ReqNotData<T: MsgSet> {
|
||||
id: u64,
|
||||
send: Box<dyn SendFn<T>>,
|
||||
notif: Box<dyn NotifFn<T>>,
|
||||
req: Box<dyn ReqFn<T>>,
|
||||
responses: HashMap<u64, mpsc::Sender<Vec<u8>>>,
|
||||
pub trait ReqReader<'a> {
|
||||
fn reader(&mut self) -> Pin<&mut dyn AsyncRead>;
|
||||
fn finish(self: Box<Self>) -> LocalBoxFuture<'a, Box<dyn ReqHandle<'a> + 'a>>;
|
||||
}
|
||||
|
||||
/// Wraps a raw message buffer to save on copying.
|
||||
/// Dereferences to the tail of the message buffer, cutting off the ID
|
||||
#[derive(Debug, Clone)]
|
||||
pub struct RawReply(Vec<u8>);
|
||||
impl Deref for RawReply {
|
||||
type Target = [u8];
|
||||
fn deref(&self) -> &Self::Target { get_id(&self.0[..]).1 }
|
||||
}
|
||||
|
||||
pub struct ReqNot<T: MsgSet>(Arc<Mutex<ReqNotData<T>>>, Logger);
|
||||
impl<T: MsgSet> ReqNot<T> {
|
||||
pub fn new(
|
||||
logger: Logger,
|
||||
send: impl SendFn<T>,
|
||||
notif: impl NotifFn<T>,
|
||||
req: impl ReqFn<T>,
|
||||
) -> Self {
|
||||
Self(
|
||||
Arc::new(Mutex::new(ReqNotData {
|
||||
id: 1,
|
||||
send: Box::new(send),
|
||||
notif: Box::new(notif),
|
||||
req: Box::new(req),
|
||||
responses: HashMap::new(),
|
||||
})),
|
||||
logger,
|
||||
)
|
||||
impl<'a, T: ReqReader<'a> + ?Sized> ReqReaderExt<'a> for T {}
|
||||
#[allow(async_fn_in_trait)]
|
||||
pub trait ReqReaderExt<'a>: ReqReader<'a> {
|
||||
async fn read_req<R: Decode>(&mut self) -> io::Result<R> { R::decode(self.reader()).await }
|
||||
async fn reply<R: Request>(
|
||||
self: Box<Self>,
|
||||
req: impl Evidence<R>,
|
||||
rep: &R::Response,
|
||||
) -> io::Result<Receipt<'a>> {
|
||||
self.finish().await.reply(req, rep).await
|
||||
}
|
||||
|
||||
/// Can be called from a polling thread or dispatched in any other way
|
||||
pub async fn receive(&self, message: &[u8]) {
|
||||
let mut g = self.0.lock().await;
|
||||
let (id, payload) = get_id(message);
|
||||
if id == 0 {
|
||||
let mut notif_cb = clone_box(&*g.notif);
|
||||
mem::drop(g);
|
||||
let notif_val = <T::In as Channel>::Notif::decode(Pin::new(&mut &payload[..])).await;
|
||||
notif_cb(notif_val, self.clone()).await
|
||||
} else if 0 < id.bitand(1 << 63) {
|
||||
let mut sender = g.responses.remove(&!id).expect("Received response for invalid message");
|
||||
sender.send(message.to_vec()).await.unwrap()
|
||||
} else {
|
||||
let message = <T::In as Channel>::Req::decode(Pin::new(&mut &payload[..])).await;
|
||||
let mut req_cb = clone_box(&*g.req);
|
||||
mem::drop(g);
|
||||
let rn = self.clone();
|
||||
req_cb(RequestHandle::new(rn, id), message).await;
|
||||
}
|
||||
}
|
||||
|
||||
pub async fn notify<N: Coding + Into<<T::Out as Channel>::Notif>>(&self, notif: N) {
|
||||
let mut send = clone_box(&*self.0.lock().await.send);
|
||||
let mut buf = vec![0; 8];
|
||||
let msg: <T::Out as Channel>::Notif = notif.into();
|
||||
msg.encode(Pin::new(&mut buf)).await;
|
||||
send(&buf, self.clone()).await
|
||||
async fn start_reply(self: Box<Self>) -> io::Result<Box<dyn RepWriter<'a> + 'a>> {
|
||||
self.finish().await.start_reply().await
|
||||
}
|
||||
}
|
||||
|
||||
pub trait DynRequester {
|
||||
type Transfer;
|
||||
fn logger(&self) -> &Logger;
|
||||
/// Encode and send a request, then receive the response buffer.
|
||||
fn raw_request(&self, data: Self::Transfer) -> LocalBoxFuture<'_, RawReply>;
|
||||
pub trait ReqHandle<'a> {
|
||||
fn start_reply(self: Box<Self>) -> LocalBoxFuture<'a, io::Result<Box<dyn RepWriter<'a> + 'a>>>;
|
||||
}
|
||||
|
||||
pub struct MappedRequester<'a, T: 'a>(Box<dyn Fn(T) -> LocalBoxFuture<'a, RawReply> + 'a>, Logger);
|
||||
impl<'a, T> MappedRequester<'a, T> {
|
||||
fn new<U: DynRequester + 'a, F: Fn(T) -> U::Transfer + 'a>(
|
||||
req: U,
|
||||
cb: F,
|
||||
logger: Logger,
|
||||
) -> Self {
|
||||
let req_arc = Arc::new(req);
|
||||
let cb_arc = Arc::new(cb);
|
||||
MappedRequester(
|
||||
Box::new(move |t| {
|
||||
Box::pin(clone!(req_arc, cb_arc; async move { req_arc.raw_request(cb_arc(t)).await}))
|
||||
}),
|
||||
logger,
|
||||
)
|
||||
impl<'a, T: ReqHandle<'a> + ?Sized> ReqHandleExt<'a> for T {}
|
||||
#[allow(async_fn_in_trait)]
|
||||
pub trait ReqHandleExt<'a>: ReqHandle<'a> {
|
||||
async fn reply<Req: Request>(
|
||||
self: Box<Self>,
|
||||
_: impl Evidence<Req>,
|
||||
rep: &Req::Response,
|
||||
) -> io::Result<Receipt<'a>> {
|
||||
let mut reply = self.start_reply().await?;
|
||||
rep.encode(reply.writer()).await?;
|
||||
reply.finish().await
|
||||
}
|
||||
}
|
||||
|
||||
impl<T> DynRequester for MappedRequester<'_, T> {
|
||||
type Transfer = T;
|
||||
fn logger(&self) -> &Logger { &self.1 }
|
||||
fn raw_request(&self, data: Self::Transfer) -> LocalBoxFuture<'_, RawReply> { self.0(data) }
|
||||
pub trait RepWriter<'a> {
|
||||
fn writer(&mut self) -> Pin<&mut dyn AsyncWrite>;
|
||||
fn finish(self: Box<Self>) -> LocalBoxFuture<'a, io::Result<Receipt<'a>>>;
|
||||
}
|
||||
|
||||
impl<T: MsgSet> DynRequester for ReqNot<T> {
|
||||
type Transfer = <T::Out as Channel>::Req;
|
||||
fn logger(&self) -> &Logger { &self.1 }
|
||||
fn raw_request(&self, req: Self::Transfer) -> LocalBoxFuture<'_, RawReply> {
|
||||
pub trait MsgReader<'a> {
|
||||
fn reader(&mut self) -> Pin<&mut dyn AsyncRead>;
|
||||
fn finish(self: Box<Self>) -> LocalBoxFuture<'a, ()>;
|
||||
}
|
||||
impl<'a, T: ?Sized + MsgReader<'a>> MsgReaderExt<'a> for T {}
|
||||
#[allow(async_fn_in_trait)]
|
||||
pub trait MsgReaderExt<'a>: MsgReader<'a> {
|
||||
async fn read<N: Decode>(mut self: Box<Self>) -> io::Result<N> {
|
||||
let n = N::decode(self.reader()).await;
|
||||
self.finish().await;
|
||||
n
|
||||
}
|
||||
}
|
||||
|
||||
/// A form of [Evidence] that doesn't require the value to be kept around
|
||||
pub struct Witness<T>(PhantomData<T>);
|
||||
impl<T> Witness<T> {
|
||||
pub fn of(_: &T) -> Self { Self(PhantomData) }
|
||||
}
|
||||
impl<T> Copy for Witness<T> {}
|
||||
impl<T> Clone for Witness<T> {
|
||||
fn clone(&self) -> Self { *self }
|
||||
}
|
||||
|
||||
/// A proxy for the type of a value either previously saved into a [Witness] or
|
||||
/// still available.
|
||||
pub trait Evidence<T> {}
|
||||
impl<T> Evidence<T> for &'_ T {}
|
||||
impl<T> Evidence<T> for Witness<T> {}
|
||||
|
||||
type IoRef<T> = Pin<Box<T>>;
|
||||
type IoLock<T> = Rc<Mutex<Pin<Box<T>>>>;
|
||||
type IoGuard<T> = Bound<MutexGuard<'static, Pin<Box<T>>>, IoLock<T>>;
|
||||
|
||||
/// An incoming request. This holds a lock on the ingress channel.
|
||||
pub struct IoReqReader<'a> {
|
||||
prefix: &'a [u8],
|
||||
read: IoGuard<dyn AsyncRead>,
|
||||
write: &'a Mutex<IoRef<dyn AsyncWrite>>,
|
||||
}
|
||||
impl<'a> ReqReader<'a> for IoReqReader<'a> {
|
||||
fn reader(&mut self) -> Pin<&mut dyn AsyncRead> { self.read.as_mut() }
|
||||
fn finish(self: Box<Self>) -> LocalBoxFuture<'a, Box<dyn ReqHandle<'a> + 'a>> {
|
||||
Box::pin(async {
|
||||
Box::new(IoReqHandle { prefix: self.prefix, write: self.write }) as Box<dyn ReqHandle<'a>>
|
||||
})
|
||||
}
|
||||
}
|
||||
pub struct IoReqHandle<'a> {
|
||||
prefix: &'a [u8],
|
||||
write: &'a Mutex<IoRef<dyn AsyncWrite>>,
|
||||
}
|
||||
impl<'a> ReqHandle<'a> for IoReqHandle<'a> {
|
||||
fn start_reply(self: Box<Self>) -> LocalBoxFuture<'a, io::Result<Box<dyn RepWriter<'a> + 'a>>> {
|
||||
Box::pin(async move {
|
||||
let mut g = self.0.lock().await;
|
||||
let id = g.id;
|
||||
g.id += 1;
|
||||
let mut buf = id.to_be_bytes().to_vec();
|
||||
req.encode(Pin::new(&mut buf)).await;
|
||||
let (send, mut recv) = mpsc::channel(1);
|
||||
g.responses.insert(id, send);
|
||||
let mut send = clone_box(&*g.send);
|
||||
mem::drop(g);
|
||||
let rn = self.clone();
|
||||
send(&buf, rn).await;
|
||||
let items = recv.next().await;
|
||||
RawReply(items.unwrap())
|
||||
let mut write = self.write.lock().await;
|
||||
write.as_mut().write_all(self.prefix).await?;
|
||||
Ok(Box::new(IoRepWriter { write }) as Box<dyn RepWriter<'a>>)
|
||||
})
|
||||
}
|
||||
}
|
||||
pub struct IoRepWriter<'a> {
|
||||
write: MutexGuard<'a, IoRef<dyn AsyncWrite>>,
|
||||
}
|
||||
impl<'a> RepWriter<'a> for IoRepWriter<'a> {
|
||||
fn writer(&mut self) -> Pin<&mut dyn AsyncWrite> { self.write.as_mut() }
|
||||
fn finish(mut self: Box<Self>) -> LocalBoxFuture<'a, io::Result<Receipt<'a>>> {
|
||||
Box::pin(async move {
|
||||
self.writer().flush().await?;
|
||||
Ok(Receipt(PhantomData))
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
pub trait Requester: DynRequester {
|
||||
#[must_use = "These types are subject to change with protocol versions. \
|
||||
If you don't want to use the return value, At a minimum, force the type."]
|
||||
fn request<R: Request + Into<Self::Transfer>>(
|
||||
&self,
|
||||
data: R,
|
||||
) -> impl Future<Output = R::Response>;
|
||||
fn map<'a, U>(self, cb: impl Fn(U) -> Self::Transfer + 'a) -> MappedRequester<'a, U>
|
||||
where Self: Sized + 'a {
|
||||
let logger = self.logger().clone();
|
||||
MappedRequester::new(self, cb, logger)
|
||||
pub struct IoMsgReader<'a> {
|
||||
_pd: PhantomData<&'a mut ()>,
|
||||
read: IoGuard<dyn AsyncRead>,
|
||||
}
|
||||
impl<'a> MsgReader<'a> for IoMsgReader<'a> {
|
||||
fn reader(&mut self) -> Pin<&mut dyn AsyncRead> { self.read.as_mut() }
|
||||
fn finish(self: Box<Self>) -> LocalBoxFuture<'static, ()> { Box::pin(async {}) }
|
||||
}
|
||||
|
||||
#[derive(Debug)]
|
||||
struct ReplySub {
|
||||
id: u64,
|
||||
ack: oneshot::Sender<()>,
|
||||
cb: oneshot::Sender<IoGuard<dyn AsyncRead>>,
|
||||
}
|
||||
|
||||
struct IoClient {
|
||||
output: IoLock<dyn AsyncWrite>,
|
||||
id: Rc<RefCell<u64>>,
|
||||
subscribe: Rc<Sender<ReplySub>>,
|
||||
}
|
||||
impl IoClient {
|
||||
fn new(output: IoLock<dyn AsyncWrite>) -> (Receiver<ReplySub>, Self) {
|
||||
let (req, rep) = mpsc::channel(0);
|
||||
(rep, Self { output, id: Rc::new(RefCell::new(0)), subscribe: Rc::new(req) })
|
||||
}
|
||||
async fn lock_out(&self) -> IoGuard<dyn AsyncWrite> {
|
||||
Bound::async_new(self.output.clone(), async |o| o.lock().await).await
|
||||
}
|
||||
}
|
||||
impl Client for IoClient {
|
||||
fn start_notif(&self) -> LocalBoxFuture<'_, io::Result<Box<dyn MsgWriter<'_> + '_>>> {
|
||||
Box::pin(async {
|
||||
let mut o = self.lock_out().await;
|
||||
0u64.encode(o.as_mut()).await?;
|
||||
Ok(Box::new(IoNotifWriter { o }) as Box<dyn MsgWriter>)
|
||||
})
|
||||
}
|
||||
fn start_request(&self) -> LocalBoxFuture<'_, io::Result<Box<dyn ReqWriter<'_> + '_>>> {
|
||||
Box::pin(async {
|
||||
let id = {
|
||||
let mut id_g = self.id.borrow_mut();
|
||||
*id_g += 1;
|
||||
*id_g
|
||||
};
|
||||
let (cb, reply) = oneshot::channel();
|
||||
let (ack, got_ack) = oneshot::channel();
|
||||
self.subscribe.as_ref().clone().send(ReplySub { id, ack, cb }).await.unwrap();
|
||||
got_ack.await.unwrap();
|
||||
let mut w = self.lock_out().await;
|
||||
id.encode(w.as_mut()).await?;
|
||||
Ok(Box::new(IoReqWriter { reply, w }) as Box<dyn ReqWriter>)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
impl<This: DynRequester + ?Sized> Requester for This {
|
||||
async fn request<R: Request + Into<Self::Transfer>>(&self, data: R) -> R::Response {
|
||||
let req = format!("{data:?}");
|
||||
let rep = R::Response::decode(Pin::new(&mut &self.raw_request(data.into()).await[..])).await;
|
||||
let req_str = req.to_string();
|
||||
if !req_str.starts_with("AtomPrint") && !req_str.starts_with("ExtAtomPrint") {
|
||||
writeln!(self.logger(), "Request {req} got response {rep:?}");
|
||||
struct IoReqWriter {
|
||||
reply: oneshot::Receiver<IoGuard<dyn AsyncRead>>,
|
||||
w: IoGuard<dyn AsyncWrite>,
|
||||
}
|
||||
impl<'a> ReqWriter<'a> for IoReqWriter {
|
||||
fn writer(&mut self) -> Pin<&mut dyn AsyncWrite> { self.w.as_mut() }
|
||||
fn send(self: Box<Self>) -> LocalBoxFuture<'a, io::Result<Box<dyn RepReader<'a> + 'a>>> {
|
||||
Box::pin(async {
|
||||
let Self { reply, mut w } = *self;
|
||||
w.flush().await?;
|
||||
mem::drop(w);
|
||||
let i = reply.await.expect("Client dropped before reply received");
|
||||
Ok(Box::new(IoRepReader { i }) as Box<dyn RepReader>)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
struct IoRepReader {
|
||||
i: IoGuard<dyn AsyncRead>,
|
||||
}
|
||||
impl<'a> RepReader<'a> for IoRepReader {
|
||||
fn reader(&mut self) -> Pin<&mut dyn AsyncRead> { self.i.as_mut() }
|
||||
fn finish(self: Box<Self>) -> LocalBoxFuture<'static, ()> { Box::pin(async {}) }
|
||||
}
|
||||
|
||||
#[derive(destructure)]
|
||||
struct IoNotifWriter {
|
||||
o: IoGuard<dyn AsyncWrite>,
|
||||
}
|
||||
impl<'a> MsgWriter<'a> for IoNotifWriter {
|
||||
fn writer(&mut self) -> Pin<&mut dyn AsyncWrite> { self.o.as_mut() }
|
||||
fn finish(mut self: Box<Self>) -> LocalBoxFuture<'static, io::Result<()>> {
|
||||
Box::pin(async move { self.o.flush().await })
|
||||
}
|
||||
}
|
||||
|
||||
pub struct CommCtx {
|
||||
exit: Sender<()>,
|
||||
}
|
||||
|
||||
impl CommCtx {
|
||||
pub async fn exit(self) { self.exit.clone().send(()).await.expect("quit channel dropped"); }
|
||||
}
|
||||
|
||||
/// Establish bidirectional request-notification communication over a duplex
|
||||
/// channel. The returned [IoClient] can be used for notifications immediately,
|
||||
/// but requests can only be received while the future is running. The future
|
||||
/// will only resolve when [CommCtx::exit] is called. The generic type
|
||||
/// parameters are associated with the client and serve to ensure with a runtime
|
||||
/// check that the correct message families are sent in the correct directions
|
||||
/// across the channel.
|
||||
pub fn io_comm(
|
||||
o: Rc<Mutex<Pin<Box<dyn AsyncWrite>>>>,
|
||||
i: Mutex<Pin<Box<dyn AsyncRead>>>,
|
||||
) -> (impl Client + 'static, CommCtx, IoCommServer) {
|
||||
let i = Rc::new(i);
|
||||
let (onsub, client) = IoClient::new(o.clone());
|
||||
let (exit, onexit) = channel(1);
|
||||
(client, CommCtx { exit }, IoCommServer { o, i, onsub, onexit })
|
||||
}
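// Minimal wiring sketch (not part of the diff), mirroring the tests below: each
// endpoint gets the write half of one pipe and the read half of the other, and
// `listen` must be polled concurrently with any client calls.
fn io_comm_example() {
  let (a_write, b_read) = unsync_pipe::pipe(1024);
  let (b_write, a_read) = unsync_pipe::pipe(1024);
  let (a_client, a_ctx, a_server) =
    io_comm(Rc::new(Mutex::new(Box::pin(a_write))), Mutex::new(Box::pin(a_read)));
  let (b_client, b_ctx, b_server) =
    io_comm(Rc::new(Mutex::new(Box::pin(b_write))), Mutex::new(Box::pin(b_read)));
  // Drive `a_server.listen(..)` / `b_server.listen(..)` with `join!` alongside the
  // client futures; call `a_ctx.exit().await` / `b_ctx.exit().await` to shut down.
  let _ = (a_client, a_ctx, a_server, b_client, b_ctx, b_server);
}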
|
||||
pub struct IoCommServer {
|
||||
o: Rc<Mutex<Pin<Box<dyn AsyncWrite>>>>,
|
||||
i: Rc<Mutex<Pin<Box<dyn AsyncRead>>>>,
|
||||
onsub: Receiver<ReplySub>,
|
||||
onexit: Receiver<()>,
|
||||
}
|
||||
impl IoCommServer {
|
||||
pub async fn listen(
|
||||
self,
|
||||
notif: impl for<'a> AsyncFn(Box<dyn MsgReader<'a> + 'a>) -> io::Result<()>,
|
||||
req: impl for<'a> AsyncFn(Box<dyn ReqReader<'a> + 'a>) -> io::Result<Receipt<'a>>,
|
||||
) -> io::Result<()> {
|
||||
let Self { o, i, onexit, onsub } = self;
|
||||
enum Event {
|
||||
Input(u64, IoGuard<dyn AsyncRead>),
|
||||
Sub(ReplySub),
|
||||
Exit,
|
||||
}
|
||||
rep
|
||||
let exiting = RefCell::new(false);
|
||||
let input_stream = try_stream(async |mut h| {
|
||||
loop {
|
||||
let mut g = Bound::async_new(i.clone(), async |i| i.lock().await).await;
|
||||
match u64::decode(g.as_mut()).await {
|
||||
Ok(id) => h.emit(Event::Input(id, g)).await,
|
||||
Err(e)
|
||||
if matches!(
|
||||
e.kind(),
|
||||
io::ErrorKind::BrokenPipe
|
||||
| io::ErrorKind::ConnectionAborted
|
||||
| io::ErrorKind::UnexpectedEof
|
||||
) =>
|
||||
h.emit(Event::Exit).await,
|
||||
Err(e) => return Err(e),
|
||||
}
|
||||
}
|
||||
});
|
||||
let (mut add_pending_req, fork_future) = LocalSet::new();
|
||||
let mut fork_stream = pin!(fork_future.fuse().into_stream());
|
||||
let mut pending_replies = HashMap::new();
|
||||
'body: {
|
||||
let mut shared = pin!(stream_select!(
|
||||
pin!(input_stream) as Pin<&mut dyn Stream<Item = io::Result<Event>>>,
|
||||
onsub.map(|sub| Ok(Event::Sub(sub))),
|
||||
fork_stream.as_mut().map(|res| {
|
||||
res.map(|()| panic!("this substream cannot exit while the loop is running"))
|
||||
}),
|
||||
onexit.map(|()| Ok(Event::Exit)),
|
||||
));
|
||||
while let Some(next) = shared.next().await {
|
||||
match next {
|
||||
Err(e) => break 'body Err(e),
|
||||
Ok(Event::Exit) => {
|
||||
*exiting.borrow_mut() = true;
|
||||
let mut out = o.lock().await;
|
||||
out.as_mut().flush().await?;
|
||||
out.as_mut().close().await?;
|
||||
break;
|
||||
},
|
||||
Ok(Event::Sub(ReplySub { id, ack, cb })) => {
|
||||
pending_replies.insert(id, cb);
|
||||
ack.send(()).unwrap();
|
||||
},
|
||||
Ok(Event::Input(0, read)) => {
|
||||
let notif = ¬if;
|
||||
let notif_job =
|
||||
async move { notif(Box::new(IoMsgReader { _pd: PhantomData, read })).await };
|
||||
add_pending_req.send(Box::pin(notif_job)).await.unwrap();
|
||||
},
|
||||
// MSB == 0 is a request, !id where MSB == 1 is the corresponding response
|
||||
Ok(Event::Input(id, read)) if (id & (1 << (u64::BITS - 1))) == 0 => {
|
||||
let (o, req) = (o.clone(), &req);
|
||||
let req_job = async move {
|
||||
let mut prefix = Vec::new();
|
||||
(!id).encode_vec(&mut prefix);
|
||||
let _ = req(Box::new(IoReqReader { prefix: &pin!(prefix), read, write: &o })).await;
|
||||
Ok(())
|
||||
};
|
||||
add_pending_req.send(Box::pin(req_job)).await.unwrap();
|
||||
},
|
||||
Ok(Event::Input(id, read)) => {
|
||||
let cb = pending_replies.remove(&!id).expect("Reply to unrecognized request");
|
||||
cb.send(read).unwrap_or_else(|_| panic!("Failed to send reply"));
|
||||
},
|
||||
}
|
||||
}
|
||||
Ok(())
|
||||
}?;
|
||||
mem::drop(add_pending_req);
|
||||
while let Some(next) = fork_stream.next().await {
|
||||
next?
|
||||
}
|
||||
Ok(())
|
||||
}
|
||||
}
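// Framing sketch (not part of the diff): every message on the wire starts with a u64
// id (written with the crate's `Encode` impl), followed by the payload bytes.
fn classify_frame(id: u64) -> &'static str {
  if id == 0 {
    "notification" // no reply expected
  } else if id & (1 << (u64::BITS - 1)) == 0 {
    "request" // the reply will be sent under `!id`
  } else {
    "reply" // matches the pending request whose id is `!id`
  }
}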
|
||||
|
||||
impl<T: MsgSet> Clone for ReqNot<T> {
|
||||
fn clone(&self) -> Self { Self(self.0.clone(), self.1.clone()) }
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
mod test {
|
||||
use std::cell::RefCell;
|
||||
use std::rc::Rc;
|
||||
use std::sync::Arc;
|
||||
|
||||
use futures::FutureExt;
|
||||
use futures::channel::mpsc;
|
||||
use futures::lock::Mutex;
|
||||
use orchid_api_derive::Coding;
|
||||
use orchid_api_traits::{Channel, Request};
|
||||
use futures::{SinkExt, StreamExt, join};
|
||||
use orchid_api_derive::{Coding, Hierarchy};
|
||||
use orchid_api_traits::Request;
|
||||
use test_executors::spin_on;
|
||||
use unsync_pipe::pipe;
|
||||
|
||||
use super::{MsgSet, ReqNot};
|
||||
use crate::logging::Logger;
|
||||
use crate::reqnot::Requester as _;
|
||||
use crate::{api, clone};
|
||||
use crate::logging::test::TestLogger;
|
||||
use crate::logging::with_logger;
|
||||
use crate::reqnot::{ClientExt, MsgReaderExt, ReqReaderExt, io_comm};
|
||||
|
||||
#[derive(Clone, Debug, Coding, PartialEq)]
|
||||
pub struct TestReq(u8);
|
||||
impl Request for TestReq {
|
||||
type Response = u8;
|
||||
}
|
||||
|
||||
pub struct TestChan;
|
||||
impl Channel for TestChan {
|
||||
type Notif = u8;
|
||||
type Req = TestReq;
|
||||
}
|
||||
|
||||
pub struct TestMsgSet;
|
||||
impl MsgSet for TestMsgSet {
|
||||
type In = TestChan;
|
||||
type Out = TestChan;
|
||||
}
|
||||
#[derive(Clone, Debug, PartialEq, Coding, Hierarchy)]
|
||||
#[extendable]
|
||||
struct TestNotif(u64);
|
||||
|
||||
#[test]
|
||||
fn notification() {
|
||||
spin_on(async {
|
||||
let logger = Logger::new(api::LogStrategy::StdErr);
|
||||
let received = Arc::new(Mutex::new(None));
|
||||
let receiver = ReqNot::<TestMsgSet>::new(
|
||||
logger.clone(),
|
||||
|_, _| panic!("Should not send anything"),
|
||||
clone!(received; move |notif, _| clone!(received; async move {
|
||||
*received.lock().await = Some(notif);
|
||||
}.boxed_local())),
|
||||
|_, _| panic!("Not receiving a request"),
|
||||
let logger = TestLogger::new(async |s| eprint!("{s}"));
|
||||
spin_on(with_logger(logger, async {
|
||||
let (in1, out2) = pipe(1024);
|
||||
let (in2, out1) = pipe(1024);
|
||||
let (received, mut on_receive) = mpsc::channel(2);
|
||||
let (_, recv_ctx, recv_srv) =
|
||||
io_comm(Rc::new(Mutex::new(Box::pin(in2))), Mutex::new(Box::pin(out2)));
|
||||
let (sender, ..) = io_comm(Rc::new(Mutex::new(Box::pin(in1))), Mutex::new(Box::pin(out1)));
|
||||
join!(
|
||||
async {
|
||||
recv_srv
|
||||
.listen(
|
||||
async |notif| {
|
||||
received.clone().send(notif.read::<TestNotif>().await?).await.unwrap();
|
||||
Ok(())
|
||||
},
|
||||
async |_| panic!("Should receive notif, not request"),
|
||||
)
|
||||
.await
|
||||
.unwrap()
|
||||
},
|
||||
async {
|
||||
sender.notify(TestNotif(3)).await.unwrap();
|
||||
assert_eq!(on_receive.next().await, Some(TestNotif(3)));
|
||||
sender.notify(TestNotif(4)).await.unwrap();
|
||||
assert_eq!(on_receive.next().await, Some(TestNotif(4)));
|
||||
recv_ctx.exit().await;
|
||||
}
|
||||
);
|
||||
let sender = ReqNot::<TestMsgSet>::new(
|
||||
logger,
|
||||
clone!(receiver; move |d, _| clone!(receiver; Box::pin(async move {
|
||||
receiver.receive(d).await
|
||||
}))),
|
||||
|_, _| panic!("Should not receive notif"),
|
||||
|_, _| panic!("Should not receive request"),
|
||||
);
|
||||
sender.notify(3).await;
|
||||
assert_eq!(*received.lock().await, Some(3));
|
||||
sender.notify(4).await;
|
||||
assert_eq!(*received.lock().await, Some(4));
|
||||
})
|
||||
}))
|
||||
}
|
||||
|
||||
#[derive(Clone, Debug, Coding, Hierarchy)]
|
||||
#[extendable]
|
||||
struct DummyRequest(u64);
|
||||
impl Request for DummyRequest {
|
||||
type Response = u64;
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn request() {
|
||||
spin_on(async {
|
||||
let logger = Logger::new(api::LogStrategy::StdErr);
|
||||
let receiver = Rc::new(Mutex::<Option<ReqNot<TestMsgSet>>>::new(None));
|
||||
let sender = Rc::new(ReqNot::<TestMsgSet>::new(
|
||||
logger.clone(),
|
||||
clone!(receiver; move |d, _| clone!(receiver; Box::pin(async move {
|
||||
receiver.lock().await.as_ref().unwrap().receive(d).await
|
||||
}))),
|
||||
|_, _| panic!("Should not receive notif"),
|
||||
|_, _| panic!("Should not receive request"),
|
||||
));
|
||||
*receiver.lock().await = Some(ReqNot::new(
|
||||
logger,
|
||||
clone!(sender; move |d, _| clone!(sender; Box::pin(async move {
|
||||
sender.receive(d).await
|
||||
}))),
|
||||
|_, _| panic!("Not receiving notifs"),
|
||||
|hand, req| {
|
||||
Box::pin(async move {
|
||||
assert_eq!(req, TestReq(5));
|
||||
hand.respond(&6u8).await
|
||||
})
|
||||
let logger = TestLogger::new(async |s| eprint!("{s}"));
|
||||
spin_on(with_logger(logger, async {
|
||||
let (in1, out2) = pipe(1024);
|
||||
let (in2, out1) = pipe(1024);
|
||||
let (_, srv_ctx, srv) =
|
||||
io_comm(Rc::new(Mutex::new(Box::pin(in2))), Mutex::new(Box::pin(out2)));
|
||||
let (client, client_ctx, client_srv) =
|
||||
io_comm(Rc::new(Mutex::new(Box::pin(in1))), Mutex::new(Box::pin(out1)));
|
||||
join!(
|
||||
async {
|
||||
srv
|
||||
.listen(
|
||||
async |_| panic!("No notifs expected"),
|
||||
async |mut req| {
|
||||
let val = req.read_req::<DummyRequest>().await?;
|
||||
req.reply(&val, &(val.0 + 1)).await
|
||||
},
|
||||
)
|
||||
.await
|
||||
.unwrap()
|
||||
},
|
||||
));
|
||||
let response = sender.request(TestReq(5)).await;
|
||||
assert_eq!(response, 6);
|
||||
})
|
||||
async {
|
||||
client_srv
|
||||
.listen(
|
||||
async |_| panic!("Not expecting ingress notif"),
|
||||
async |_| panic!("Not expecting ingress req"),
|
||||
)
|
||||
.await
|
||||
.unwrap()
|
||||
},
|
||||
async {
|
||||
let response = client.request(DummyRequest(5)).await.unwrap();
|
||||
assert_eq!(response, 6);
|
||||
srv_ctx.exit().await;
|
||||
client_ctx.exit().await;
|
||||
}
|
||||
);
|
||||
}))
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn exit() {
|
||||
let logger = TestLogger::new(async |s| eprint!("{s}"));
|
||||
spin_on(with_logger(logger, async {
|
||||
let (input1, output1) = pipe(1024);
|
||||
let (input2, output2) = pipe(1024);
|
||||
let (reply_client, reply_context, reply_server) =
|
||||
io_comm(Rc::new(Mutex::new(Box::pin(input1))), Mutex::new(Box::pin(output2)));
|
||||
let (req_client, req_context, req_server) =
|
||||
io_comm(Rc::new(Mutex::new(Box::pin(input2))), Mutex::new(Box::pin(output1)));
|
||||
let reply_context = RefCell::new(Some(reply_context));
|
||||
let (exit, onexit) = futures::channel::oneshot::channel::<()>();
|
||||
join!(
|
||||
async move {
|
||||
reply_server
|
||||
.listen(
|
||||
async |hand| {
|
||||
let _notif = hand.read::<TestNotif>().await.unwrap();
|
||||
let context = reply_context.borrow_mut().take().unwrap();
|
||||
context.exit().await;
|
||||
Ok(())
|
||||
},
|
||||
async |mut hand| {
|
||||
let req = hand.read_req::<DummyRequest>().await?;
|
||||
hand.reply(&req, &(req.0 + 1)).await
|
||||
},
|
||||
)
|
||||
.await
|
||||
.unwrap();
|
||||
exit.send(()).unwrap();
|
||||
let _client = reply_client;
|
||||
},
|
||||
async move {
|
||||
req_server
|
||||
.listen(
|
||||
async |_| panic!("Only the other server expected notifs"),
|
||||
async |_| panic!("Only the other server expected requests"),
|
||||
)
|
||||
.await
|
||||
.unwrap();
|
||||
let _ctx = req_context;
|
||||
},
|
||||
async move {
|
||||
req_client.request(DummyRequest(0)).await.unwrap();
|
||||
req_client.notify(TestNotif(0)).await.unwrap();
|
||||
onexit.await.unwrap();
|
||||
}
|
||||
)
|
||||
}));
|
||||
}
|
||||
}
|
||||
|
||||
44
orchid-base/src/stash.rs
Normal file
@@ -0,0 +1,44 @@
|
||||
//! A pattern for running async code from sync destructors and other
|
||||
//! unfortunately sync callbacks
|
||||
//!
|
||||
//! We create a task_local vecdeque which is moved into a thread_local whenever
|
||||
//! the task is being polled. A call to [stash] pushes the future onto this
|
||||
//! deque. Before [with_stash] returns, it pops everything from the deque
|
||||
//! individually and awaits each of them, pushing any additionally stashed
|
||||
//! futures onto the back of the same deque.
|
||||
|
||||
use std::cell::RefCell;
|
||||
use std::collections::VecDeque;
|
||||
use std::pin::Pin;
|
||||
|
||||
use task_local::task_local;
|
||||
|
||||
#[derive(Default)]
|
||||
struct StashedFutures {
|
||||
queue: RefCell<VecDeque<Pin<Box<dyn Future<Output = ()>>>>>,
|
||||
}
|
||||
|
||||
task_local! {
|
||||
static STASHED_FUTURES: StashedFutures;
|
||||
}
|
||||
|
||||
/// Complete the argument future, and any futures spawned from it via [stash].
|
||||
/// This is useful mostly to guarantee that messaging destructors have run.
|
||||
pub async fn with_stash<F: Future>(fut: F) -> F::Output {
|
||||
STASHED_FUTURES
|
||||
.scope(StashedFutures::default(), async {
|
||||
let val = fut.await;
|
||||
while let Some(fut) = STASHED_FUTURES.with(|sf| sf.queue.borrow_mut().pop_front()) {
|
||||
fut.await;
|
||||
}
|
||||
val
|
||||
})
|
||||
.await
|
||||
}
|
||||
|
||||
/// Schedule a future to be run before the next [with_stash] guard ends. This is
|
||||
/// most useful for sending messages from destructors.
|
||||
pub fn stash<F: Future<Output = ()> + 'static>(fut: F) {
|
||||
(STASHED_FUTURES.try_with(|sf| sf.queue.borrow_mut().push_back(Box::pin(fut))))
|
||||
.expect("No stash! Timely completion cannot be guaranteed")
|
||||
}
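// Usage sketch (not part of the new file): a synchronous `Drop` impl schedules async
// cleanup that is guaranteed to run before the enclosing `with_stash` call returns.
// `notify_peer` is a hypothetical async cleanup routine.
struct Connection(u64);
impl Drop for Connection {
  fn drop(&mut self) {
    let id = self.0;
    stash(async move { notify_peer(id).await });
  }
}
async fn session() {
  with_stash(async {
    let _conn = Connection(1);
    // using the connection; dropping it stashes the goodbye message
  })
  .await; // the stashed `notify_peer(1)` future has completed by this point
}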
|
||||
@@ -14,7 +14,7 @@ use trait_set::trait_set;
|
||||
|
||||
use crate::error::OrcErrv;
|
||||
use crate::format::{FmtCtx, FmtUnit, Format, Variants};
|
||||
use crate::interner::{Interner, Tok};
|
||||
use crate::interner::{IStr, es};
|
||||
use crate::location::{Pos, SrcRange};
|
||||
use crate::name::{Sym, VName, VPath};
|
||||
use crate::parse::Snippet;
|
||||
@@ -28,7 +28,6 @@ pub trait TokenVariant<ApiEquiv: Clone + Debug + Coding>: Format + Clone + fmt::
|
||||
api: &ApiEquiv,
|
||||
ctx: &mut Self::FromApiCtx<'_>,
|
||||
pos: SrcRange,
|
||||
i: &Interner,
|
||||
) -> impl Future<Output = Self>;
|
||||
#[must_use]
|
||||
fn into_api(self, ctx: &mut Self::ToApiCtx<'_>) -> impl Future<Output = ApiEquiv>;
|
||||
@@ -36,7 +35,7 @@ pub trait TokenVariant<ApiEquiv: Clone + Debug + Coding>: Format + Clone + fmt::
|
||||
impl<T: Clone + Debug + Coding> TokenVariant<T> for Never {
|
||||
type FromApiCtx<'a> = ();
|
||||
type ToApiCtx<'a> = ();
|
||||
async fn from_api(_: &T, _: &mut Self::FromApiCtx<'_>, _: SrcRange, _: &Interner) -> Self {
|
||||
async fn from_api(_: &T, _: &mut Self::FromApiCtx<'_>, _: SrcRange) -> Self {
|
||||
panic!("Cannot deserialize Never")
|
||||
}
|
||||
async fn into_api(self, _: &mut Self::ToApiCtx<'_>) -> T { match self {} }
|
||||
@@ -108,20 +107,19 @@ impl<H: ExprRepr, X: ExtraTok> TokTree<H, X> {
|
||||
hctx: &mut H::FromApiCtx<'_>,
|
||||
xctx: &mut X::FromApiCtx<'_>,
|
||||
src: &Sym,
|
||||
i: &Interner,
|
||||
) -> Self {
|
||||
let pos = SrcRange::new(tt.range.clone(), src);
|
||||
let tok = match_mapping!(&tt.token, api::Token => Token::<H, X> {
|
||||
BR,
|
||||
NS(n => Tok::from_api(*n, i).await,
|
||||
b => Box::new(Self::from_api(b, hctx, xctx, src, i).boxed_local().await)),
|
||||
Bottom(e => OrcErrv::from_api(e, i).await),
|
||||
LambdaHead(arg => Box::new(Self::from_api(arg, hctx, xctx, src, i).boxed_local().await)),
|
||||
Name(n => Tok::from_api(*n, i).await),
|
||||
S(*par, b => ttv_from_api(b, hctx, xctx, src, i).await),
|
||||
Comment(c.clone()),
|
||||
NewExpr(expr => X::from_api(expr, xctx, pos.clone(), i).await),
|
||||
Handle(tk => H::from_api(tk, hctx, pos.clone(), i).await)
|
||||
NS(n => es(*n).await,
|
||||
b => Box::new(Self::from_api(b, hctx, xctx, src).boxed_local().await)),
|
||||
Bottom(e => OrcErrv::from_api(e).await),
|
||||
LambdaHead(arg => Box::new(Self::from_api(arg, hctx, xctx, src).boxed_local().await)),
|
||||
Name(n => es(*n).await),
|
||||
S(*par, b => ttv_from_api(b, hctx, xctx, src).await),
|
||||
Comment(c => es(*c).await),
|
||||
NewExpr(expr => X::from_api(expr, xctx, pos.clone()).await),
|
||||
Handle(tk => H::from_api(tk, hctx, pos.clone()).await)
|
||||
});
|
||||
Self { sr: pos, tok }
|
||||
}
|
||||
@@ -135,7 +133,7 @@ impl<H: ExprRepr, X: ExtraTok> TokTree<H, X> {
|
||||
BR,
|
||||
NS(n.to_api(), b => Box::new(b.into_api(hctx, xctx).boxed_local().await)),
|
||||
Bottom(e.to_api()),
|
||||
Comment(c.clone()),
|
||||
Comment(c.to_api()),
|
||||
LambdaHead(arg => Box::new(arg.into_api(hctx, xctx).boxed_local().await)),
|
||||
Name(nn.to_api()),
|
||||
S(p, b => ttv_into_api(b, hctx, xctx).boxed_local().await),
|
||||
@@ -145,8 +143,8 @@ impl<H: ExprRepr, X: ExtraTok> TokTree<H, X> {
|
||||
api::TokenTree { range: self.sr.range.clone(), token }
|
||||
}
|
||||
|
||||
pub fn is_kw(&self, tk: Tok<String>) -> bool { self.tok.is_kw(tk) }
|
||||
pub fn as_name(&self) -> Option<Tok<String>> {
|
||||
pub fn is_kw(&self, tk: IStr) -> bool { self.tok.is_kw(tk) }
|
||||
pub fn as_name(&self) -> Option<IStr> {
|
||||
if let Token::Name(n) = &self.tok { Some(n.clone()) } else { None }
|
||||
}
|
||||
pub fn as_multiname(&self) -> Result<VName, &TokTree<H, X>> {
|
||||
@@ -193,11 +191,10 @@ pub async fn ttv_from_api<H: ExprRepr, X: ExtraTok>(
|
||||
hctx: &mut H::FromApiCtx<'_>,
|
||||
xctx: &mut X::FromApiCtx<'_>,
|
||||
src: &Sym,
|
||||
i: &Interner,
|
||||
) -> Vec<TokTree<H, X>> {
|
||||
stream(async |mut cx| {
|
||||
for tok in tokv {
|
||||
cx.emit(TokTree::<H, X>::from_api(tok.borrow(), hctx, xctx, src, i).boxed_local().await).await
|
||||
cx.emit(TokTree::<H, X>::from_api(tok.borrow(), hctx, xctx, src).boxed_local().await).await
|
||||
}
|
||||
})
|
||||
.collect()
|
||||
@@ -240,14 +237,14 @@ pub enum Token<H: ExprRepr, X: ExtraTok> {
|
||||
/// Information about the code addressed to the human reader or dev tooling
|
||||
/// It has no effect on the behaviour of the program unless it's explicitly
|
||||
/// read via reflection
|
||||
Comment(Rc<String>),
|
||||
Comment(IStr),
|
||||
/// The part of a lambda between `\` and `.` enclosing the argument. The body
|
||||
/// stretches to the end of the enclosing parens or the end of the const line
|
||||
LambdaHead(Box<TokTree<H, X>>),
|
||||
/// A binding, operator, or a segment of a namespaced::name
|
||||
Name(Tok<String>),
|
||||
Name(IStr),
|
||||
/// A namespace prefix, like `my_ns::` followed by a token
|
||||
NS(Tok<String>, Box<TokTree<H, X>>),
|
||||
NS(IStr, Box<TokTree<H, X>>),
|
||||
/// A line break
|
||||
BR,
|
||||
/// `()`, `[]`, or `{}`
|
||||
@@ -263,7 +260,7 @@ pub enum Token<H: ExprRepr, X: ExtraTok> {
|
||||
}
|
||||
impl<H: ExprRepr, X: ExtraTok> Token<H, X> {
|
||||
pub fn at(self, sr: SrcRange) -> TokTree<H, X> { TokTree { sr, tok: self } }
|
||||
pub fn is_kw(&self, tk: Tok<String>) -> bool { matches!(self, Token::Name(n) if *n == tk) }
|
||||
pub fn is_kw(&self, tk: IStr) -> bool { matches!(self, Token::Name(n) if *n == tk) }
|
||||
pub fn as_s(&self, par: Paren) -> Option<&[TokTree<H, X>]> {
|
||||
match self {
|
||||
Self::S(p, b) if *p == par => Some(b),
|
||||
@@ -307,7 +304,7 @@ pub async fn ttv_fmt<'a: 'b, 'b>(
|
||||
ttv: impl IntoIterator<Item = &'b TokTree<impl ExprRepr + 'a, impl ExtraTok + 'a>>,
|
||||
c: &(impl FmtCtx + ?Sized),
|
||||
) -> FmtUnit {
|
||||
FmtUnit::sequence(" ", None, join_all(ttv.into_iter().map(|t| t.print(c))).await)
|
||||
FmtUnit::sequence("", " ", "", None, join_all(ttv.into_iter().map(|t| t.print(c))).await)
|
||||
}
|
||||
|
||||
pub fn indent(s: &str) -> String { s.replace("\n", "\n ") }
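For reference, the token shapes documented above (names, `::` prefixes, lambda heads, parenthesised groups, line breaks) can be pictured with a stripped-down stand-in enum. `MiniToken` and plain `String` below are illustrative substitutes for the crate's `Token`, `TokTree` and interned `IStr`, not its actual types.

// Illustrative stand-ins only: the real TokTree/Token carry interned IStr
// names, source ranges and the H/X type parameters shown above.
#[allow(dead_code)]
#[derive(Debug)]
enum MiniToken {
  /// A binding, operator, or a segment of a namespaced::name
  Name(String),
  /// A namespace prefix like `my_ns::` applied to the following token
  NS(String, Box<MiniToken>),
  /// The `\x.` part of a lambda; the body is whatever follows it
  LambdaHead(Box<MiniToken>),
  /// A `()`, `[]` or `{}` group
  S(char, Vec<MiniToken>),
  /// A line break
  BR,
}

fn main() {
  // `\x. add x 1` groups roughly as: a lambda head over `x`, then three names.
  let line = vec![
    MiniToken::LambdaHead(Box::new(MiniToken::Name("x".into()))),
    MiniToken::Name("add".into()),
    MiniToken::Name("x".into()),
    MiniToken::Name("1".into()),
  ];
  println!("{line:?}");
}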
|
||||
|
||||
@@ -9,19 +9,19 @@ use crate::proj_error::{ErrorSansOrigin, ErrorSansOriginObj};
|
||||
/// as the file system. Cheap to clone.
|
||||
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
|
||||
pub enum Loaded {
|
||||
/// Conceptually equivalent to a sourcefile
|
||||
Code(Arc<String>),
|
||||
/// Conceptually equivalent to the list of *.orc files in a folder, without
|
||||
/// the extension
|
||||
Collection(Arc<Vec<Tok<String>>>),
|
||||
/// Conceptually equivalent to a sourcefile
|
||||
Code(Arc<String>),
|
||||
/// Conceptually equivalent to the list of *.orc files in a folder, without
|
||||
/// the extension
|
||||
Collection(Arc<Vec<IStr>>),
|
||||
}
|
||||
impl Loaded {
|
||||
/// Is the loaded item source code (not a collection)?
|
||||
pub fn is_code(&self) -> bool { matches!(self, Loaded::Code(_)) }
|
||||
/// Collect the elements into a collection
|
||||
pub fn collection(items: impl IntoIterator<Item = Tok<String>>) -> Self {
|
||||
Self::Collection(Arc::new(items.into_iter().collect()))
|
||||
}
|
||||
/// Is the loaded item source code (not a collection)?
|
||||
pub fn is_code(&self) -> bool { matches!(self, Loaded::Code(_)) }
|
||||
/// Collect the elements into a collection
|
||||
pub fn collection(items: impl IntoIterator<Item = IStr>) -> Self {
|
||||
Self::Collection(Arc::new(items.into_iter().collect()))
|
||||
}
|
||||
}
|
||||
|
||||
/// Returned by any source loading callback
|
||||
@@ -30,66 +30,62 @@ pub type FSResult = Result<Loaded, ErrorSansOriginObj>;
|
||||
/// Type that indicates the type of an entry without reading the contents
|
||||
#[derive(Clone, Copy, Debug, Hash, PartialEq, Eq)]
|
||||
pub enum FSKind {
|
||||
/// Invalid path or read error
|
||||
None,
|
||||
/// Source code
|
||||
Code,
|
||||
/// Internal tree node
|
||||
Collection,
|
||||
/// Invalid path or read error
|
||||
None,
|
||||
/// Source code
|
||||
Code,
|
||||
/// Internal tree node
|
||||
Collection,
|
||||
}
|
||||
|
||||
/// Distinguished error for missing code
|
||||
#[derive(Clone, PartialEq, Eq)]
|
||||
pub struct CodeNotFound(pub VPath);
|
||||
impl CodeNotFound {
|
||||
/// Instantiate error
|
||||
pub fn new(path: VPath) -> Self { Self(path) }
|
||||
/// Instantiate error
|
||||
pub fn new(path: VPath) -> Self { Self(path) }
|
||||
}
|
||||
impl ErrorSansOrigin for CodeNotFound {
|
||||
const DESCRIPTION: &'static str = "No source code for path";
|
||||
fn message(&self) -> String { format!("{} not found", self.0) }
|
||||
const DESCRIPTION: &'static str = "No source code for path";
|
||||
fn message(&self) -> String { format!("{} not found", self.0) }
|
||||
}
|
||||
|
||||
/// A simplified view of a file system for the purposes of source code loading.
|
||||
/// This includes the real FS and source code, but also various in-memory
|
||||
/// formats and other sources for libraries and dependencies.
|
||||
pub trait VirtFS {
|
||||
/// Implementation of [VirtFS::read]
|
||||
fn get(&self, path: &[Tok<String>], full_path: &PathSlice) -> FSResult;
|
||||
/// Discover information about a path without reading it.
|
||||
///
|
||||
/// Override this if your VFS backend can determine the kind more cheaply than a full read
|
||||
fn kind(&self, path: &PathSlice) -> FSKind {
|
||||
match self.read(path) {
|
||||
Err(_) => FSKind::None,
|
||||
Ok(Loaded::Code(_)) => FSKind::Code,
|
||||
Ok(Loaded::Collection(_)) => FSKind::Collection,
|
||||
}
|
||||
}
|
||||
/// Convert a path into a human-readable string that is meaningful in the
|
||||
/// target context.
|
||||
fn display(&self, path: &[Tok<String>]) -> Option<String>;
|
||||
/// Convert the FS handler into a type-erased version of itself for packing in
|
||||
/// a tree.
|
||||
fn rc(self) -> Rc<dyn VirtFS>
|
||||
where Self: Sized + 'static {
|
||||
Rc::new(self)
|
||||
}
|
||||
/// Read a path, returning either a text file, a directory listing or an
|
||||
/// error. Wrapper for [VirtFS::get]
|
||||
fn read(&self, path: &PathSlice) -> FSResult { self.get(path, path) }
|
||||
/// Implementation of [VirtFS::read]
|
||||
fn get(&self, path: &[IStr], full_path: &PathSlice) -> FSResult;
|
||||
/// Discover information about a path without reading it.
|
||||
///
|
||||
/// Override this if your VFS backend can determine the kind more cheaply than a full read
|
||||
fn kind(&self, path: &PathSlice) -> FSKind {
|
||||
match self.read(path) {
|
||||
Err(_) => FSKind::None,
|
||||
Ok(Loaded::Code(_)) => FSKind::Code,
|
||||
Ok(Loaded::Collection(_)) => FSKind::Collection,
|
||||
}
|
||||
}
|
||||
/// Convert a path into a human-readable string that is meaningful in the
|
||||
/// target context.
|
||||
fn display(&self, path: &[IStr]) -> Option<String>;
|
||||
/// Convert the FS handler into a type-erased version of itself for packing in
|
||||
/// a tree.
|
||||
fn rc(self) -> Rc<dyn VirtFS>
|
||||
where Self: Sized + 'static {
|
||||
Rc::new(self)
|
||||
}
|
||||
/// Read a path, returning either a text file, a directory listing or an
|
||||
/// error. Wrapper for [VirtFS::get]
|
||||
fn read(&self, path: &PathSlice) -> FSResult { self.get(path, path) }
|
||||
}
|
||||
|
||||
impl VirtFS for &dyn VirtFS {
|
||||
fn get(&self, path: &[Tok<String>], full_path: &PathSlice) -> FSResult {
|
||||
(*self).get(path, full_path)
|
||||
}
|
||||
fn display(&self, path: &[Tok<String>]) -> Option<String> { (*self).display(path) }
|
||||
fn get(&self, path: &[IStr], full_path: &PathSlice) -> FSResult { (*self).get(path, full_path) }
|
||||
fn display(&self, path: &[IStr]) -> Option<String> { (*self).display(path) }
|
||||
}
|
||||
|
||||
impl<T: VirtFS + ?Sized> VirtFS for Rc<T> {
|
||||
fn get(&self, path: &[Tok<String>], full_path: &PathSlice) -> FSResult {
|
||||
(**self).get(path, full_path)
|
||||
}
|
||||
fn display(&self, path: &[Tok<String>]) -> Option<String> { (**self).display(path) }
|
||||
fn get(&self, path: &[IStr], full_path: &PathSlice) -> FSResult { (**self).get(path, full_path) }
|
||||
fn display(&self, path: &[IStr]) -> Option<String> { (**self).display(path) }
|
||||
}
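The trait contract above amounts to: implement `get`, inherit `read` and `kind`, and supply `display` for diagnostics. A self-contained sketch of that shape over an in-memory map follows; `MiniVfs`, `MiniLoaded` and `&[String]` paths are simplified stand-ins for the crate's `VirtFS`, `Loaded`, `IStr` and `PathSlice`, not the real API.

use std::collections::HashMap;

// Simplified stand-ins for Loaded and the VirtFS trait above.
#[derive(Clone, Debug)]
enum MiniLoaded {
  Code(String),
  Collection(Vec<String>),
}

trait MiniVfs {
  /// Implementation hook, mirroring `VirtFS::get`.
  fn get(&self, path: &[String]) -> Option<MiniLoaded>;
  /// Human-readable path, mirroring `VirtFS::display`.
  fn display(&self, path: &[String]) -> Option<String>;
  /// Wrapper, mirroring `VirtFS::read`.
  fn read(&self, path: &[String]) -> Option<MiniLoaded> { self.get(path) }
}

/// In-memory tree: maps full paths to source strings.
struct MemFs(HashMap<Vec<String>, String>);

impl MiniVfs for MemFs {
  fn get(&self, path: &[String]) -> Option<MiniLoaded> {
    if let Some(src) = self.0.get(path) {
      return Some(MiniLoaded::Code(src.clone()));
    }
    // Otherwise list the next segment of every stored path under this prefix.
    let mut items: Vec<String> = self.0.keys()
      .filter(|k| k.starts_with(path) && k.len() > path.len())
      .map(|k| k[path.len()].clone())
      .collect();
    items.sort();
    items.dedup();
    (!items.is_empty()).then(|| MiniLoaded::Collection(items))
  }
  fn display(&self, path: &[String]) -> Option<String> { Some(path.join("::") + ".orc") }
}

fn main() {
  let fs = MemFs(HashMap::from([(vec!["main".to_string()], "export main := 1".to_string())]));
  println!("{:?}", fs.read(&[]));                   // Collection(["main"])
  println!("{:?}", fs.read(&["main".to_string()])); // Code("export main := 1")
}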
|
||||
|
||||
@@ -32,7 +32,7 @@ impl<'a> Combine for &'a dyn VirtFS {
|
||||
pub type DeclTree = ModEntry<Rc<dyn VirtFS>, (), ()>;
|
||||
|
||||
impl VirtFS for DeclTree {
|
||||
fn get(&self, path: &[Tok<String>], full_path: &PathSlice) -> FSResult {
|
||||
fn get(&self, path: &[IStr], full_path: &PathSlice) -> FSResult {
|
||||
match &self.member {
|
||||
ModMember::Item(it) => it.get(path, full_path),
|
||||
ModMember::Sub(module) => match path.split_first() {
|
||||
@@ -44,7 +44,7 @@ impl VirtFS for DeclTree {
|
||||
}
|
||||
}
|
||||
|
||||
fn display(&self, path: &[Tok<String>]) -> Option<String> {
|
||||
fn display(&self, path: &[IStr]) -> Option<String> {
|
||||
let (head, tail) = path.split_first()?;
|
||||
match &self.member {
|
||||
ModMember::Item(it) => it.display(path),
|
||||
@@ -54,16 +54,16 @@ impl VirtFS for DeclTree {
|
||||
}
|
||||
|
||||
impl VirtFS for String {
|
||||
fn display(&self, _: &[Tok<String>]) -> Option<String> { None }
|
||||
fn get(&self, path: &[Tok<String>], full_path: &PathSlice) -> FSResult {
|
||||
fn display(&self, _: &[IStr]) -> Option<String> { None }
|
||||
fn get(&self, path: &[IStr], full_path: &PathSlice) -> FSResult {
|
||||
(path.is_empty().then(|| Loaded::Code(Arc::new(self.as_str().to_string()))))
|
||||
.ok_or_else(|| CodeNotFound::new(full_path.to_vpath()).pack())
|
||||
}
|
||||
}
|
||||
|
||||
impl<'a> VirtFS for &'a str {
|
||||
fn display(&self, _: &[Tok<String>]) -> Option<String> { None }
|
||||
fn get(&self, path: &[Tok<String>], full_path: &PathSlice) -> FSResult {
|
||||
fn display(&self, _: &[IStr]) -> Option<String> { None }
|
||||
fn get(&self, path: &[IStr], full_path: &PathSlice) -> FSResult {
|
||||
(path.is_empty().then(|| Loaded::Code(Arc::new(self.to_string()))))
|
||||
.ok_or_else(|| CodeNotFound::new(full_path.to_vpath()).pack())
|
||||
}
|
||||
|
||||
@@ -99,14 +99,14 @@ impl DirNode {
|
||||
}
|
||||
}
|
||||
|
||||
fn mk_pathbuf(&self, path: &[Tok<String>]) -> PathBuf {
|
||||
fn mk_pathbuf(&self, path: &[IStr]) -> PathBuf {
|
||||
let mut fpath = self.root.clone();
|
||||
path.iter().for_each(|seg| fpath.push(seg.as_str()));
|
||||
fpath
|
||||
}
|
||||
}
|
||||
impl VirtFS for DirNode {
|
||||
fn get(&self, path: &[Tok<String>], full_path: &PathSlice) -> FSResult {
|
||||
fn get(&self, path: &[IStr], full_path: &PathSlice) -> FSResult {
|
||||
let fpath = self.mk_pathbuf(path);
|
||||
let mut binding = self.cached.borrow_mut();
|
||||
let (_, res) = (binding.raw_entry_mut().from_key(&fpath))
|
||||
@@ -114,7 +114,7 @@ impl VirtFS for DirNode {
|
||||
res.clone()
|
||||
}
|
||||
|
||||
fn display(&self, path: &[Tok<String>]) -> Option<String> {
|
||||
fn display(&self, path: &[IStr]) -> Option<String> {
|
||||
let pathbuf = self.mk_pathbuf(path).with_extension(self.ext());
|
||||
Some(pathbuf.to_string_lossy().to_string())
|
||||
}
|
||||
|
||||
@@ -56,7 +56,7 @@ impl EmbeddedFS {
|
||||
}
|
||||
|
||||
impl VirtFS for EmbeddedFS {
|
||||
fn get(&self, path: &[Tok<String>], full_path: &PathSlice) -> FSResult {
|
||||
fn get(&self, path: &[IStr], full_path: &PathSlice) -> FSResult {
|
||||
if path.is_empty() {
|
||||
return Ok(Loaded::collection(self.tree.keys(|_| true)));
|
||||
}
|
||||
@@ -67,7 +67,7 @@ impl VirtFS for EmbeddedFS {
|
||||
ModMember::Sub(sub) => Loaded::collection(sub.keys(|_| true)),
|
||||
})
|
||||
}
|
||||
fn display(&self, path: &[Tok<String>]) -> Option<String> {
|
||||
fn display(&self, path: &[IStr]) -> Option<String> {
|
||||
let Self { gen, suffix, .. } = self;
|
||||
Some(format!("{}{suffix} in {gen}", path.iter().join("/")))
|
||||
}
|
||||
|
||||
@@ -21,18 +21,18 @@ impl<'a> PrefixFS<'a> {
|
||||
add: VPath::parse(add.as_ref()),
|
||||
}
|
||||
}
|
||||
fn proc_path(&self, path: &[Tok<String>]) -> Option<Vec<Tok<String>>> {
|
||||
fn proc_path(&self, path: &[IStr]) -> Option<Vec<IStr>> {
|
||||
let path = path.strip_prefix(self.remove.as_slice())?;
|
||||
Some(self.add.0.iter().chain(path).cloned().collect_vec())
|
||||
}
|
||||
}
|
||||
impl<'a> VirtFS for PrefixFS<'a> {
|
||||
fn get(&self, path: &[Tok<String>], full_path: &PathSlice) -> super::FSResult {
|
||||
fn get(&self, path: &[IStr], full_path: &PathSlice) -> super::FSResult {
|
||||
let path =
|
||||
self.proc_path(path).ok_or_else(|| CodeNotFound::new(full_path.to_vpath()).pack())?;
|
||||
self.wrapped.get(&path, full_path)
|
||||
}
|
||||
fn display(&self, path: &[Tok<String>]) -> Option<String> {
|
||||
fn display(&self, path: &[IStr]) -> Option<String> {
|
||||
self.wrapped.display(&self.proc_path(path)?)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -7,18 +7,19 @@ edition = "2024"
|
||||
|
||||
[dependencies]
|
||||
async-fn-stream = { version = "0.1.0", path = "../async-fn-stream" }
|
||||
async-lock = "3.4.1"
|
||||
async-once-cell = "0.5.4"
|
||||
bound = "0.6.0"
|
||||
derive_destructure = "1.0.0"
|
||||
dyn-clone = "1.0.20"
|
||||
futures = { version = "0.3.31", features = [
|
||||
"std",
|
||||
"async-await",
|
||||
], default-features = false }
|
||||
hashbrown = "0.16.0"
|
||||
futures = { version = "0.3.31", default-features = false, features = [
|
||||
"std",
|
||||
"async-await",
|
||||
] }
|
||||
futures-locks = "0.7.1"
|
||||
hashbrown = "0.16.1"
|
||||
include_dir = { version = "0.7.4", optional = true }
|
||||
itertools = "0.14.0"
|
||||
konst = "0.4.1"
|
||||
konst = "0.4.3"
|
||||
lazy_static = "1.5.0"
|
||||
memo-map = "0.3.3"
|
||||
never = "0.1.0"
|
||||
@@ -27,12 +28,12 @@ orchid-api = { version = "0.1.0", path = "../orchid-api" }
|
||||
orchid-api-derive = { version = "0.1.0", path = "../orchid-api-derive" }
|
||||
orchid-api-traits = { version = "0.1.0", path = "../orchid-api-traits" }
|
||||
orchid-base = { version = "0.1.0", path = "../orchid-base" }
|
||||
ordered-float = "5.0.0"
|
||||
pastey = "0.1.1"
|
||||
some_executor = "0.6.1"
|
||||
ordered-float = "5.1.0"
|
||||
pastey = "0.2.1"
|
||||
substack = "1.1.1"
|
||||
tokio = { version = "1.47.1", optional = true, features = [] }
|
||||
tokio-util = { version = "0.7.16", optional = true, features = ["compat"] }
|
||||
task-local = "0.1.0"
|
||||
tokio = { version = "1.49.0", optional = true, features = [] }
|
||||
tokio-util = { version = "0.7.17", optional = true, features = ["compat"] }
|
||||
|
||||
trait-set = "0.3.0"
|
||||
|
||||
|
||||
@@ -12,23 +12,22 @@ use futures::future::LocalBoxFuture;
|
||||
use futures::{AsyncRead, AsyncWrite, FutureExt, StreamExt, stream};
|
||||
use orchid_api_derive::Coding;
|
||||
use orchid_api_traits::{Coding, Decode, Encode, Request, enc_vec};
|
||||
use orchid_base::clone;
|
||||
use orchid_base::error::{OrcErrv, OrcRes, mk_errv, mk_errv_floating};
|
||||
use orchid_base::format::{FmtCtx, FmtUnit, Format};
|
||||
use orchid_base::interner::Interner;
|
||||
use orchid_base::format::{FmtCtx, FmtUnit, Format, fmt};
|
||||
use orchid_base::interner::is;
|
||||
use orchid_base::location::Pos;
|
||||
use orchid_base::name::Sym;
|
||||
use orchid_base::reqnot::Requester;
|
||||
use trait_set::trait_set;
|
||||
|
||||
use crate::api;
|
||||
use crate::conv::ToExpr;
|
||||
use crate::entrypoint::request;
|
||||
// use crate::error::{ProjectError, ProjectResult};
|
||||
use crate::expr::{Expr, ExprData, ExprHandle, ExprKind};
|
||||
use crate::gen_expr::GExpr;
|
||||
use crate::system::{DynSystemCard, SysCtx, atom_info_for, downcast_atom};
|
||||
use crate::system::{DynSystemCard, atom_by_idx, atom_info_for, cted, downcast_atom};
|
||||
|
||||
#[derive(Clone, Debug, Hash, PartialEq, Eq, PartialOrd, Ord, Coding)]
|
||||
#[derive(Copy, Clone, Debug, Hash, PartialEq, Eq, PartialOrd, Ord, Coding)]
|
||||
pub struct AtomTypeId(pub NonZeroU32);
|
||||
|
||||
pub trait AtomCard: 'static + Sized {
|
||||
@@ -91,26 +90,25 @@ pub struct ForeignAtom {
|
||||
}
|
||||
impl ForeignAtom {
|
||||
pub fn pos(&self) -> Pos { self.pos.clone() }
|
||||
pub fn ctx(&self) -> &SysCtx { &self.expr.ctx }
|
||||
pub fn ex(self) -> Expr {
|
||||
let (handle, pos) = (self.expr.clone(), self.pos.clone());
|
||||
let data = ExprData { pos, kind: ExprKind::Atom(ForeignAtom { ..self }) };
|
||||
Expr::new(handle, data)
|
||||
Expr::from_data(handle, data)
|
||||
}
|
||||
pub(crate) fn new(handle: Rc<ExprHandle>, atom: api::Atom, pos: Pos) -> Self {
|
||||
ForeignAtom { atom, expr: handle, pos }
|
||||
}
|
||||
pub async fn request<M: AtomMethod>(&self, m: M) -> Option<M::Response> {
|
||||
let rep = (self.ctx().reqnot().request(api::Fwd(
|
||||
let rep = (request(api::Fwd(
|
||||
self.atom.clone(),
|
||||
Sym::parse(M::NAME, self.ctx().i()).await.unwrap().tok().to_api(),
|
||||
enc_vec(&m).await,
|
||||
Sym::parse(M::NAME).await.unwrap().tok().to_api(),
|
||||
enc_vec(&m),
|
||||
)))
|
||||
.await?;
|
||||
Some(M::Response::decode(Pin::new(&mut &rep[..])).await)
|
||||
Some(M::Response::decode_slice(&mut &rep[..]))
|
||||
}
|
||||
pub async fn downcast<T: AtomicFeatures>(self) -> Result<TypAtom<T>, NotTypAtom> {
|
||||
TypAtom::downcast(self.ex().handle()).await
|
||||
pub async fn downcast<T: AtomicFeatures>(self) -> Result<TAtom<T>, NotTypAtom> {
|
||||
TAtom::downcast(self.ex().handle()).await
|
||||
}
|
||||
}
|
||||
impl fmt::Display for ForeignAtom {
|
||||
@@ -121,40 +119,42 @@ impl fmt::Debug for ForeignAtom {
|
||||
}
|
||||
impl Format for ForeignAtom {
|
||||
async fn print<'a>(&'a self, _c: &'a (impl FmtCtx + ?Sized + 'a)) -> FmtUnit {
|
||||
FmtUnit::from_api(&self.ctx().reqnot().request(api::ExtAtomPrint(self.atom.clone())).await)
|
||||
FmtUnit::from_api(&request(api::ExtAtomPrint(self.atom.clone())).await)
|
||||
}
|
||||
}
|
||||
impl ToExpr for ForeignAtom {
|
||||
async fn to_expr(self) -> GExpr { self.ex().to_expr().await }
|
||||
async fn to_expr(self) -> Expr
|
||||
where Self: Sized {
|
||||
self.ex()
|
||||
}
|
||||
async fn to_gen(self) -> GExpr { self.ex().to_gen().await }
|
||||
}
|
||||
|
||||
pub struct NotTypAtom {
|
||||
pub pos: Pos,
|
||||
pub expr: Expr,
|
||||
pub typ: Box<dyn AtomDynfo>,
|
||||
pub ctx: SysCtx,
|
||||
}
|
||||
impl NotTypAtom {
|
||||
pub async fn mk_err(&self) -> OrcErrv {
|
||||
mk_errv(
|
||||
self.ctx.i().i("Not the expected type").await,
|
||||
format!("This expression is not a {}", self.typ.name()),
|
||||
is("Not the expected type").await,
|
||||
format!("The expression {} is not a {}", fmt(&self.expr).await, self.typ.name()),
|
||||
[self.pos.clone()],
|
||||
)
|
||||
}
|
||||
}
|
||||
|
||||
pub trait AtomMethod: Request {
|
||||
pub trait AtomMethod: Request + Coding {
|
||||
const NAME: &str;
|
||||
}
|
||||
pub trait Supports<M: AtomMethod>: AtomCard {
|
||||
fn handle(&self, ctx: SysCtx, req: M) -> impl Future<Output = <M as Request>::Response>;
|
||||
fn handle(&self, req: M) -> impl Future<Output = <M as Request>::Response>;
|
||||
}
|
||||
|
||||
trait_set! {
|
||||
trait AtomReqCb<A> = for<'a> Fn(
|
||||
&'a A,
|
||||
SysCtx,
|
||||
Pin<&'a mut dyn AsyncRead>,
|
||||
Pin<&'a mut dyn AsyncWrite>,
|
||||
) -> LocalBoxFuture<'a, ()>
|
||||
@@ -171,24 +171,20 @@ impl<A: AtomCard> MethodSetBuilder<A> {
|
||||
assert!(!M::NAME.is_empty(), "AtomMethod::NAME cannot be empty");
|
||||
self.handlers.push((
|
||||
M::NAME,
|
||||
Rc::new(
|
||||
move |a: &A, ctx: SysCtx, req: Pin<&mut dyn AsyncRead>, rep: Pin<&mut dyn AsyncWrite>| {
|
||||
async { Supports::<M>::handle(a, ctx, M::decode(req).await).await.encode(rep).await }
|
||||
.boxed_local()
|
||||
},
|
||||
),
|
||||
Rc::new(move |a: &A, req: Pin<&mut dyn AsyncRead>, rep: Pin<&mut dyn AsyncWrite>| {
|
||||
async {
|
||||
Supports::<M>::handle(a, M::decode(req).await.unwrap()).await.encode(rep).await.unwrap()
|
||||
}
|
||||
.boxed_local()
|
||||
}),
|
||||
));
|
||||
self
|
||||
}
|
||||
|
||||
pub async fn pack(&self, ctx: SysCtx) -> MethodSet<A> {
|
||||
pub async fn pack(&self) -> MethodSet<A> {
|
||||
MethodSet {
|
||||
handlers: stream::iter(self.handlers.iter())
|
||||
.then(|(k, v)| {
|
||||
clone!(ctx; async move {
|
||||
(Sym::parse(k, ctx.i()).await.unwrap(), v.clone())
|
||||
})
|
||||
})
|
||||
.then(async |(k, v)| (Sym::parse(k).await.unwrap(), v.clone()))
|
||||
.collect()
|
||||
.await,
|
||||
}
|
||||
@@ -202,7 +198,6 @@ impl<A: AtomCard> MethodSet<A> {
|
||||
pub(crate) async fn dispatch<'a>(
|
||||
&'a self,
|
||||
atom: &'a A,
|
||||
ctx: SysCtx,
|
||||
key: Sym,
|
||||
req: Pin<&'a mut dyn AsyncRead>,
|
||||
rep: Pin<&'a mut dyn AsyncWrite>,
|
||||
@@ -210,7 +205,7 @@ impl<A: AtomCard> MethodSet<A> {
|
||||
match self.handlers.get(&key) {
|
||||
None => false,
|
||||
Some(handler) => {
|
||||
handler(atom, ctx, req, rep).await;
|
||||
handler(atom, req, rep).await;
|
||||
true
|
||||
},
|
||||
}
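`dispatch` above is name-keyed dispatch: look the handler up by its interned method name, let it decode the request bytes and write the encoded reply, and report whether the name was known. A synchronous sketch of the same idea with plain byte buffers and `String` keys; the map and closure types here are illustrative, not the crate's own.

use std::collections::HashMap;

// Each handler consumes the encoded request and returns the encoded reply.
type Handler = Box<dyn Fn(&[u8]) -> Vec<u8>>;

struct MiniMethodSet {
  handlers: HashMap<String, Handler>,
}

impl MiniMethodSet {
  /// Mirrors `MethodSet::dispatch`: false if the method name is unknown.
  fn dispatch(&self, key: &str, req: &[u8], rep: &mut Vec<u8>) -> bool {
    match self.handlers.get(key) {
      None => false,
      Some(handler) => {
        rep.extend_from_slice(&handler(req));
        true
      },
    }
  }
}

fn main() {
  let mut handlers: HashMap<String, Handler> = HashMap::new();
  // Hypothetical method name: decodes a UTF-8 string, replies with its length.
  handlers.insert("std::string::len".to_string(), Box::new(|req: &[u8]| {
    (String::from_utf8_lossy(req).len() as u64).to_le_bytes().to_vec()
  }));
  let set = MiniMethodSet { handlers };
  let mut rep = Vec::new();
  assert!(set.dispatch("std::string::len", b"hello", &mut rep));
  assert_eq!(rep, 5u64.to_le_bytes());
}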
|
||||
@@ -222,58 +217,50 @@ impl<A: AtomCard> Default for MethodSetBuilder<A> {
|
||||
}
|
||||
|
||||
#[derive(Clone)]
|
||||
pub struct TypAtom<A: AtomicFeatures> {
|
||||
pub struct TAtom<A: AtomicFeatures> {
|
||||
pub untyped: ForeignAtom,
|
||||
pub value: A::Data,
|
||||
}
|
||||
impl<A: AtomicFeatures> TypAtom<A> {
|
||||
pub fn ctx(&self) -> &SysCtx { self.untyped.ctx() }
|
||||
pub fn i(&self) -> &Interner { self.ctx().i() }
|
||||
impl<A: AtomicFeatures> TAtom<A> {
|
||||
pub fn ex(&self) -> Expr { self.untyped.clone().ex() }
|
||||
pub fn pos(&self) -> Pos { self.untyped.pos() }
|
||||
pub async fn downcast(expr: Rc<ExprHandle>) -> Result<Self, NotTypAtom> {
|
||||
match Expr::from_handle(expr).atom().await {
|
||||
Err(expr) => Err(NotTypAtom {
|
||||
ctx: expr.handle().get_ctx(),
|
||||
pos: expr.data().await.pos.clone(),
|
||||
expr,
|
||||
typ: Box::new(A::info()),
|
||||
}),
|
||||
Err(expr) =>
|
||||
Err(NotTypAtom { pos: expr.data().await.pos.clone(), expr, typ: Box::new(A::info()) }),
|
||||
Ok(atm) => match downcast_atom::<A>(atm).await {
|
||||
Ok(tatom) => Ok(tatom),
|
||||
Err(fa) => Err(NotTypAtom {
|
||||
pos: fa.pos.clone(),
|
||||
ctx: fa.ctx().clone(),
|
||||
expr: fa.ex(),
|
||||
typ: Box::new(A::info()),
|
||||
}),
|
||||
Err(fa) => Err(NotTypAtom { pos: fa.pos.clone(), expr: fa.ex(), typ: Box::new(A::info()) }),
|
||||
},
|
||||
}
|
||||
}
|
||||
pub async fn request<M: AtomMethod>(&self, req: M) -> M::Response
|
||||
where A: Supports<M> {
|
||||
M::Response::decode(Pin::new(
|
||||
&mut &(self.untyped.ctx().reqnot().request(api::Fwd(
|
||||
M::Response::decode_slice(
|
||||
&mut &(request(api::Fwd(
|
||||
self.untyped.atom.clone(),
|
||||
Sym::parse(M::NAME, self.untyped.ctx().i()).await.unwrap().tok().to_api(),
|
||||
enc_vec(&req).await,
|
||||
Sym::parse(M::NAME).await.unwrap().tok().to_api(),
|
||||
enc_vec(&req),
|
||||
)))
|
||||
.await
|
||||
.unwrap()[..],
|
||||
))
|
||||
.await
|
||||
)
|
||||
}
|
||||
}
|
||||
impl<A: AtomicFeatures> Deref for TypAtom<A> {
|
||||
impl<A: AtomicFeatures> Deref for TAtom<A> {
|
||||
type Target = A::Data;
|
||||
fn deref(&self) -> &Self::Target { &self.value }
|
||||
}
|
||||
impl<A: AtomicFeatures> ToExpr for TypAtom<A> {
|
||||
async fn to_expr(self) -> GExpr { self.untyped.to_expr().await }
|
||||
impl<A: AtomicFeatures> ToExpr for TAtom<A> {
|
||||
async fn to_gen(self) -> GExpr { self.untyped.to_gen().await }
|
||||
}
|
||||
impl<A: AtomicFeatures> Format for TAtom<A> {
|
||||
async fn print<'a>(&'a self, c: &'a (impl FmtCtx + ?Sized + 'a)) -> FmtUnit {
|
||||
self.untyped.print(c).await
|
||||
}
|
||||
}
|
||||
|
||||
pub struct AtomCtx<'a>(pub &'a [u8], pub Option<api::AtomId>, pub SysCtx);
|
||||
impl FmtCtx for AtomCtx<'_> {
|
||||
fn i(&self) -> &Interner { self.2.i() }
|
||||
}
|
||||
pub struct AtomCtx<'a>(pub &'a [u8], pub Option<api::AtomId>);
|
||||
|
||||
pub trait AtomDynfo: 'static {
|
||||
fn tid(&self) -> TypeId;
|
||||
@@ -295,24 +282,19 @@ pub trait AtomDynfo: 'static {
|
||||
ctx: AtomCtx<'a>,
|
||||
write: Pin<&'b mut dyn AsyncWrite>,
|
||||
) -> LocalBoxFuture<'a, Option<Vec<Expr>>>;
|
||||
fn deserialize<'a>(
|
||||
&'a self,
|
||||
ctx: SysCtx,
|
||||
data: &'a [u8],
|
||||
refs: &'a [Expr],
|
||||
) -> LocalBoxFuture<'a, api::Atom>;
|
||||
fn deserialize<'a>(&'a self, data: &'a [u8], refs: &'a [Expr]) -> LocalBoxFuture<'a, api::Atom>;
|
||||
fn drop<'a>(&'a self, ctx: AtomCtx<'a>) -> LocalBoxFuture<'a, ()>;
|
||||
}
|
||||
|
||||
trait_set! {
|
||||
pub trait AtomFactoryFn = FnOnce(SysCtx) -> LocalBoxFuture<'static, api::Atom> + DynClone;
|
||||
pub trait AtomFactoryFn = FnOnce() -> LocalBoxFuture<'static, api::Atom> + DynClone;
|
||||
}
|
||||
pub struct AtomFactory(Box<dyn AtomFactoryFn>);
|
||||
impl AtomFactory {
|
||||
pub fn new(f: impl AsyncFnOnce(SysCtx) -> api::Atom + Clone + 'static) -> Self {
|
||||
Self(Box::new(|ctx| f(ctx).boxed_local()))
|
||||
pub fn new(f: impl AsyncFnOnce() -> api::Atom + Clone + 'static) -> Self {
|
||||
Self(Box::new(|| f().boxed_local()))
|
||||
}
|
||||
pub async fn build(self, ctx: SysCtx) -> api::Atom { (self.0)(ctx).await }
|
||||
pub async fn build(self) -> api::Atom { (self.0)().await }
|
||||
}
|
||||
impl Clone for AtomFactory {
|
||||
fn clone(&self) -> Self { AtomFactory(clone_box(&*self.0)) }
|
||||
@@ -329,10 +311,19 @@ impl Format for AtomFactory {
|
||||
}
|
||||
}
|
||||
|
||||
pub async fn err_not_callable(i: &Interner) -> OrcErrv {
|
||||
mk_errv_floating(i.i("This atom is not callable").await, "Attempted to apply value as function")
|
||||
pub async fn err_not_callable() -> OrcErrv {
|
||||
mk_errv_floating(is("This atom is not callable").await, "Attempted to apply value as function")
|
||||
}
|
||||
|
||||
pub async fn err_not_command(i: &Interner) -> OrcErrv {
|
||||
mk_errv_floating(i.i("This atom is not a command").await, "Settled on an inactionable value")
|
||||
pub async fn err_not_command() -> OrcErrv {
|
||||
mk_errv_floating(is("This atom is not a command").await, "Settled on an inactionable value")
|
||||
}
|
||||
|
||||
/// Read the type ID prefix from an atom, return type information and the rest
|
||||
/// of the data
|
||||
pub(crate) fn resolve_atom_type(atom: &api::Atom) -> (Box<dyn AtomDynfo>, AtomTypeId, &[u8]) {
|
||||
let mut data = &atom.data.0[..];
|
||||
let tid = AtomTypeId::decode_slice(&mut data);
|
||||
let atom_record = atom_by_idx(cted().inst().card(), tid).expect("Unrecognized atom type ID");
|
||||
(atom_record, tid, data)
|
||||
}
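`resolve_atom_type` assumes every atom buffer starts with an encoded `AtomTypeId` followed by the concrete atom's own payload. A minimal sketch of that framing with a plain little-endian `u32` prefix; the real crate encodes the id through its `Coding` machinery, so the byte layout here is only illustrative.

// Prefix a payload with a u32 type id, then split it back apart.
fn encode_atom(type_id: u32, payload: &[u8]) -> Vec<u8> {
  let mut buf = type_id.to_le_bytes().to_vec();
  buf.extend_from_slice(payload);
  buf
}

/// Returns (type id, remaining payload), mirroring `resolve_atom_type`.
fn split_atom(data: &[u8]) -> (u32, &[u8]) {
  let (head, rest) = data.split_at(4);
  (u32::from_le_bytes(head.try_into().unwrap()), rest)
}

fn main() {
  let atom = encode_atom(7, b"payload");
  let (tid, rest) = split_atom(&atom);
  assert_eq!((tid, rest), (7, &b"payload"[..]));
}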
|
||||
|
||||
@@ -1,49 +1,53 @@
|
||||
use std::any::{Any, TypeId, type_name};
|
||||
use std::borrow::Cow;
|
||||
use std::cell::RefCell;
|
||||
use std::future::Future;
|
||||
use std::marker::PhantomData;
|
||||
use std::num::NonZero;
|
||||
use std::ops::Deref;
|
||||
use std::pin::Pin;
|
||||
use std::sync::atomic::AtomicU64;
|
||||
use std::rc::Rc;
|
||||
|
||||
use async_lock::{RwLock, RwLockReadGuard};
|
||||
use async_once_cell::OnceCell;
|
||||
use dyn_clone::{DynClone, clone_box};
|
||||
use futures::future::{LocalBoxFuture, ready};
|
||||
use futures::{AsyncRead, AsyncWrite, FutureExt};
|
||||
use futures_locks::{RwLock, RwLockReadGuard};
|
||||
use itertools::Itertools;
|
||||
use memo_map::MemoMap;
|
||||
use never::Never;
|
||||
use orchid_api_traits::{Decode, Encode, enc_vec};
|
||||
use orchid_base::error::OrcRes;
|
||||
use orchid_base::format::{FmtCtx, FmtCtxImpl, FmtUnit, take_first};
|
||||
use orchid_base::logging::log;
|
||||
use orchid_base::name::Sym;
|
||||
use task_local::task_local;
|
||||
|
||||
use crate::api;
|
||||
use crate::atom::{
|
||||
AtomCard, AtomCtx, AtomDynfo, AtomFactory, Atomic, AtomicFeaturesImpl, AtomicVariant, MethodSet,
|
||||
MethodSetBuilder, TypAtom, err_not_callable, err_not_command, get_info,
|
||||
MethodSetBuilder, TAtom, err_not_callable, err_not_command, get_info,
|
||||
};
|
||||
use crate::expr::Expr;
|
||||
use crate::gen_expr::{GExpr, bot};
|
||||
use crate::system::{SysCtx, SysCtxEntry};
|
||||
use crate::system_ctor::CtedObj;
|
||||
use crate::system::{cted, sys_id};
|
||||
|
||||
pub struct OwnedVariant;
|
||||
impl AtomicVariant for OwnedVariant {}
|
||||
impl<A: OwnedAtom + Atomic<Variant = OwnedVariant>> AtomicFeaturesImpl<OwnedVariant> for A {
|
||||
fn _factory(self) -> AtomFactory {
|
||||
AtomFactory::new(async move |ctx| {
|
||||
let serial =
|
||||
ctx.get_or_default::<ObjStore>().next_id.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
|
||||
let atom_id = api::AtomId(NonZero::new(serial + 1).unwrap());
|
||||
let (typ_id, _) = get_info::<A>(ctx.get::<CtedObj>().inst().card());
|
||||
let mut data = enc_vec(&typ_id).await;
|
||||
AtomFactory::new(async move || {
|
||||
let obj_store = get_obj_store();
|
||||
let atom_id = {
|
||||
let mut id = obj_store.next_id.borrow_mut();
|
||||
*id += 1;
|
||||
api::AtomId(NonZero::new(*id + 1).unwrap())
|
||||
};
|
||||
let (typ_id, _) = get_info::<A>(cted().inst().card());
|
||||
let mut data = enc_vec(&typ_id);
|
||||
self.encode(Pin::<&mut Vec<u8>>::new(&mut data)).await;
|
||||
let g = ctx.get_or_default::<ObjStore>().objects.read().await;
|
||||
g.insert(atom_id, Box::new(self));
|
||||
std::mem::drop(g);
|
||||
api::Atom { drop: Some(atom_id), data: api::AtomData(data), owner: ctx.sys_id() }
|
||||
obj_store.objects.read().await.insert(atom_id, Box::new(self));
|
||||
api::Atom { drop: Some(atom_id), data: api::AtomData(data), owner: sys_id() }
|
||||
})
|
||||
}
|
||||
fn _info() -> Self::_Info { OwnedAtomDynfo { msbuild: A::reg_reqs(), ms: OnceCell::new() } }
|
||||
@@ -53,16 +57,16 @@ impl<A: OwnedAtom + Atomic<Variant = OwnedVariant>> AtomicFeaturesImpl<OwnedVari
|
||||
/// While an atom read guard is held, no atom can be removed.
|
||||
pub(crate) struct AtomReadGuard<'a> {
|
||||
id: api::AtomId,
|
||||
guard: RwLockReadGuard<'a, MemoMap<api::AtomId, Box<dyn DynOwnedAtom>>>,
|
||||
_lock: PhantomData<&'a ()>,
|
||||
guard: RwLockReadGuard<MemoMap<api::AtomId, Box<dyn DynOwnedAtom>>>,
|
||||
}
|
||||
impl<'a> AtomReadGuard<'a> {
|
||||
async fn new(id: api::AtomId, ctx: &'a SysCtx) -> Self {
|
||||
let guard = ctx.get_or_default::<ObjStore>().objects.read().await;
|
||||
async fn new(id: api::AtomId) -> Self {
|
||||
let guard = get_obj_store().objects.read().await;
|
||||
if guard.get(&id).is_none() {
|
||||
let valid = guard.iter().map(|i| i.0).collect_vec();
|
||||
panic!("Received invalid atom ID: {id:?} not in {valid:?}");
|
||||
panic!("Received invalid atom ID: {id:?}");
|
||||
}
|
||||
Self { id, guard }
|
||||
Self { id, guard, _lock: PhantomData }
|
||||
}
|
||||
}
|
||||
impl Deref for AtomReadGuard<'_> {
|
||||
@@ -71,8 +75,8 @@ impl Deref for AtomReadGuard<'_> {
|
||||
}
|
||||
|
||||
/// Remove an atom from the store
|
||||
pub(crate) async fn take_atom(id: api::AtomId, ctx: &SysCtx) -> Box<dyn DynOwnedAtom> {
|
||||
let mut g = ctx.get_or_default::<ObjStore>().objects.write().await;
|
||||
pub(crate) async fn take_atom(id: api::AtomId) -> Box<dyn DynOwnedAtom> {
|
||||
let mut g = get_obj_store().objects.write().await;
|
||||
g.remove(&id).unwrap_or_else(|| panic!("Received invalid atom ID: {}", id.0))
|
||||
}
|
||||
|
||||
@@ -85,67 +89,56 @@ impl<T: OwnedAtom> AtomDynfo for OwnedAtomDynfo<T> {
|
||||
fn name(&self) -> &'static str { type_name::<T>() }
|
||||
fn decode<'a>(&'a self, AtomCtx(data, ..): AtomCtx<'a>) -> LocalBoxFuture<'a, Box<dyn Any>> {
|
||||
Box::pin(async {
|
||||
Box::new(<T as AtomCard>::Data::decode(Pin::new(&mut &data[..])).await) as Box<dyn Any>
|
||||
Box::new(<T as AtomCard>::Data::decode_slice(&mut &data[..])) as Box<dyn Any>
|
||||
})
|
||||
}
|
||||
fn call(&self, AtomCtx(_, id, ctx): AtomCtx, arg: Expr) -> LocalBoxFuture<'_, GExpr> {
|
||||
Box::pin(async move { take_atom(id.unwrap(), &ctx).await.dyn_call(arg).await })
|
||||
fn call(&self, AtomCtx(_, id): AtomCtx, arg: Expr) -> LocalBoxFuture<'_, GExpr> {
|
||||
Box::pin(async move { take_atom(id.unwrap()).await.dyn_call(arg).await })
|
||||
}
|
||||
fn call_ref<'a>(
|
||||
&'a self,
|
||||
AtomCtx(_, id, ctx): AtomCtx<'a>,
|
||||
arg: Expr,
|
||||
) -> LocalBoxFuture<'a, GExpr> {
|
||||
Box::pin(async move { AtomReadGuard::new(id.unwrap(), &ctx).await.dyn_call_ref(arg).await })
|
||||
fn call_ref<'a>(&'a self, AtomCtx(_, id): AtomCtx<'a>, arg: Expr) -> LocalBoxFuture<'a, GExpr> {
|
||||
Box::pin(async move { AtomReadGuard::new(id.unwrap()).await.dyn_call_ref(arg).await })
|
||||
}
|
||||
fn print(&self, AtomCtx(_, id, ctx): AtomCtx<'_>) -> LocalBoxFuture<'_, FmtUnit> {
|
||||
Box::pin(
|
||||
async move { AtomReadGuard::new(id.unwrap(), &ctx).await.dyn_print(ctx.clone()).await },
|
||||
)
|
||||
fn print(&self, AtomCtx(_, id): AtomCtx<'_>) -> LocalBoxFuture<'_, FmtUnit> {
|
||||
Box::pin(async move { AtomReadGuard::new(id.unwrap()).await.dyn_print().await })
|
||||
}
|
||||
fn handle_req<'a, 'b: 'a, 'c: 'a>(
|
||||
&'a self,
|
||||
AtomCtx(_, id, ctx): AtomCtx,
|
||||
AtomCtx(_, id): AtomCtx,
|
||||
key: Sym,
|
||||
req: Pin<&'b mut dyn AsyncRead>,
|
||||
rep: Pin<&'c mut dyn AsyncWrite>,
|
||||
) -> LocalBoxFuture<'a, bool> {
|
||||
Box::pin(async move {
|
||||
let a = AtomReadGuard::new(id.unwrap(), &ctx).await;
|
||||
let ms = self.ms.get_or_init(self.msbuild.pack(ctx.clone())).await;
|
||||
ms.dispatch(a.as_any_ref().downcast_ref().unwrap(), ctx.clone(), key, req, rep).await
|
||||
let a = AtomReadGuard::new(id.unwrap()).await;
|
||||
let ms = self.ms.get_or_init(self.msbuild.pack()).await;
|
||||
ms.dispatch(a.as_any_ref().downcast_ref().unwrap(), key, req, rep).await
|
||||
})
|
||||
}
|
||||
fn command<'a>(
|
||||
&'a self,
|
||||
AtomCtx(_, id, ctx): AtomCtx<'a>,
|
||||
AtomCtx(_, id): AtomCtx<'a>,
|
||||
) -> LocalBoxFuture<'a, OrcRes<Option<GExpr>>> {
|
||||
Box::pin(async move { take_atom(id.unwrap(), &ctx).await.dyn_command(ctx.clone()).await })
|
||||
Box::pin(async move { take_atom(id.unwrap()).await.dyn_command().await })
|
||||
}
|
||||
fn drop(&self, AtomCtx(_, id, ctx): AtomCtx) -> LocalBoxFuture<'_, ()> {
|
||||
Box::pin(async move { take_atom(id.unwrap(), &ctx).await.dyn_free(ctx.clone()).await })
|
||||
fn drop(&self, AtomCtx(_, id): AtomCtx) -> LocalBoxFuture<'_, ()> {
|
||||
Box::pin(async move { take_atom(id.unwrap()).await.dyn_free().await })
|
||||
}
|
||||
fn serialize<'a, 'b: 'a>(
|
||||
&'a self,
|
||||
AtomCtx(_, id, ctx): AtomCtx<'a>,
|
||||
AtomCtx(_, id): AtomCtx<'a>,
|
||||
mut write: Pin<&'b mut dyn AsyncWrite>,
|
||||
) -> LocalBoxFuture<'a, Option<Vec<Expr>>> {
|
||||
Box::pin(async move {
|
||||
let id = id.unwrap();
|
||||
id.encode(write.as_mut()).await;
|
||||
AtomReadGuard::new(id, &ctx).await.dyn_serialize(ctx.clone(), write).await
|
||||
id.encode(write.as_mut()).await.unwrap();
|
||||
AtomReadGuard::new(id).await.dyn_serialize(write).await
|
||||
})
|
||||
}
|
||||
fn deserialize<'a>(
|
||||
&'a self,
|
||||
ctx: SysCtx,
|
||||
data: &'a [u8],
|
||||
refs: &'a [Expr],
|
||||
) -> LocalBoxFuture<'a, api::Atom> {
|
||||
fn deserialize<'a>(&'a self, data: &'a [u8], refs: &'a [Expr]) -> LocalBoxFuture<'a, api::Atom> {
|
||||
Box::pin(async move {
|
||||
let refs = T::Refs::from_iter(refs.iter().cloned());
|
||||
let obj = T::deserialize(DeserCtxImpl(data, &ctx), refs).await;
|
||||
obj._factory().build(ctx).await
|
||||
let obj = T::deserialize(DeserCtxImpl(data), refs).await;
|
||||
obj._factory().build().await
|
||||
})
|
||||
}
|
||||
}
|
||||
@@ -161,14 +154,12 @@ pub trait DeserializeCtx: Sized {
|
||||
t
|
||||
}
|
||||
}
|
||||
fn sys(&self) -> SysCtx;
|
||||
}
|
||||
|
||||
struct DeserCtxImpl<'a>(&'a [u8], &'a SysCtx);
|
||||
struct DeserCtxImpl<'a>(&'a [u8]);
|
||||
impl DeserializeCtx for DeserCtxImpl<'_> {
|
||||
async fn read<T: Decode>(&mut self) -> T { T::decode(Pin::new(&mut self.0)).await }
|
||||
async fn read<T: Decode>(&mut self) -> T { T::decode(Pin::new(&mut self.0)).await.unwrap() }
|
||||
fn is_empty(&self) -> bool { self.0.is_empty() }
|
||||
fn sys(&self) -> SysCtx { self.1.clone() }
|
||||
}
|
||||
|
||||
pub trait RefSet {
|
||||
@@ -219,22 +210,21 @@ pub trait OwnedAtom: Atomic<Variant = OwnedVariant> + Any + Clone + 'static {
|
||||
fn val(&self) -> impl Future<Output = Cow<'_, Self::Data>>;
|
||||
#[allow(unused_variables)]
|
||||
fn call_ref(&self, arg: Expr) -> impl Future<Output = GExpr> {
|
||||
async move { bot(err_not_callable(arg.ctx().i()).await) }
|
||||
async move { bot(err_not_callable().await) }
|
||||
}
|
||||
fn call(self, arg: Expr) -> impl Future<Output = GExpr> {
|
||||
async {
|
||||
let ctx = arg.ctx();
|
||||
let gcl = self.call_ref(arg).await;
|
||||
self.free(ctx).await;
|
||||
self.free().await;
|
||||
gcl
|
||||
}
|
||||
}
|
||||
#[allow(unused_variables)]
|
||||
fn command(self, ctx: SysCtx) -> impl Future<Output = OrcRes<Option<GExpr>>> {
|
||||
async move { Err(err_not_command(ctx.i()).await) }
|
||||
fn command(self) -> impl Future<Output = OrcRes<Option<GExpr>>> {
|
||||
async move { Err(err_not_command().await) }
|
||||
}
|
||||
#[allow(unused_variables)]
|
||||
fn free(self, ctx: SysCtx) -> impl Future<Output = ()> { async {} }
|
||||
fn free(self) -> impl Future<Output = ()> { async {} }
|
||||
#[allow(unused_variables)]
|
||||
fn print_atom<'a>(&'a self, c: &'a (impl FmtCtx + ?Sized + 'a)) -> impl Future<Output = FmtUnit> {
|
||||
async { format!("OwnedAtom({})", type_name::<Self>()).into() }
|
||||
@@ -242,14 +232,13 @@ pub trait OwnedAtom: Atomic<Variant = OwnedVariant> + Any + Clone + 'static {
|
||||
#[allow(unused_variables)]
|
||||
fn serialize(
|
||||
&self,
|
||||
ctx: SysCtx,
|
||||
write: Pin<&mut (impl AsyncWrite + ?Sized)>,
|
||||
) -> impl Future<Output = Self::Refs> {
|
||||
assert_serializable::<Self>();
|
||||
async { panic!("Either implement serialize or set Refs to Never for {}", type_name::<Self>()) }
|
||||
}
|
||||
#[allow(unused_variables)]
|
||||
fn deserialize(ctx: impl DeserializeCtx, refs: Self::Refs) -> impl Future<Output = Self> {
|
||||
fn deserialize(dctx: impl DeserializeCtx, refs: Self::Refs) -> impl Future<Output = Self> {
|
||||
assert_serializable::<Self>();
|
||||
async {
|
||||
panic!("Either implement deserialize or set Refs to Never for {}", type_name::<Self>())
|
||||
@@ -268,12 +257,11 @@ pub trait DynOwnedAtom: DynClone + 'static {
|
||||
fn encode<'a>(&'a self, buffer: Pin<&'a mut dyn AsyncWrite>) -> LocalBoxFuture<'a, ()>;
|
||||
fn dyn_call_ref(&self, arg: Expr) -> LocalBoxFuture<'_, GExpr>;
|
||||
fn dyn_call(self: Box<Self>, arg: Expr) -> LocalBoxFuture<'static, GExpr>;
|
||||
fn dyn_command(self: Box<Self>, ctx: SysCtx) -> LocalBoxFuture<'static, OrcRes<Option<GExpr>>>;
|
||||
fn dyn_free(self: Box<Self>, ctx: SysCtx) -> LocalBoxFuture<'static, ()>;
|
||||
fn dyn_print(&self, ctx: SysCtx) -> LocalBoxFuture<'_, FmtUnit>;
|
||||
fn dyn_command(self: Box<Self>) -> LocalBoxFuture<'static, OrcRes<Option<GExpr>>>;
|
||||
fn dyn_free(self: Box<Self>) -> LocalBoxFuture<'static, ()>;
|
||||
fn dyn_print(&self) -> LocalBoxFuture<'_, FmtUnit>;
|
||||
fn dyn_serialize<'a>(
|
||||
&'a self,
|
||||
ctx: SysCtx,
|
||||
sink: Pin<&'a mut dyn AsyncWrite>,
|
||||
) -> LocalBoxFuture<'a, Option<Vec<Expr>>>;
|
||||
}
|
||||
@@ -281,7 +269,7 @@ impl<T: OwnedAtom> DynOwnedAtom for T {
|
||||
fn atom_tid(&self) -> TypeId { TypeId::of::<T>() }
|
||||
fn as_any_ref(&self) -> &dyn Any { self }
|
||||
fn encode<'a>(&'a self, buffer: Pin<&'a mut dyn AsyncWrite>) -> LocalBoxFuture<'a, ()> {
|
||||
async { self.val().await.as_ref().encode(buffer).await }.boxed_local()
|
||||
async { self.val().await.as_ref().encode(buffer).await.unwrap() }.boxed_local()
|
||||
}
|
||||
fn dyn_call_ref(&self, arg: Expr) -> LocalBoxFuture<'_, GExpr> {
|
||||
self.call_ref(arg).boxed_local()
|
||||
@@ -289,45 +277,52 @@ impl<T: OwnedAtom> DynOwnedAtom for T {
|
||||
fn dyn_call(self: Box<Self>, arg: Expr) -> LocalBoxFuture<'static, GExpr> {
|
||||
self.call(arg).boxed_local()
|
||||
}
|
||||
fn dyn_command(self: Box<Self>, ctx: SysCtx) -> LocalBoxFuture<'static, OrcRes<Option<GExpr>>> {
|
||||
self.command(ctx).boxed_local()
|
||||
fn dyn_command(self: Box<Self>) -> LocalBoxFuture<'static, OrcRes<Option<GExpr>>> {
|
||||
self.command().boxed_local()
|
||||
}
|
||||
fn dyn_free(self: Box<Self>, ctx: SysCtx) -> LocalBoxFuture<'static, ()> {
|
||||
self.free(ctx).boxed_local()
|
||||
}
|
||||
fn dyn_print(&self, ctx: SysCtx) -> LocalBoxFuture<'_, FmtUnit> {
|
||||
async move { self.print_atom(&FmtCtxImpl { i: ctx.i() }).await }.boxed_local()
|
||||
fn dyn_free(self: Box<Self>) -> LocalBoxFuture<'static, ()> { self.free().boxed_local() }
|
||||
fn dyn_print(&self) -> LocalBoxFuture<'_, FmtUnit> {
|
||||
async move { self.print_atom(&FmtCtxImpl::default()).await }.boxed_local()
|
||||
}
|
||||
fn dyn_serialize<'a>(
|
||||
&'a self,
|
||||
ctx: SysCtx,
|
||||
sink: Pin<&'a mut dyn AsyncWrite>,
|
||||
) -> LocalBoxFuture<'a, Option<Vec<Expr>>> {
|
||||
match TypeId::of::<Never>() == TypeId::of::<<Self as OwnedAtom>::Refs>() {
|
||||
true => ready(None).boxed_local(),
|
||||
false => async { Some(self.serialize(ctx, sink).await.to_vec()) }.boxed_local(),
|
||||
false => async { Some(self.serialize(sink).await.to_vec()) }.boxed_local(),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Default)]
|
||||
pub(crate) struct ObjStore {
|
||||
pub(crate) next_id: AtomicU64,
|
||||
pub(crate) next_id: RefCell<u64>,
|
||||
pub(crate) objects: RwLock<MemoMap<api::AtomId, Box<dyn DynOwnedAtom>>>,
|
||||
}
|
||||
impl SysCtxEntry for ObjStore {}
|
||||
|
||||
pub async fn own<A: OwnedAtom>(typ: TypAtom<A>) -> A {
|
||||
let ctx = typ.untyped.ctx();
|
||||
let g = ctx.get_or_default::<ObjStore>().objects.read().await;
|
||||
task_local! {
|
||||
static OBJ_STORE: Rc<ObjStore>;
|
||||
}
|
||||
|
||||
pub(crate) fn with_obj_store<'a>(fut: LocalBoxFuture<'a, ()>) -> LocalBoxFuture<'a, ()> {
|
||||
Box::pin(OBJ_STORE.scope(Rc::new(ObjStore::default()), fut))
|
||||
}
|
||||
|
||||
pub(crate) fn get_obj_store() -> Rc<ObjStore> {
|
||||
OBJ_STORE.try_with(|store| store.clone()).expect("Owned atom store not initialized")
|
||||
}
|
||||
|
||||
pub async fn own<A: OwnedAtom>(typ: &TAtom<A>) -> A {
|
||||
let g = get_obj_store().objects.read().await;
|
||||
let atom_id = typ.untyped.atom.drop.expect("Owned atoms always have a drop ID");
|
||||
let dyn_atom =
|
||||
g.get(&atom_id).expect("Atom ID invalid; atom type probably not owned by this crate");
|
||||
dyn_atom.as_any_ref().downcast_ref().cloned().expect("The ID should imply a type as well")
|
||||
}
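The change above replaces the `SysCtx`-carried `ObjStore` with a task-local store reached through `get_obj_store()`. The same "ambient scoped store" idea can be sketched with `std::thread_local!` and a `RefCell` instead of the `task_local` crate; this version is not panic-safe and only illustrates the shape of the pattern.

use std::cell::RefCell;
use std::collections::HashMap;

// Thread-local stand-in for the task-local OBJ_STORE above: a scoped,
// implicitly reachable store instead of a ctx parameter threaded everywhere.
thread_local! {
  static STORE: RefCell<Option<HashMap<u64, String>>> = RefCell::new(None);
}

/// Mirrors `with_obj_store`: runs `f` with a fresh store in scope.
/// (Not panic-safe; the real code scopes the store with a task-local instead.)
fn with_store<R>(f: impl FnOnce() -> R) -> R {
  STORE.with(|s| *s.borrow_mut() = Some(HashMap::new()));
  let out = f();
  STORE.with(|s| *s.borrow_mut() = None);
  out
}

/// Mirrors `get_obj_store()`: panics outside a `with_store` scope.
fn store_insert(id: u64, value: String) {
  STORE.with(|s| {
    s.borrow_mut().as_mut().expect("store not initialized").insert(id, value);
  });
}

fn store_get(id: u64) -> Option<String> {
  STORE.with(|s| s.borrow().as_ref().expect("store not initialized").get(&id).cloned())
}

fn main() {
  with_store(|| {
    store_insert(1, "owned atom".to_string());
    assert_eq!(store_get(1).as_deref(), Some("owned atom"));
  });
}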
|
||||
|
||||
pub async fn debug_print_obj_store(ctx: &SysCtx, show_atoms: bool) {
|
||||
let store = ctx.get_or_default::<ObjStore>();
|
||||
pub async fn debug_print_obj_store(show_atoms: bool) {
|
||||
let store = get_obj_store();
|
||||
let keys = store.objects.read().await.keys().cloned().collect_vec();
|
||||
let mut message = "Atoms in store:".to_string();
|
||||
if !show_atoms {
|
||||
@@ -341,8 +336,8 @@ pub async fn debug_print_obj_store(ctx: &SysCtx, show_atoms: bool) {
|
||||
};
|
||||
let atom = clone_box(&**atom);
|
||||
std::mem::drop(g);
|
||||
message += &format!("\n{k:?} -> {}", take_first(&atom.dyn_print(ctx.clone()).await, true));
|
||||
message += &format!("\n{k:?} -> {}", take_first(&atom.dyn_print().await, true));
|
||||
}
|
||||
}
|
||||
eprintln!("{message}")
|
||||
writeln!(log("debug"), "{message}").await
|
||||
}
|
||||
|
||||
@@ -8,6 +8,7 @@ use futures::{AsyncRead, AsyncWrite, FutureExt};
|
||||
use orchid_api_traits::{Coding, enc_vec};
|
||||
use orchid_base::error::OrcRes;
|
||||
use orchid_base::format::FmtUnit;
|
||||
use orchid_base::logging::log;
|
||||
use orchid_base::name::Sym;
|
||||
|
||||
use crate::api;
|
||||
@@ -17,18 +18,17 @@ use crate::atom::{
|
||||
};
|
||||
use crate::expr::Expr;
|
||||
use crate::gen_expr::{GExpr, bot};
|
||||
use crate::system::SysCtx;
|
||||
use crate::system_ctor::CtedObj;
|
||||
use crate::system::{cted, sys_id};
|
||||
|
||||
pub struct ThinVariant;
|
||||
impl AtomicVariant for ThinVariant {}
|
||||
impl<A: ThinAtom + Atomic<Variant = ThinVariant>> AtomicFeaturesImpl<ThinVariant> for A {
|
||||
fn _factory(self) -> AtomFactory {
|
||||
AtomFactory::new(async move |ctx| {
|
||||
let (id, _) = get_info::<A>(ctx.get::<CtedObj>().inst().card());
|
||||
let mut buf = enc_vec(&id).await;
|
||||
self.encode(Pin::new(&mut buf)).await;
|
||||
api::Atom { drop: None, data: api::AtomData(buf), owner: ctx.sys_id() }
|
||||
AtomFactory::new(async move || {
|
||||
let (id, _) = get_info::<A>(cted().inst().card());
|
||||
let mut buf = enc_vec(&id);
|
||||
self.encode_vec(&mut buf);
|
||||
api::Atom { drop: None, data: api::AtomData(buf), owner: sys_id() }
|
||||
})
|
||||
}
|
||||
fn _info() -> Self::_Info { ThinAtomDynfo { msbuild: Self::reg_reqs(), ms: OnceCell::new() } }
|
||||
@@ -40,37 +40,37 @@ pub struct ThinAtomDynfo<T: ThinAtom> {
|
||||
ms: OnceCell<MethodSet<T>>,
|
||||
}
|
||||
impl<T: ThinAtom> AtomDynfo for ThinAtomDynfo<T> {
|
||||
fn print<'a>(&self, AtomCtx(buf, _, ctx): AtomCtx<'a>) -> LocalBoxFuture<'a, FmtUnit> {
|
||||
Box::pin(async move { T::decode(Pin::new(&mut &buf[..])).await.print(ctx).await })
|
||||
fn print<'a>(&self, AtomCtx(buf, _): AtomCtx<'a>) -> LocalBoxFuture<'a, FmtUnit> {
|
||||
Box::pin(async move { T::decode_slice(&mut &buf[..]).print().await })
|
||||
}
|
||||
fn tid(&self) -> TypeId { TypeId::of::<T>() }
|
||||
fn name(&self) -> &'static str { type_name::<T>() }
|
||||
fn decode<'a>(&'a self, AtomCtx(buf, ..): AtomCtx<'a>) -> LocalBoxFuture<'a, Box<dyn Any>> {
|
||||
Box::pin(async { Box::new(T::decode(Pin::new(&mut &buf[..])).await) as Box<dyn Any> })
|
||||
Box::pin(async { Box::new(T::decode_slice(&mut &buf[..])) as Box<dyn Any> })
|
||||
}
|
||||
fn call<'a>(&'a self, AtomCtx(buf, ..): AtomCtx<'a>, arg: Expr) -> LocalBoxFuture<'a, GExpr> {
|
||||
Box::pin(async move { T::decode(Pin::new(&mut &buf[..])).await.call(arg).await })
|
||||
Box::pin(async move { T::decode_slice(&mut &buf[..]).call(arg).await })
|
||||
}
|
||||
fn call_ref<'a>(&'a self, AtomCtx(buf, ..): AtomCtx<'a>, arg: Expr) -> LocalBoxFuture<'a, GExpr> {
|
||||
Box::pin(async move { T::decode(Pin::new(&mut &buf[..])).await.call(arg).await })
|
||||
Box::pin(async move { T::decode_slice(&mut &buf[..]).call(arg).await })
|
||||
}
|
||||
fn handle_req<'a, 'm1: 'a, 'm2: 'a>(
|
||||
&'a self,
|
||||
AtomCtx(buf, _, sys): AtomCtx<'a>,
|
||||
AtomCtx(buf, _): AtomCtx<'a>,
|
||||
key: Sym,
|
||||
req: Pin<&'m1 mut dyn AsyncRead>,
|
||||
rep: Pin<&'m2 mut dyn AsyncWrite>,
|
||||
) -> LocalBoxFuture<'a, bool> {
|
||||
Box::pin(async move {
|
||||
let ms = self.ms.get_or_init(self.msbuild.pack(sys.clone())).await;
|
||||
ms.dispatch(&T::decode(Pin::new(&mut &buf[..])).await, sys, key, req, rep).await
|
||||
let ms = self.ms.get_or_init(self.msbuild.pack()).await;
|
||||
ms.dispatch(&T::decode_slice(&mut &buf[..]), key, req, rep).await
|
||||
})
|
||||
}
|
||||
fn command<'a>(
|
||||
&'a self,
|
||||
AtomCtx(buf, _, ctx): AtomCtx<'a>,
|
||||
AtomCtx(buf, _): AtomCtx<'a>,
|
||||
) -> LocalBoxFuture<'a, OrcRes<Option<GExpr>>> {
|
||||
async move { T::decode(Pin::new(&mut &buf[..])).await.command(ctx).await }.boxed_local()
|
||||
async move { T::decode_slice(&mut &buf[..]).command().await }.boxed_local()
|
||||
}
|
||||
fn serialize<'a, 'b: 'a>(
|
||||
&'a self,
|
||||
@@ -78,23 +78,18 @@ impl<T: ThinAtom> AtomDynfo for ThinAtomDynfo<T> {
|
||||
write: Pin<&'b mut dyn AsyncWrite>,
|
||||
) -> LocalBoxFuture<'a, Option<Vec<Expr>>> {
|
||||
Box::pin(async {
|
||||
T::decode(Pin::new(&mut &ctx.0[..])).await.encode(write).await;
|
||||
T::decode_slice(&mut &ctx.0[..]).encode(write).await.unwrap();
|
||||
Some(Vec::new())
|
||||
})
|
||||
}
|
||||
fn deserialize<'a>(
|
||||
&'a self,
|
||||
ctx: SysCtx,
|
||||
data: &'a [u8],
|
||||
refs: &'a [Expr],
|
||||
) -> LocalBoxFuture<'a, api::Atom> {
|
||||
fn deserialize<'a>(&'a self, data: &'a [u8], refs: &'a [Expr]) -> LocalBoxFuture<'a, api::Atom> {
|
||||
assert!(refs.is_empty(), "Refs found when deserializing thin atom");
|
||||
Box::pin(async { T::decode(Pin::new(&mut &data[..])).await._factory().build(ctx).await })
|
||||
Box::pin(async { T::decode_slice(&mut &data[..])._factory().build().await })
|
||||
}
|
||||
fn drop<'a>(&'a self, AtomCtx(buf, _, ctx): AtomCtx<'a>) -> LocalBoxFuture<'a, ()> {
|
||||
fn drop<'a>(&'a self, AtomCtx(buf, _): AtomCtx<'a>) -> LocalBoxFuture<'a, ()> {
|
||||
Box::pin(async move {
|
||||
let string_self = T::decode(Pin::new(&mut &buf[..])).await.print(ctx.clone()).await;
|
||||
writeln!(ctx.logger(), "Received drop signal for non-drop atom {string_self:?}");
|
||||
let string_self = T::decode_slice(&mut &buf[..]).print().await;
|
||||
writeln!(log("warn"), "Received drop signal for non-drop atom {string_self:?}").await;
|
||||
})
|
||||
}
|
||||
}
|
||||
@@ -104,14 +99,14 @@ pub trait ThinAtom:
|
||||
{
|
||||
#[allow(unused_variables)]
|
||||
fn call(&self, arg: Expr) -> impl Future<Output = GExpr> {
|
||||
async move { bot(err_not_callable(arg.ctx().i()).await) }
|
||||
async move { bot(err_not_callable().await) }
|
||||
}
|
||||
#[allow(unused_variables)]
|
||||
fn command(&self, ctx: SysCtx) -> impl Future<Output = OrcRes<Option<GExpr>>> {
|
||||
async move { Err(err_not_command(ctx.i()).await) }
|
||||
fn command(&self) -> impl Future<Output = OrcRes<Option<GExpr>>> {
|
||||
async move { Err(err_not_command().await) }
|
||||
}
|
||||
#[allow(unused_variables)]
|
||||
fn print(&self, ctx: SysCtx) -> impl Future<Output = FmtUnit> {
|
||||
fn print(&self) -> impl Future<Output = FmtUnit> {
|
||||
async { format!("ThinAtom({})", type_name::<Self>()).into() }
|
||||
}
|
||||
}
|
||||
|
||||
30
orchid-extension/src/binary.rs
Normal file
30
orchid-extension/src/binary.rs
Normal file
@@ -0,0 +1,30 @@
|
||||
use std::rc::Rc;
|
||||
|
||||
use futures::future::LocalBoxFuture;
|
||||
use orchid_base::binary::future_to_vt;
|
||||
|
||||
use crate::api;
|
||||
use crate::entrypoint::ExtensionBuilder;
|
||||
use crate::ext_port::ExtPort;
|
||||
|
||||
pub type ExtCx = api::binary::ExtensionContext;
|
||||
|
||||
struct Spawner(api::binary::Spawner);
|
||||
impl Drop for Spawner {
|
||||
fn drop(&mut self) { (self.0.drop)(self.0.data) }
|
||||
}
|
||||
impl Spawner {
|
||||
pub fn spawn(&self, fut: LocalBoxFuture<'static, ()>) {
|
||||
(self.0.spawn)(self.0.data, future_to_vt(fut))
|
||||
}
|
||||
}
|
||||
|
||||
pub fn orchid_extension_main_body(cx: ExtCx, builder: ExtensionBuilder) {
|
||||
let spawner = Spawner(cx.spawner);
|
||||
builder.build(ExtPort {
|
||||
input: Box::pin(cx.input),
|
||||
output: Box::pin(cx.output),
|
||||
log: Box::pin(cx.log),
|
||||
spawn: Rc::new(move |fut| spawner.spawn(fut)),
|
||||
});
|
||||
}
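`Spawner` above treats `api::binary::Spawner` as a C-style handle: an opaque data pointer plus `spawn` and `drop` function pointers, released exactly once via `Drop`. A self-contained sketch of that shape follows; the backend here just runs tasks immediately and counts them rather than driving real futures, so everything beyond the wrapper's structure is assumed for illustration.

// C-style vtable handle: opaque data plus function pointers, as assumed by
// the Spawner wrapper above. The "task" here is just a boxed closure.
struct RawSpawner {
  data: *mut (),
  spawn: fn(*mut (), Box<dyn FnOnce()>),
  drop: fn(*mut ()),
}

/// RAII wrapper: forwards `spawn`, calls the foreign `drop` exactly once.
struct Spawner(RawSpawner);
impl Spawner {
  fn spawn(&self, task: Box<dyn FnOnce()>) { (self.0.spawn)(self.0.data, task) }
}
impl Drop for Spawner {
  fn drop(&mut self) { (self.0.drop)(self.0.data) }
}

// A trivial backend that runs tasks immediately and counts them.
fn run_now(data: *mut (), task: Box<dyn FnOnce()>) {
  unsafe { *(data as *mut u32) += 1 };
  task();
}
fn free_counter(data: *mut ()) {
  drop(unsafe { Box::from_raw(data as *mut u32) });
}

fn main() {
  let counter = Box::into_raw(Box::new(0u32)) as *mut ();
  let spawner = Spawner(RawSpawner { data: counter, spawn: run_now, drop: free_counter });
  spawner.spawn(Box::new(|| println!("task ran")));
  drop(spawner); // frees the counter through the vtable
}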
|
||||
@@ -1,14 +1,16 @@
|
||||
use std::future::Future;
|
||||
use std::pin::Pin;
|
||||
|
||||
use dyn_clone::DynClone;
|
||||
use never::Never;
|
||||
use orchid_base::error::{OrcErrv, OrcRes, mk_errv};
|
||||
use orchid_base::interner::Interner;
|
||||
use orchid_base::interner::is;
|
||||
use orchid_base::location::Pos;
|
||||
use trait_set::trait_set;
|
||||
|
||||
use crate::atom::{AtomicFeatures, ForeignAtom, ToAtom, TypAtom};
|
||||
use crate::atom::{AtomicFeatures, ForeignAtom, TAtom, ToAtom};
|
||||
use crate::expr::Expr;
|
||||
use crate::gen_expr::{GExpr, atom, bot};
|
||||
use crate::system::{SysCtx, downcast_atom};
|
||||
|
||||
pub trait TryFromExpr: Sized {
|
||||
fn try_from_expr(expr: Expr) -> impl Future<Output = OrcRes<Self>>;
|
||||
@@ -24,61 +26,91 @@ impl<T: TryFromExpr, U: TryFromExpr> TryFromExpr for (T, U) {
|
||||
}
|
||||
}
|
||||
|
||||
async fn err_not_atom(pos: Pos, i: &Interner) -> OrcErrv {
|
||||
mk_errv(i.i("Expected an atom").await, "This expression is not an atom", [pos])
|
||||
}
|
||||
|
||||
async fn err_type(pos: Pos, i: &Interner) -> OrcErrv {
|
||||
mk_errv(i.i("Type error").await, "The atom is a different type than expected", [pos])
|
||||
async fn err_not_atom(pos: Pos) -> OrcErrv {
|
||||
mk_errv(is("Expected an atom").await, "This expression is not an atom", [pos])
|
||||
}
|
||||
|
||||
impl TryFromExpr for ForeignAtom {
|
||||
async fn try_from_expr(expr: Expr) -> OrcRes<Self> {
|
||||
match expr.atom().await {
|
||||
Err(ex) => Err(err_not_atom(ex.data().await.pos.clone(), ex.ctx().i()).await),
|
||||
Err(ex) => Err(err_not_atom(ex.data().await.pos.clone()).await),
|
||||
Ok(f) => Ok(f),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl<A: AtomicFeatures> TryFromExpr for TypAtom<A> {
|
||||
impl<A: AtomicFeatures> TryFromExpr for TAtom<A> {
|
||||
async fn try_from_expr(expr: Expr) -> OrcRes<Self> {
|
||||
let f = ForeignAtom::try_from_expr(expr).await?;
|
||||
match downcast_atom::<A>(f).await {
|
||||
match f.clone().downcast::<A>().await {
|
||||
Ok(a) => Ok(a),
|
||||
Err(f) => Err(err_type(f.pos(), f.ctx().i()).await),
|
||||
Err(e) => Err(e.mk_err().await),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl TryFromExpr for SysCtx {
|
||||
async fn try_from_expr(expr: Expr) -> OrcRes<Self> { Ok(expr.ctx()) }
|
||||
pub trait ToExpr {
|
||||
fn to_gen(self) -> impl Future<Output = GExpr>;
|
||||
fn to_expr(self) -> impl Future<Output = Expr>
|
||||
where Self: Sized {
|
||||
async { self.to_gen().await.create().await }
|
||||
}
|
||||
}
|
||||
|
||||
pub trait ToExpr {
|
||||
fn to_expr(self) -> impl Future<Output = GExpr>;
|
||||
pub trait ToExprDyn {
|
||||
fn to_gen_dyn<'a>(self: Box<Self>) -> Pin<Box<dyn Future<Output = GExpr> + 'a>>
|
||||
where Self: 'a;
|
||||
|
||||
fn to_expr_dyn<'a>(self: Box<Self>) -> Pin<Box<dyn Future<Output = Expr> + 'a>>
|
||||
where Self: 'a;
|
||||
}
|
||||
impl<T: ToExpr> ToExprDyn for T {
|
||||
fn to_gen_dyn<'a>(self: Box<Self>) -> Pin<Box<dyn Future<Output = GExpr> + 'a>>
|
||||
where Self: 'a {
|
||||
Box::pin(self.to_gen())
|
||||
}
|
||||
fn to_expr_dyn<'a>(self: Box<Self>) -> Pin<Box<dyn Future<Output = Expr> + 'a>>
|
||||
where Self: 'a {
|
||||
Box::pin(self.to_expr())
|
||||
}
|
||||
}
|
||||
trait_set! {
|
||||
pub trait ClonableToExprDyn = ToExprDyn + DynClone;
|
||||
}
|
||||
impl ToExpr for Box<dyn ToExprDyn> {
|
||||
async fn to_gen(self) -> GExpr { self.to_gen_dyn().await }
|
||||
async fn to_expr(self) -> Expr { self.to_expr_dyn().await }
|
||||
}
|
||||
impl ToExpr for Box<dyn ClonableToExprDyn> {
|
||||
async fn to_gen(self) -> GExpr { self.to_gen_dyn().await }
|
||||
async fn to_expr(self) -> Expr { self.to_expr_dyn().await }
|
||||
}
|
||||
impl Clone for Box<dyn ClonableToExprDyn> {
|
||||
fn clone(&self) -> Self { dyn_clone::clone_box(&**self) }
|
||||
}
|
||||
|
||||
impl ToExpr for GExpr {
|
||||
async fn to_expr(self) -> GExpr { self }
|
||||
async fn to_gen(self) -> GExpr { self }
|
||||
async fn to_expr(self) -> Expr { self.create().await }
|
||||
}
|
||||
impl ToExpr for Expr {
|
||||
async fn to_expr(self) -> GExpr { self.slot() }
|
||||
async fn to_gen(self) -> GExpr { self.slot() }
|
||||
async fn to_expr(self) -> Expr { self }
|
||||
}
|
||||
|
||||
impl<T: ToExpr> ToExpr for OrcRes<T> {
|
||||
async fn to_expr(self) -> GExpr {
|
||||
async fn to_gen(self) -> GExpr {
|
||||
match self {
|
||||
Err(e) => bot(e),
|
||||
Ok(t) => t.to_expr().await,
|
||||
Ok(t) => t.to_gen().await,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl<A: ToAtom> ToExpr for A {
|
||||
async fn to_expr(self) -> GExpr { atom(self) }
|
||||
async fn to_gen(self) -> GExpr { atom(self) }
|
||||
}
|
||||
|
||||
impl ToExpr for Never {
|
||||
async fn to_expr(self) -> GExpr { match self {} }
|
||||
async fn to_gen(self) -> GExpr { match self {} }
|
||||
}
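The `ToExpr`/`ToExprDyn` pair is the usual workaround for `async fn` in traits not being object-safe: the `Dyn` variant takes `Box<Self>` and returns a boxed future, a blanket impl forwards to it, and the box implements the ergonomic trait again. A stripped-down sketch of the same shape, with `String` standing in for `GExpr`/`Expr` and hypothetical trait names.

use std::future::Future;
use std::pin::Pin;

// Ergonomic trait: async fn in trait (not object-safe by itself).
trait Render {
  async fn render(self) -> String;
}

// Object-safe mirror: boxed receiver, boxed future.
trait RenderDyn {
  fn render_dyn<'a>(self: Box<Self>) -> Pin<Box<dyn Future<Output = String> + 'a>>
  where Self: 'a;
}

// Blanket impl: anything that implements Render can be used as a trait object.
impl<T: Render> RenderDyn for T {
  fn render_dyn<'a>(self: Box<Self>) -> Pin<Box<dyn Future<Output = String> + 'a>>
  where Self: 'a {
    Box::pin((*self).render())
  }
}

// The box recovers the ergonomic trait, like `impl ToExpr for Box<dyn ToExprDyn>`.
impl Render for Box<dyn RenderDyn> {
  async fn render(self) -> String { self.render_dyn().await }
}

struct Answer(u32);
impl Render for Answer {
  async fn render(self) -> String { format!("answer = {}", self.0) }
}

fn main() {
  // Needs the `futures` crate (e.g. futures = "0.3") for block_on.
  let boxed: Box<dyn RenderDyn> = Box::new(Answer(42));
  assert_eq!(futures::executor::block_on(boxed.render()), "answer = 42");
}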
|
||||
|
||||
@@ -8,7 +8,6 @@ use futures::stream::{self, LocalBoxStream};
|
||||
use futures::{FutureExt, SinkExt, StreamExt};
|
||||
use never::Never;
|
||||
use orchid_base::error::OrcRes;
|
||||
use orchid_base::format::{FmtCtx, FmtUnit};
|
||||
|
||||
use crate::atom::Atomic;
|
||||
use crate::atom_owned::{OwnedAtom, OwnedVariant};
|
||||
@@ -23,7 +22,6 @@ enum Command {
|
||||
}
|
||||
|
||||
struct BuilderCoroutineData {
|
||||
name: Option<String>,
|
||||
receiver: Mutex<LocalBoxStream<'static, Command>>,
|
||||
}
|
||||
|
||||
@@ -35,15 +33,15 @@ impl BuilderCoroutine {
|
||||
match cmd {
|
||||
None => panic!("Before the stream ends, we should have gotten a Halt"),
|
||||
Some(Command::Halt(expr)) => expr,
|
||||
Some(Command::Execute(expr, reply)) => call([
|
||||
lambda(0, [seq([
|
||||
arg(0),
|
||||
call([Replier { reply, builder: self }.to_expr().await, arg(0)]),
|
||||
])]),
|
||||
expr,
|
||||
]),
|
||||
Some(Command::Execute(expr, reply)) => call(
|
||||
lambda(0, [seq(
|
||||
[arg(0)],
|
||||
call(Replier { reply, builder: self }.to_gen().await, [arg(0)]),
|
||||
)]),
|
||||
[expr],
|
||||
),
|
||||
Some(Command::Register(expr, reply)) =>
|
||||
call([Replier { reply, builder: self }.to_expr().await, expr]),
|
||||
call(Replier { reply, builder: self }.to_gen().await, [expr]),
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -65,23 +63,13 @@ impl OwnedAtom for Replier {
|
||||
std::mem::drop(self.reply);
|
||||
self.builder.run().await
|
||||
}
|
||||
async fn print_atom<'a>(&'a self, _: &'a (impl FmtCtx + ?Sized + 'a)) -> FmtUnit {
|
||||
match &self.builder.0.name {
|
||||
None => "BuilderCoroutine".into(),
|
||||
Some(name) => format!("BuilderCoroutine({name})").into(),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
pub async fn exec<R: ToExpr>(
|
||||
debug: impl AsRef<str>,
|
||||
f: impl for<'a> AsyncFnOnce(ExecHandle<'a>) -> R + 'static,
|
||||
) -> GExpr {
|
||||
pub async fn exec<R: ToExpr>(f: impl for<'a> AsyncFnOnce(ExecHandle<'a>) -> R + 'static) -> GExpr {
|
||||
let (cmd_snd, cmd_recv) = channel(0);
|
||||
let halt = async { Command::Halt(f(ExecHandle(cmd_snd, PhantomData)).await.to_expr().await) }
|
||||
let halt = async { Command::Halt(f(ExecHandle(cmd_snd, PhantomData)).await.to_gen().await) }
|
||||
.into_stream();
|
||||
let coro = BuilderCoroutine(Rc::new(BuilderCoroutineData {
|
||||
name: Some(debug.as_ref().to_string()),
|
||||
receiver: Mutex::new(stream::select(halt, cmd_recv).boxed_local()),
|
||||
}));
|
||||
coro.run().await
|
||||
@@ -93,12 +81,12 @@ pub struct ExecHandle<'a>(Sender<Command>, PhantomData<&'a ()>);
|
||||
impl ExecHandle<'_> {
|
||||
pub async fn exec<T: TryFromExpr>(&mut self, val: impl ToExpr) -> OrcRes<T> {
|
||||
let (reply_snd, mut reply_recv) = channel(1);
|
||||
self.0.send(Command::Execute(val.to_expr().await, reply_snd)).await.expect(WEIRD_DROP_ERR);
|
||||
self.0.send(Command::Execute(val.to_gen().await, reply_snd)).await.expect(WEIRD_DROP_ERR);
|
||||
T::try_from_expr(reply_recv.next().await.expect(WEIRD_DROP_ERR)).await
|
||||
}
|
||||
pub async fn register(&mut self, val: impl ToExpr) -> Expr {
|
||||
let (reply_snd, mut reply_recv) = channel(1);
|
||||
self.0.send(Command::Register(val.to_expr().await, reply_snd)).await.expect(WEIRD_DROP_ERR);
|
||||
self.0.send(Command::Register(val.to_gen().await, reply_snd)).await.expect(WEIRD_DROP_ERR);
|
||||
reply_recv.next().await.expect(WEIRD_DROP_ERR)
|
||||
}
|
||||
}
|
||||
|
||||
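For orientation, a hypothetical call site of the reworked exec above, which no longer takes a debug label and converts its result through to_gen. Every name used here (MyInt, add_const) is an assumption for illustration, so the snippet is left as a comment rather than real code:

// Hypothetical usage sketch, not part of this diff:
// let result: GExpr = exec(async |mut h: ExecHandle<'_>| {
//     // Ask the host to evaluate a sub-expression and hand the value back.
//     let n: MyInt = h.exec(call(sym_ref(add_const.clone()), [atom(MyInt(1)), atom(MyInt(2))])).await?;
//     Ok(atom(MyInt(n.0 + 1)))
// }).await;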
@@ -1,421 +1,435 @@
|
||||
use std::cell::RefCell;
|
||||
use std::future::Future;
|
||||
use std::mem;
|
||||
use std::num::NonZero;
|
||||
use std::pin::Pin;
|
||||
use std::rc::Rc;
|
||||
use std::{io, mem};
|
||||
|
||||
use async_lock::RwLock;
|
||||
use futures::channel::mpsc::{Receiver, Sender, channel};
|
||||
use futures::future::{LocalBoxFuture, join_all};
|
||||
use futures::lock::Mutex;
|
||||
use futures::{FutureExt, SinkExt, StreamExt, stream, stream_select};
|
||||
use futures::{AsyncRead, AsyncWrite, AsyncWriteExt, StreamExt, stream};
|
||||
use hashbrown::HashMap;
|
||||
use itertools::Itertools;
|
||||
use orchid_api_traits::{Decode, UnderRoot, enc_vec};
|
||||
use orchid_base::builtin::{ExtInit, ExtPort, Spawner};
|
||||
use orchid_api::{ExtHostNotif, ExtHostReq};
|
||||
use orchid_api_traits::{Decode, Encode, Request, UnderRoot, enc_vec};
|
||||
use orchid_base::char_filter::{char_filter_match, char_filter_union, mk_char_filter};
|
||||
use orchid_base::clone;
|
||||
use orchid_base::error::Reporter;
|
||||
use orchid_base::interner::{Interner, Tok};
|
||||
use orchid_base::logging::Logger;
|
||||
use orchid_base::error::try_with_reporter;
|
||||
use orchid_base::interner::{es, is, with_interner};
|
||||
use orchid_base::logging::{log, with_logger};
|
||||
use orchid_base::name::Sym;
|
||||
use orchid_base::parse::{Comment, Snippet};
|
||||
use orchid_base::reqnot::{ReqNot, RequestHandle, Requester};
|
||||
use orchid_base::reqnot::{
|
||||
Client, ClientExt, CommCtx, MsgReader, MsgReaderExt, Receipt, RepWriter, ReqHandle, ReqHandleExt,
|
||||
ReqReader, ReqReaderExt, Witness, io_comm,
|
||||
};
|
||||
use orchid_base::stash::with_stash;
|
||||
use orchid_base::tree::{TokenVariant, ttv_from_api};
|
||||
use substack::Substack;
|
||||
use trait_set::trait_set;
|
||||
use task_local::task_local;
|
||||
|
||||
use crate::api;
|
||||
use crate::atom::{AtomCtx, AtomDynfo, AtomTypeId};
|
||||
use crate::atom_owned::take_atom;
|
||||
use crate::atom::{AtomCtx, AtomTypeId, resolve_atom_type};
|
||||
use crate::atom_owned::{take_atom, with_obj_store};
|
||||
use crate::expr::{BorrowedExprStore, Expr, ExprHandle};
|
||||
use crate::ext_port::ExtPort;
|
||||
use crate::func_atom::with_funs_ctx;
|
||||
use crate::interner::new_interner;
|
||||
use crate::lexer::{LexContext, ekey_cascade, ekey_not_applicable};
|
||||
use crate::parser::{PTokTree, ParsCtx, get_const, linev_into_api};
|
||||
use crate::system::{SysCtx, atom_by_idx};
|
||||
use crate::system_ctor::{CtedObj, DynSystemCtor};
|
||||
use crate::tree::{LazyMemberFactory, TreeIntoApiCtxImpl};
|
||||
use crate::logger::LoggerImpl;
|
||||
use crate::parser::{PTokTree, ParsCtx, get_const, linev_into_api, with_parsed_const_ctx};
|
||||
use crate::reflection::with_refl_roots;
|
||||
use crate::system::{SysCtx, atom_by_idx, cted, with_sys};
|
||||
use crate::system_ctor::{CtedObj, DynSystemCtor, SystemCtor};
|
||||
use crate::tree::{TreeIntoApiCtxImpl, get_lazy, with_lazy_member_store};
|
||||
|
||||
pub type ExtReq<'a> = RequestHandle<'a, api::ExtMsgSet>;
|
||||
pub type ExtReqNot = ReqNot<api::ExtMsgSet>;
|
||||
|
||||
pub struct ExtensionData {
|
||||
pub name: &'static str,
|
||||
pub systems: &'static [&'static dyn DynSystemCtor],
|
||||
task_local::task_local! {
|
||||
static CLIENT: Rc<dyn Client>;
|
||||
static CTX: Rc<RefCell<Option<CommCtx>>>;
|
||||
}
|
||||
impl ExtensionData {
|
||||
pub fn new(name: &'static str, systems: &'static [&'static dyn DynSystemCtor]) -> Self {
|
||||
Self { name, systems }
|
||||
|
||||
fn get_client() -> Rc<dyn Client> { CLIENT.get() }
|
||||
pub async fn exit() {
|
||||
let cx = CTX.get().borrow_mut().take();
|
||||
cx.unwrap().exit().await
|
||||
}
|
||||
|
||||
/// Set the client used for global [request] and [notify] functions within the
/// Set the client used for global [request] and [notify] functions within the
|
||||
/// runtime of this future
|
||||
pub async fn with_comm<F: Future>(c: Rc<dyn Client>, ctx: CommCtx, fut: F) -> F::Output {
|
||||
CLIENT.scope(c, CTX.scope(Rc::new(RefCell::new(Some(ctx))), fut)).await
|
||||
}
|
||||
|
||||
task_local! {
|
||||
pub static MUTE_REPLY: ();
|
||||
}
|
||||
|
||||
/// Send a request through the global client's [ClientExt::request]
|
||||
pub async fn request<T: Request + UnderRoot<Root = ExtHostReq>>(t: T) -> T::Response {
|
||||
let response = get_client().request(t).await.unwrap();
|
||||
if MUTE_REPLY.try_with(|b| *b).is_err() {
|
||||
writeln!(log("msg"), "Got response {response:?}").await;
|
||||
}
|
||||
response
|
||||
}
|
||||
|
||||
pub enum MemberRecord {
|
||||
Gen(Vec<Tok<String>>, LazyMemberFactory),
|
||||
Res,
|
||||
/// Send a notification through the global client's [ClientExt::notify]
|
||||
pub async fn notify<T: UnderRoot<Root = ExtHostNotif>>(t: T) {
|
||||
get_client().notify(t).await.unwrap()
|
||||
}
|
||||
|
||||
pub struct SystemRecord {
|
||||
lazy_members: Mutex<HashMap<api::TreeId, MemberRecord>>,
|
||||
ctx: SysCtx,
|
||||
cted: CtedObj,
|
||||
}
|
||||
|
||||
trait_set! {
|
||||
pub trait WithAtomRecordCallback<'a, T> = AsyncFnOnce(
|
||||
Box<dyn AtomDynfo>,
|
||||
SysCtx,
|
||||
AtomTypeId,
|
||||
&'a [u8]
|
||||
) -> T
|
||||
type SystemTable = RefCell<HashMap<api::SysId, Rc<SystemRecord>>>;
|
||||
|
||||
task_local! {
|
||||
static SYSTEM_TABLE: SystemTable;
|
||||
}
|
||||
|
||||
pub async fn with_atom_record<'a, F: Future<Output = SysCtx>, T>(
|
||||
get_sys_ctx: &impl Fn(api::SysId) -> F,
|
||||
atom: &'a api::Atom,
|
||||
cb: impl WithAtomRecordCallback<'a, T>,
|
||||
) -> T {
|
||||
let mut data = &atom.data.0[..];
|
||||
let ctx = get_sys_ctx(atom.owner).await;
|
||||
let inst = ctx.get::<CtedObj>().inst();
|
||||
let id = AtomTypeId::decode(Pin::new(&mut data)).await;
|
||||
let atom_record = atom_by_idx(inst.card(), id.clone()).expect("Atom ID reserved");
|
||||
cb(atom_record, ctx, id, data).await
|
||||
async fn with_sys_record<F: Future>(id: api::SysId, fut: F) -> F::Output {
|
||||
let cted = SYSTEM_TABLE.with(|tbl| tbl.borrow().get(&id).expect("Invalid sys ID").cted.clone());
|
||||
with_sys(SysCtx(id, cted), fut).await
|
||||
}
|
||||
|
||||
pub struct ExtensionOwner {
|
||||
_interner_cell: Rc<RefCell<Option<Interner>>>,
|
||||
_systems_lock: Rc<RwLock<HashMap<api::SysId, SystemRecord>>>,
|
||||
out_recv: Mutex<Receiver<Vec<u8>>>,
|
||||
out_send: Sender<Vec<u8>>,
|
||||
pub trait ContextModifier: 'static {
|
||||
fn apply<'a>(self: Box<Self>, fut: LocalBoxFuture<'a, ()>) -> LocalBoxFuture<'a, ()>;
|
||||
}
|
||||
|
||||
impl ExtPort for ExtensionOwner {
|
||||
fn send<'a>(&'a self, msg: &'a [u8]) -> LocalBoxFuture<'a, ()> {
|
||||
Box::pin(async { self.out_send.clone().send(msg.to_vec()).boxed_local().await.unwrap() })
|
||||
}
|
||||
fn recv(&self) -> LocalBoxFuture<'_, Option<Vec<u8>>> {
|
||||
Box::pin(async { self.out_recv.lock().await.next().await })
|
||||
impl<F: AsyncFnOnce(LocalBoxFuture<'_, ()>) + 'static> ContextModifier for F {
|
||||
fn apply<'a>(self: Box<Self>, fut: LocalBoxFuture<'a, ()>) -> LocalBoxFuture<'a, ()> {
|
||||
Box::pin((self)(fut))
|
||||
}
|
||||
}
|
||||
|
||||
pub fn extension_init(
|
||||
data: ExtensionData,
|
||||
host_header: api::HostHeader,
|
||||
spawner: Spawner,
|
||||
) -> ExtInit {
|
||||
let api::HostHeader { log_strategy, msg_logs } = host_header;
|
||||
let decls = (data.systems.iter().enumerate())
|
||||
.map(|(id, sys)| (u16::try_from(id).expect("more than u16max system ctors"), sys))
|
||||
.map(|(id, sys)| sys.decl(api::SysDeclId(NonZero::new(id + 1).unwrap())))
|
||||
.collect_vec();
|
||||
let systems_lock = Rc::new(RwLock::new(HashMap::<api::SysId, SystemRecord>::new()));
|
||||
let ext_header = api::ExtensionHeader { name: data.name.to_string(), systems: decls.clone() };
|
||||
let (out_send, in_recv) = channel::<Vec<u8>>(1);
|
||||
let (in_send, out_recv) = channel::<Vec<u8>>(1);
|
||||
let (exit_send, exit_recv) = channel(1);
|
||||
let logger = Logger::new(log_strategy);
|
||||
let msg_logger = Logger::new(msg_logs);
|
||||
let interner_cell = Rc::new(RefCell::new(None::<Interner>));
|
||||
let interner_weak = Rc::downgrade(&interner_cell);
|
||||
let systems_weak = Rc::downgrade(&systems_lock);
|
||||
let get_ctx = clone!(systems_weak; move |id: api::SysId| clone!(systems_weak; async move {
|
||||
let systems =
|
||||
systems_weak.upgrade().expect("System table dropped before request processing done");
|
||||
systems.read().await.get(&id).expect("System not found").ctx.clone()
|
||||
}));
|
||||
let init_ctx = {
|
||||
clone!(interner_weak, spawner, logger);
|
||||
move |id: api::SysId, cted: CtedObj, reqnot: ReqNot<api::ExtMsgSet>| {
|
||||
clone!(interner_weak, spawner, logger; async move {
|
||||
let interner_rc =
|
||||
interner_weak.upgrade().expect("System construction order while shutting down");
|
||||
let i = interner_rc.borrow().clone().expect("mk_ctx called very early, no interner!");
|
||||
SysCtx::new(id, i, reqnot, spawner, logger, cted)
|
||||
})
|
||||
}
|
||||
};
|
||||
let rn = ReqNot::<api::ExtMsgSet>::new(
|
||||
msg_logger.clone(),
|
||||
move |a, _| {
|
||||
clone!(in_send mut);
|
||||
Box::pin(async move { in_send.send(a.to_vec()).await.unwrap() })
|
||||
},
|
||||
{
|
||||
clone!(exit_send);
|
||||
move |n, _| {
|
||||
clone!(exit_send mut);
|
||||
async move {
|
||||
match n {
|
||||
api::HostExtNotif::Exit => {
|
||||
eprintln!("Exit received");
|
||||
exit_send.send(()).await.unwrap()
|
||||
},
|
||||
pub struct ExtensionBuilder {
|
||||
pub name: &'static str,
|
||||
pub systems: Vec<Box<dyn DynSystemCtor>>,
|
||||
pub context: Vec<Box<dyn ContextModifier>>,
|
||||
}
|
||||
impl ExtensionBuilder {
|
||||
pub fn new(name: &'static str) -> Self { Self { name, systems: Vec::new(), context: Vec::new() } }
|
||||
pub fn system(mut self, ctor: impl SystemCtor) -> Self {
|
||||
self.systems.push(Box::new(ctor) as Box<_>);
|
||||
self
|
||||
}
|
||||
pub fn add_context(&mut self, fun: impl ContextModifier) {
|
||||
self.context.push(Box::new(fun) as Box<_>);
|
||||
}
|
||||
pub fn context(mut self, fun: impl ContextModifier) -> Self {
|
||||
self.add_context(fun);
|
||||
self
|
||||
}
|
||||
pub fn build(mut self, mut ctx: ExtPort) {
|
||||
self.add_context(with_funs_ctx);
|
||||
self.add_context(with_parsed_const_ctx);
|
||||
self.add_context(with_obj_store);
|
||||
self.add_context(with_lazy_member_store);
|
||||
self.add_context(with_refl_roots);
|
||||
(ctx.spawn)(Box::pin(async move {
|
||||
let host_header = api::HostHeader::decode(ctx.input.as_mut()).await.unwrap();
|
||||
let decls = (self.systems.iter().enumerate())
|
||||
.map(|(id, sys)| (u16::try_from(id).expect("more than u16max system ctors"), sys))
|
||||
.map(|(id, sys)| sys.decl(api::SysDeclId(NonZero::new(id + 1).unwrap())))
|
||||
.collect_vec();
|
||||
api::ExtensionHeader { name: self.name.to_string(), systems: decls.clone() }
|
||||
.encode(ctx.output.as_mut())
|
||||
.await
|
||||
.unwrap();
|
||||
ctx.output.as_mut().flush().await.unwrap();
|
||||
let logger1 = LoggerImpl::from_api(&host_header.logger);
|
||||
let logger2 = logger1.clone();
|
||||
let (client, comm_ctx, extension_srv) =
|
||||
io_comm(Rc::new(Mutex::new(ctx.output)), Mutex::new(ctx.input));
|
||||
let extension_fut = extension_srv.listen(
|
||||
async |n: Box<dyn MsgReader<'_>>| {
|
||||
let notif = n.read().await.unwrap();
|
||||
match notif {
|
||||
api::HostExtNotif::Exit => exit().await,
|
||||
}
|
||||
}
|
||||
.boxed_local()
|
||||
}
|
||||
},
|
||||
{
|
||||
clone!(logger, get_ctx, init_ctx, systems_weak, interner_weak, decls, msg_logger);
|
||||
move |hand, req| {
|
||||
clone!(logger, get_ctx, init_ctx, systems_weak, interner_weak, decls, msg_logger);
|
||||
async move {
|
||||
let interner_cell = interner_weak.upgrade().expect("Interner dropped before request");
|
||||
let i = interner_cell.borrow().clone().expect("Request arrived before interner set");
|
||||
if !matches!(req, api::HostExtReq::AtomReq(api::AtomReq::AtomPrint(_))) {
|
||||
writeln!(msg_logger, "{} extension received request {req:?}", data.name);
|
||||
}
|
||||
|
||||
match req {
|
||||
api::HostExtReq::SystemDrop(sys_drop) => {
|
||||
if let Some(rc) = systems_weak.upgrade() {
|
||||
mem::drop(rc.write().await.remove(&sys_drop.0))
|
||||
}
|
||||
hand.handle(&sys_drop, &()).await
|
||||
},
|
||||
api::HostExtReq::AtomDrop(atom_drop @ api::AtomDrop(sys_id, atom)) => {
|
||||
let ctx = get_ctx(sys_id).await;
|
||||
take_atom(atom, &ctx).await.dyn_free(ctx.clone()).await;
|
||||
hand.handle(&atom_drop, &()).await
|
||||
},
|
||||
api::HostExtReq::Ping(ping @ api::Ping) => hand.handle(&ping, &()).await,
|
||||
api::HostExtReq::Sweep(sweep @ api::Sweep) =>
|
||||
hand.handle(&sweep, &i.sweep_replica().await).await,
|
||||
api::HostExtReq::SysReq(api::SysReq::NewSystem(new_sys)) => {
|
||||
let (sys_id, _) = (decls.iter().enumerate().find(|(_, s)| s.id == new_sys.system))
|
||||
.expect("NewSystem call received for invalid system");
|
||||
let cted = data.systems[sys_id].new_system(&new_sys);
|
||||
let lex_filter =
|
||||
cted.inst().dyn_lexers().iter().fold(api::CharFilter(vec![]), |cf, lx| {
|
||||
char_filter_union(&cf, &mk_char_filter(lx.char_filter().iter().cloned()))
|
||||
});
|
||||
let lazy_members = Mutex::new(HashMap::new());
|
||||
let ctx = init_ctx(new_sys.id, cted.clone(), hand.reqnot()).await;
|
||||
let const_root = stream::iter(cted.inst().dyn_env())
|
||||
.then(|mem| {
|
||||
let lazy_mems = &lazy_members;
|
||||
clone!(i, ctx; async move {
|
||||
let mut tia_ctx = TreeIntoApiCtxImpl {
|
||||
lazy_members: &mut *lazy_mems.lock().await,
|
||||
sys: ctx,
|
||||
basepath: &[],
|
||||
path: Substack::Bottom,
|
||||
};
|
||||
(i.i(&mem.name).await.to_api(), mem.kind.into_api(&mut tia_ctx).await)
|
||||
})
|
||||
Ok(())
|
||||
},
|
||||
async |mut reader| {
|
||||
with_stash(async {
|
||||
let req = reader.read_req().await.unwrap();
|
||||
let handle = reader.finish().await;
|
||||
// Atom printing is never reported because it generates too much
|
||||
// noise
|
||||
if !matches!(req, api::HostExtReq::AtomReq(api::AtomReq::AtomPrint(_))) {
|
||||
writeln!(log("msg"), "{} extension received request {req:?}", self.name).await;
|
||||
}
|
||||
match req {
|
||||
api::HostExtReq::SystemDrop(sys_drop) => {
|
||||
SYSTEM_TABLE.with(|l| l.borrow_mut().remove(&sys_drop.0));
|
||||
handle.reply(&sys_drop, &()).await
|
||||
},
|
||||
api::HostExtReq::AtomDrop(atom_drop @ api::AtomDrop(sys_id, atom)) =>
|
||||
with_sys_record(sys_id, async {
|
||||
take_atom(atom).await.dyn_free().await;
|
||||
handle.reply(&atom_drop, &()).await
|
||||
})
|
||||
.collect()
|
||||
.await;
|
||||
let prelude =
|
||||
cted.inst().dyn_prelude(&i).await.iter().map(|sym| sym.to_api()).collect();
|
||||
let record = SystemRecord { ctx, lazy_members };
|
||||
let systems = systems_weak.upgrade().expect("System constructed during shutdown");
|
||||
systems.write().await.insert(new_sys.id, record);
|
||||
let line_types = join_all(
|
||||
(cted.inst().dyn_parsers().iter())
|
||||
.map(|p| async { i.i(p.line_head()).await.to_api() }),
|
||||
)
|
||||
.await;
|
||||
let response = api::NewSystemResponse { lex_filter, const_root, line_types, prelude };
|
||||
hand.handle(&new_sys, &response).await
|
||||
},
|
||||
api::HostExtReq::GetMember(get_tree @ api::GetMember(sys_id, tree_id)) => {
|
||||
let sys_ctx = get_ctx(sys_id).await;
|
||||
let systems = systems_weak.upgrade().expect("Member queried during shutdown");
|
||||
let systems_g = systems.read().await;
|
||||
let mut lazy_members =
|
||||
systems_g.get(&sys_id).expect("System not found").lazy_members.lock().await;
|
||||
let (path, cb) = match lazy_members.insert(tree_id, MemberRecord::Res) {
|
||||
None => panic!("Tree for ID not found"),
|
||||
Some(MemberRecord::Res) => panic!("This tree has already been transmitted"),
|
||||
Some(MemberRecord::Gen(path, cb)) => (path, cb),
|
||||
};
|
||||
let tree = cb.build(Sym::new(path.clone(), &i).await.unwrap(), sys_ctx.clone()).await;
|
||||
let mut tia_ctx = TreeIntoApiCtxImpl {
|
||||
sys: sys_ctx,
|
||||
path: Substack::Bottom,
|
||||
basepath: &path,
|
||||
lazy_members: &mut lazy_members,
|
||||
};
|
||||
hand.handle(&get_tree, &tree.into_api(&mut tia_ctx).await).await
|
||||
},
|
||||
api::HostExtReq::SysReq(api::SysReq::SysFwded(fwd)) => {
|
||||
let api::SysFwded(sys_id, payload) = fwd;
|
||||
let ctx = get_ctx(sys_id).await;
|
||||
let sys = ctx.cted().inst();
|
||||
sys.dyn_request(hand, payload).await
|
||||
},
|
||||
api::HostExtReq::LexExpr(lex @ api::LexExpr { sys, src, text, pos, id }) => {
|
||||
let mut sys_ctx = get_ctx(sys).await;
|
||||
let text = Tok::from_api(text, &i).await;
|
||||
let src = Sym::from_api(src, sys_ctx.i()).await;
|
||||
let rep = Reporter::new();
|
||||
let expr_store = BorrowedExprStore::new();
|
||||
let trigger_char = text.chars().nth(pos as usize).unwrap();
|
||||
let ekey_na = ekey_not_applicable(&i).await;
|
||||
let ekey_cascade = ekey_cascade(&i).await;
|
||||
let lexers = sys_ctx.cted().inst().dyn_lexers();
|
||||
for lx in lexers.iter().filter(|l| char_filter_match(l.char_filter(), trigger_char)) {
|
||||
let ctx = LexContext {
|
||||
id,
|
||||
pos,
|
||||
text: &text,
|
||||
src: src.clone(),
|
||||
ctx: sys_ctx.clone(),
|
||||
rep: &rep,
|
||||
exprs: &expr_store,
|
||||
};
|
||||
match lx.lex(&text[pos as usize..], &ctx).await {
|
||||
Err(e) if e.any(|e| *e == ekey_na) => continue,
|
||||
Err(e) => {
|
||||
let eopt = e.keep_only(|e| *e != ekey_cascade).map(|e| Err(e.to_api()));
|
||||
expr_store.dispose().await;
|
||||
return hand.handle(&lex, &eopt).await;
|
||||
},
|
||||
Ok((s, expr)) => {
|
||||
let expr = expr.into_api(&mut (), &mut sys_ctx).await;
|
||||
let pos = (text.len() - s.len()) as u32;
|
||||
expr_store.dispose().await;
|
||||
return hand.handle(&lex, &Some(Ok(api::LexedExpr { pos, expr }))).await;
|
||||
},
|
||||
}
|
||||
}
|
||||
writeln!(logger, "Got notified about n/a character '{trigger_char}'");
|
||||
expr_store.dispose().await;
|
||||
hand.handle(&lex, &None).await
|
||||
},
|
||||
api::HostExtReq::ParseLine(pline) => {
|
||||
let api::ParseLine { module, src, exported, comments, sys, line, idx } = &pline;
|
||||
let ctx = get_ctx(*sys).await;
|
||||
let parsers = ctx.cted().inst().dyn_parsers();
|
||||
let src = Sym::from_api(*src, ctx.i()).await;
|
||||
let comments =
|
||||
join_all(comments.iter().map(|c| Comment::from_api(c, src.clone(), &i))).await;
|
||||
let expr_store = BorrowedExprStore::new();
|
||||
let mut from_api_ctx = (ctx.clone(), &expr_store);
|
||||
let line: Vec<PTokTree> =
|
||||
ttv_from_api(line, &mut from_api_ctx, &mut (), &src, &i).await;
|
||||
let snip = Snippet::new(line.first().expect("Empty line"), &line);
|
||||
let parser = parsers[*idx as usize];
|
||||
let module = Sym::from_api(*module, ctx.i()).await;
|
||||
let reporter = Reporter::new();
|
||||
let pctx = ParsCtx::new(ctx.clone(), module, &reporter);
|
||||
let parse_res = parser.parse(pctx, *exported, comments, snip).await;
|
||||
let o_line = match reporter.merge(parse_res) {
|
||||
Err(e) => Err(e.to_api()),
|
||||
Ok(t) => Ok(linev_into_api(t, ctx.clone()).await),
|
||||
};
|
||||
mem::drop(line);
|
||||
expr_store.dispose().await;
|
||||
hand.handle(&pline, &o_line).await
|
||||
},
|
||||
api::HostExtReq::FetchParsedConst(ref fpc @ api::FetchParsedConst(sys, id)) => {
|
||||
let ctx = get_ctx(sys).await;
|
||||
let cnst = get_const(id, ctx.clone()).await;
|
||||
hand.handle(fpc, &cnst.api_return(ctx).await).await
|
||||
},
|
||||
api::HostExtReq::AtomReq(atom_req) => {
|
||||
let atom = atom_req.get_atom();
|
||||
let atom_req = atom_req.clone();
|
||||
with_atom_record(&get_ctx, atom, async move |nfo, ctx, id, buf| {
|
||||
let actx = AtomCtx(buf, atom.drop, ctx.clone());
|
||||
match &atom_req {
|
||||
api::AtomReq::SerializeAtom(ser) => {
|
||||
let mut buf = enc_vec(&id).await;
|
||||
match nfo.serialize(actx, Pin::<&mut Vec<_>>::new(&mut buf)).await {
|
||||
None => hand.handle(ser, &None).await,
|
||||
Some(refs) => {
|
||||
let refs =
|
||||
join_all(refs.into_iter().map(|ex| async { ex.into_api(&mut ()).await }))
|
||||
.await;
|
||||
hand.handle(ser, &Some((buf, refs))).await
|
||||
.await,
|
||||
api::HostExtReq::Ping(ping @ api::Ping) => handle.reply(&ping, &()).await,
|
||||
api::HostExtReq::Sweep(api::Sweep) => todo!(),
|
||||
api::HostExtReq::SysReq(api::SysReq::NewSystem(new_sys)) => {
|
||||
let (ctor_idx, _) =
|
||||
(decls.iter().enumerate().find(|(_, s)| s.id == new_sys.system))
|
||||
.expect("NewSystem call received for invalid system");
|
||||
let cted = self.systems[ctor_idx].new_system(&new_sys);
|
||||
let record = Rc::new(SystemRecord { cted: cted.clone() });
|
||||
SYSTEM_TABLE.with(|tbl| {
|
||||
let mut g = tbl.borrow_mut();
|
||||
g.insert(new_sys.id, record);
|
||||
});
|
||||
with_sys_record(new_sys.id, async {
|
||||
let lex_filter =
|
||||
cted.inst().dyn_lexers().iter().fold(api::CharFilter(vec![]), |cf, lx| {
|
||||
char_filter_union(&cf, &mk_char_filter(lx.char_filter().iter().cloned()))
|
||||
});
|
||||
let const_root = stream::iter(cted.inst().dyn_env().await)
|
||||
.then(async |mem| {
|
||||
let name = is(&mem.name).await;
|
||||
let mut tia_ctx = TreeIntoApiCtxImpl {
|
||||
basepath: &[],
|
||||
path: Substack::Bottom.push(name.clone()),
|
||||
};
|
||||
(name.to_api(), mem.kind.into_api(&mut tia_ctx).await)
|
||||
})
|
||||
.collect()
|
||||
.await;
|
||||
let prelude =
|
||||
cted.inst().dyn_prelude().await.iter().map(|sym| sym.to_api()).collect();
|
||||
let line_types = join_all(
|
||||
(cted.inst().dyn_parsers().iter())
|
||||
.map(async |p| is(p.line_head()).await.to_api()),
|
||||
)
|
||||
.await;
|
||||
let response =
|
||||
api::NewSystemResponse { lex_filter, const_root, line_types, prelude };
|
||||
handle.reply(&new_sys, &response).await
|
||||
})
|
||||
.await
|
||||
},
|
||||
api::HostExtReq::GetMember(get_tree @ api::GetMember(sys_id, tree_id)) =>
|
||||
with_sys_record(sys_id, async {
|
||||
let (path, tree) = get_lazy(tree_id).await;
|
||||
let mut tia_ctx =
|
||||
TreeIntoApiCtxImpl { path: Substack::Bottom, basepath: &path[..] };
|
||||
handle.reply(&get_tree, &tree.into_api(&mut tia_ctx).await).await
|
||||
})
|
||||
.await,
|
||||
api::HostExtReq::SysReq(api::SysReq::SysFwded(fwd)) => {
|
||||
let fwd_tok = Witness::of(&fwd);
|
||||
let api::SysFwded(sys_id, payload) = fwd;
|
||||
with_sys_record(sys_id, async {
|
||||
struct TrivialReqCycle<'a> {
|
||||
req: &'a [u8],
|
||||
rep: &'a mut Vec<u8>,
|
||||
}
|
||||
impl<'a> ReqReader<'a> for TrivialReqCycle<'a> {
|
||||
fn reader(&mut self) -> Pin<&mut dyn AsyncRead> {
|
||||
Pin::new(&mut self.req) as Pin<&mut _>
|
||||
}
|
||||
fn finish(self: Box<Self>) -> LocalBoxFuture<'a, Box<dyn ReqHandle<'a> + 'a>> {
|
||||
Box::pin(async { self as Box<_> })
|
||||
}
|
||||
}
|
||||
impl<'a> ReqHandle<'a> for TrivialReqCycle<'a> {
|
||||
fn start_reply(
|
||||
self: Box<Self>,
|
||||
) -> LocalBoxFuture<'a, io::Result<Box<dyn RepWriter<'a> + 'a>>> {
|
||||
Box::pin(async { Ok(self as Box<_>) })
|
||||
}
|
||||
}
|
||||
impl<'a> RepWriter<'a> for TrivialReqCycle<'a> {
|
||||
fn writer(&mut self) -> Pin<&mut dyn AsyncWrite> {
|
||||
Pin::new(&mut self.rep) as Pin<&mut _>
|
||||
}
|
||||
fn finish(
|
||||
self: Box<Self>,
|
||||
) -> LocalBoxFuture<'a, io::Result<orchid_base::reqnot::Receipt<'a>>>
|
||||
{
|
||||
Box::pin(async { Ok(Receipt::_new()) })
|
||||
}
|
||||
}
|
||||
let mut reply = Vec::new();
|
||||
let req = TrivialReqCycle { req: &payload, rep: &mut reply };
|
||||
let _ = cted().inst().dyn_request(Box::new(req)).await;
|
||||
handle.reply(fwd_tok, &reply).await
|
||||
})
|
||||
.await
|
||||
},
|
||||
api::HostExtReq::LexExpr(lex @ api::LexExpr { sys, src, text, pos, id }) =>
|
||||
with_sys_record(sys, async {
|
||||
let text = es(text).await;
|
||||
let src = Sym::from_api(src).await;
|
||||
let expr_store = BorrowedExprStore::new();
|
||||
let trigger_char = text.chars().nth(pos as usize).unwrap();
|
||||
let ekey_na = ekey_not_applicable().await;
|
||||
let ekey_cascade = ekey_cascade().await;
|
||||
let lexers = cted().inst().dyn_lexers();
|
||||
for lx in
|
||||
lexers.iter().filter(|l| char_filter_match(l.char_filter(), trigger_char))
|
||||
{
|
||||
let ctx = LexContext::new(&expr_store, &text, id, pos, src.clone());
|
||||
match try_with_reporter(lx.lex(&text[pos as usize..], &ctx)).await {
|
||||
Err(e) if e.any(|e| *e == ekey_na) => continue,
|
||||
Err(e) => {
|
||||
let eopt = e.keep_only(|e| *e != ekey_cascade).map(|e| Err(e.to_api()));
|
||||
expr_store.dispose().await;
|
||||
return handle.reply(&lex, &eopt).await;
|
||||
},
|
||||
Ok((s, expr)) => {
|
||||
let expr = expr.into_api(&mut (), &mut ()).await;
|
||||
let pos = (text.len() - s.len()) as u32;
|
||||
expr_store.dispose().await;
|
||||
return handle.reply(&lex, &Some(Ok(api::LexedExpr { pos, expr }))).await;
|
||||
},
|
||||
}
|
||||
},
|
||||
api::AtomReq::AtomPrint(print @ api::AtomPrint(_)) =>
|
||||
hand.handle(print, &nfo.print(actx).await.to_api()).await,
|
||||
api::AtomReq::Fwded(fwded) => {
|
||||
let api::Fwded(_, key, payload) = &fwded;
|
||||
let mut reply = Vec::new();
|
||||
let key = Sym::from_api(*key, &i).await;
|
||||
let some = nfo
|
||||
.handle_req(
|
||||
actx,
|
||||
key,
|
||||
Pin::<&mut &[u8]>::new(&mut &payload[..]),
|
||||
Pin::<&mut Vec<_>>::new(&mut reply),
|
||||
)
|
||||
.await;
|
||||
hand.handle(fwded, &some.then_some(reply)).await
|
||||
},
|
||||
api::AtomReq::CallRef(call @ api::CallRef(_, arg)) => {
|
||||
let expr_store = BorrowedExprStore::new();
|
||||
let expr_handle = ExprHandle::borrowed(ctx.clone(), *arg, &expr_store);
|
||||
let ret = nfo.call_ref(actx, Expr::from_handle(expr_handle.clone())).await;
|
||||
let api_expr = ret.api_return(ctx.clone()).await;
|
||||
mem::drop(expr_handle);
|
||||
expr_store.dispose().await;
|
||||
hand.handle(call, &api_expr).await
|
||||
},
|
||||
api::AtomReq::FinalCall(call @ api::FinalCall(_, arg)) => {
|
||||
let expr_store = BorrowedExprStore::new();
|
||||
let expr_handle = ExprHandle::borrowed(ctx.clone(), *arg, &expr_store);
|
||||
let ret = nfo.call(actx, Expr::from_handle(expr_handle.clone())).await;
|
||||
let api_expr = ret.api_return(ctx.clone()).await;
|
||||
mem::drop(expr_handle);
|
||||
expr_store.dispose().await;
|
||||
hand.handle(call, &api_expr).await
|
||||
},
|
||||
api::AtomReq::Command(cmd @ api::Command(_)) => match nfo.command(actx).await {
|
||||
Err(e) => hand.handle(cmd, &Err(e.to_api())).await,
|
||||
Ok(opt) => match opt {
|
||||
None => hand.handle(cmd, &Ok(api::NextStep::Halt)).await,
|
||||
Some(cont) => {
|
||||
let cont = cont.api_return(ctx.clone()).await;
|
||||
hand.handle(cmd, &Ok(api::NextStep::Continue(cont))).await
|
||||
}
|
||||
writeln!(log("warn"), "Got notified about n/a character '{trigger_char}'").await;
|
||||
expr_store.dispose().await;
|
||||
handle.reply(&lex, &None).await
|
||||
})
|
||||
.await,
|
||||
api::HostExtReq::ParseLine(pline) => {
|
||||
let api::ParseLine { module, src, exported, comments, sys, line, idx } = &pline;
|
||||
with_sys_record(*sys, async {
|
||||
let parsers = cted().inst().dyn_parsers();
|
||||
let src = Sym::from_api(*src).await;
|
||||
let comments =
|
||||
join_all(comments.iter().map(|c| Comment::from_api(c, src.clone()))).await;
|
||||
let expr_store = BorrowedExprStore::new();
|
||||
let line: Vec<PTokTree> =
|
||||
ttv_from_api(line, &mut &expr_store, &mut (), &src).await;
|
||||
let snip = Snippet::new(line.first().expect("Empty line"), &line);
|
||||
let parser = parsers[*idx as usize];
|
||||
let module = Sym::from_api(*module).await;
|
||||
let pctx = ParsCtx::new(module);
|
||||
let o_line =
|
||||
match try_with_reporter(parser.parse(pctx, *exported, comments, snip)).await {
|
||||
Err(e) => Err(e.to_api()),
|
||||
Ok(t) => Ok(linev_into_api(t).await),
|
||||
};
|
||||
mem::drop(line);
|
||||
expr_store.dispose().await;
|
||||
handle.reply(&pline, &o_line).await
|
||||
})
|
||||
.await
|
||||
},
|
||||
api::HostExtReq::FetchParsedConst(ref fpc @ api::FetchParsedConst(sys, id)) =>
|
||||
with_sys_record(sys, async {
|
||||
let cnst = get_const(id).await;
|
||||
handle.reply(fpc, &cnst.serialize().await).await
|
||||
})
|
||||
.await,
|
||||
api::HostExtReq::AtomReq(atom_req) => {
|
||||
let atom = atom_req.get_atom();
|
||||
with_sys_record(atom.owner, async {
|
||||
let (nfo, id, buf) = resolve_atom_type(atom);
|
||||
let actx = AtomCtx(buf, atom.drop);
|
||||
match &atom_req {
|
||||
api::AtomReq::SerializeAtom(ser) => {
|
||||
let mut buf = enc_vec(&id);
|
||||
match nfo.serialize(actx, Pin::<&mut Vec<_>>::new(&mut buf)).await {
|
||||
None => handle.reply(ser, &None).await,
|
||||
Some(refs) => {
|
||||
let refs =
|
||||
join_all(refs.into_iter().map(async |ex| ex.into_api(&mut ()).await))
|
||||
.await;
|
||||
handle.reply(ser, &Some((buf, refs))).await
|
||||
},
|
||||
}
|
||||
},
|
||||
api::AtomReq::AtomPrint(print @ api::AtomPrint(_)) =>
|
||||
handle.reply(print, &nfo.print(actx).await.to_api()).await,
|
||||
api::AtomReq::Fwded(fwded) => {
|
||||
let api::Fwded(_, key, payload) = &fwded;
|
||||
let mut reply = Vec::new();
|
||||
let key = Sym::from_api(*key).await;
|
||||
let some = nfo
|
||||
.handle_req(
|
||||
actx,
|
||||
key,
|
||||
Pin::<&mut &[u8]>::new(&mut &payload[..]),
|
||||
Pin::<&mut Vec<_>>::new(&mut reply),
|
||||
)
|
||||
.await;
|
||||
handle.reply(fwded, &some.then_some(reply)).await
|
||||
},
|
||||
api::AtomReq::CallRef(call @ api::CallRef(_, arg)) => {
|
||||
let expr_store = BorrowedExprStore::new();
|
||||
let expr_handle = ExprHandle::borrowed(*arg, &expr_store);
|
||||
let ret = nfo.call_ref(actx, Expr::from_handle(expr_handle.clone())).await;
|
||||
let api_expr = ret.serialize().await;
|
||||
mem::drop(expr_handle);
|
||||
expr_store.dispose().await;
|
||||
handle.reply(call, &api_expr).await
|
||||
},
|
||||
api::AtomReq::FinalCall(call @ api::FinalCall(_, arg)) => {
|
||||
let expr_store = BorrowedExprStore::new();
|
||||
let expr_handle = ExprHandle::borrowed(*arg, &expr_store);
|
||||
let ret = nfo.call(actx, Expr::from_handle(expr_handle.clone())).await;
|
||||
let api_expr = ret.serialize().await;
|
||||
mem::drop(expr_handle);
|
||||
expr_store.dispose().await;
|
||||
handle.reply(call, &api_expr).await
|
||||
},
|
||||
api::AtomReq::Command(cmd @ api::Command(_)) => match nfo.command(actx).await {
|
||||
Err(e) => handle.reply(cmd, &Err(e.to_api())).await,
|
||||
Ok(opt) => match opt {
|
||||
None => handle.reply(cmd, &Ok(api::NextStep::Halt)).await,
|
||||
Some(cont) => {
|
||||
let cont = cont.serialize().await;
|
||||
handle.reply(cmd, &Ok(api::NextStep::Continue(cont))).await
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
})
|
||||
.await
|
||||
},
|
||||
api::HostExtReq::DeserAtom(deser) => {
|
||||
let api::DeserAtom(sys, buf, refs) = &deser;
|
||||
let mut read = &mut &buf[..];
|
||||
let ctx = get_ctx(*sys).await;
|
||||
// SAFETY: deserialization implicitly grants ownership to previously owned exprs
|
||||
let refs = (refs.iter())
|
||||
.map(|tk| Expr::from_handle(ExprHandle::deserialize(ctx.clone(), *tk)))
|
||||
.collect_vec();
|
||||
let id = AtomTypeId::decode(Pin::new(&mut read)).await;
|
||||
let inst = ctx.cted().inst();
|
||||
let nfo = atom_by_idx(inst.card(), id).expect("Deserializing atom with invalid ID");
|
||||
hand.handle(&deser, &nfo.deserialize(ctx.clone(), read, &refs).await).await
|
||||
},
|
||||
}
|
||||
}
|
||||
.boxed_local()
|
||||
}
|
||||
},
|
||||
);
|
||||
*interner_cell.borrow_mut() =
|
||||
Some(Interner::new_replica(rn.clone().map(|ir: api::IntReq| ir.into_root())));
|
||||
spawner(Box::pin(clone!(spawner; async move {
|
||||
let mut streams = stream_select! { in_recv.map(Some), exit_recv.map(|_| None) };
|
||||
while let Some(item) = streams.next().await {
|
||||
match item {
|
||||
Some(rcvd) => spawner(Box::pin(clone!(rn; async move { rn.receive(&rcvd[..]).await }))),
|
||||
None => break,
|
||||
}
|
||||
}
|
||||
})));
|
||||
ExtInit {
|
||||
header: ext_header,
|
||||
port: Box::new(ExtensionOwner {
|
||||
out_recv: Mutex::new(out_recv),
|
||||
out_send,
|
||||
_interner_cell: interner_cell,
|
||||
_systems_lock: systems_lock,
|
||||
}),
|
||||
}
|
||||
})
|
||||
.await
|
||||
},
|
||||
api::HostExtReq::DeserAtom(deser) => {
|
||||
let api::DeserAtom(sys, buf, refs) = &deser;
|
||||
let read = &mut &buf[..];
|
||||
with_sys_record(*sys, async {
|
||||
// SAFETY: deserialization implicitly grants ownership to previously owned exprs
|
||||
let refs = (refs.iter())
|
||||
.map(|tk| Expr::from_handle(ExprHandle::deserialize(*tk)))
|
||||
.collect_vec();
|
||||
let id = AtomTypeId::decode_slice(read);
|
||||
let nfo = atom_by_idx(cted().inst().card(), id)
|
||||
.expect("Deserializing atom with invalid ID");
|
||||
handle.reply(&deser, &nfo.deserialize(read, &refs).await).await
|
||||
})
|
||||
.await
|
||||
},
|
||||
}
|
||||
})
|
||||
.await
|
||||
},
|
||||
);
|
||||
// add essential services to the very tail, then fold all context into the run
|
||||
// future
|
||||
SYSTEM_TABLE
|
||||
.scope(
|
||||
RefCell::default(),
|
||||
with_interner(
|
||||
new_interner(),
|
||||
with_logger(
|
||||
logger2,
|
||||
with_comm(
|
||||
Rc::new(client),
|
||||
comm_ctx,
|
||||
(self.context.into_iter()).fold(
|
||||
Box::pin(async { extension_fut.await.unwrap() }) as LocalBoxFuture<()>,
|
||||
|fut, cx| cx.apply(fut),
|
||||
),
|
||||
),
|
||||
),
|
||||
),
|
||||
)
|
||||
.await;
|
||||
}) as Pin<Box<_>>);
|
||||
}
|
||||
}
|
||||
|
||||
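The build method above stacks task-local scopes by folding every registered ContextModifier over the run future, so the services added at the tail end up outermost. A self-contained sketch of that folding shape, assuming only the futures crate; the scope names A and B are placeholders:

use futures::FutureExt;
use futures::future::LocalBoxFuture;

type Modifier = Box<dyn FnOnce(LocalBoxFuture<'static, ()>) -> LocalBoxFuture<'static, ()>>;

fn main() {
    let modifiers: Vec<Modifier> = vec![
        Box::new(|fut| async move { println!("enter A"); fut.await; println!("exit A"); }.boxed_local()),
        Box::new(|fut| async move { println!("enter B"); fut.await; println!("exit B"); }.boxed_local()),
    ];
    let base: LocalBoxFuture<'static, ()> = async { println!("run extension loop"); }.boxed_local();
    // Modifiers added later wrap the earlier ones, so they become the outermost scopes.
    let wrapped = modifiers.into_iter().fold(base, |fut, m| m(fut));
    futures::executor::block_on(wrapped);
}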
@@ -2,6 +2,7 @@ use std::cell::RefCell;
|
||||
use std::fmt;
|
||||
use std::hash::Hash;
|
||||
use std::rc::Rc;
|
||||
use std::thread::panicking;
|
||||
|
||||
use async_once_cell::OnceCell;
|
||||
use derive_destructure::destructure;
|
||||
@@ -9,12 +10,13 @@ use hashbrown::HashSet;
|
||||
use orchid_base::error::OrcErrv;
|
||||
use orchid_base::format::{FmtCtx, FmtUnit, Format};
|
||||
use orchid_base::location::Pos;
|
||||
use orchid_base::reqnot::Requester;
|
||||
use orchid_base::stash::stash;
|
||||
|
||||
use crate::api;
|
||||
use crate::atom::ForeignAtom;
|
||||
use crate::entrypoint::{notify, request};
|
||||
use crate::gen_expr::{GExpr, GExprKind};
|
||||
use crate::system::SysCtx;
|
||||
use crate::system::sys_id;
|
||||
|
||||
pub struct BorrowedExprStore(RefCell<Option<HashSet<Rc<ExprHandle>>>>);
|
||||
impl BorrowedExprStore {
|
||||
@@ -22,76 +24,79 @@ impl BorrowedExprStore {
|
||||
pub async fn dispose(self) {
|
||||
let elements = self.0.borrow_mut().take().unwrap();
|
||||
for handle in elements {
|
||||
handle.drop_one().await
|
||||
handle.on_borrow_expire().await
|
||||
}
|
||||
}
|
||||
}
|
||||
impl Drop for BorrowedExprStore {
|
||||
fn drop(&mut self) {
|
||||
if self.0.borrow().is_some() {
|
||||
if self.0.borrow().is_some() && !panicking() {
|
||||
panic!("This should always be explicitly disposed")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(destructure)]
|
||||
pub struct ExprHandle {
|
||||
pub tk: api::ExprTicket,
|
||||
pub ctx: SysCtx,
|
||||
}
|
||||
#[derive(destructure, PartialEq, Eq, Hash)]
|
||||
pub struct ExprHandle(api::ExprTicket);
|
||||
impl ExprHandle {
|
||||
/// This function does not signal to take ownership of the expr.
|
||||
pub fn borrowed(ctx: SysCtx, tk: api::ExprTicket, store: &BorrowedExprStore) -> Rc<Self> {
|
||||
let this = Rc::new(Self { ctx, tk });
|
||||
/// Does not signal to take ownership of the expr. Instead, the
|
||||
/// [BorrowedExprStore] signifies the lifetime of the borrow, and when it is
|
||||
/// freed, it signals to take ownership of any exprs that ended up outliving
|
||||
/// it. It is used to receive exprs sent via [ExprHandle::ticket] as an
|
||||
/// optimization over [ExprHandle::from_ticket]
|
||||
pub fn borrowed(tk: api::ExprTicket, store: &BorrowedExprStore) -> Rc<Self> {
|
||||
let this = Rc::new(Self(tk));
|
||||
store.0.borrow_mut().as_mut().unwrap().insert(this.clone());
|
||||
this
|
||||
}
|
||||
pub fn deserialize(ctx: SysCtx, tk: api::ExprTicket) -> Rc<Self> { Rc::new(Self { ctx, tk }) }
|
||||
pub fn get_ctx(&self) -> SysCtx { self.ctx.clone() }
|
||||
/// Drop one instance of the handle silently; if it's the last one, do
|
||||
/// nothing, otherwise send an Acquire
|
||||
pub async fn drop_one(self: Rc<Self>) {
|
||||
match Rc::try_unwrap(self) {
|
||||
Err(rc) => {
|
||||
eprintln!("Extending lifetime for {:?}", rc.tk);
|
||||
rc.ctx.reqnot().notify(api::Acquire(rc.ctx.sys_id(), rc.tk)).await
|
||||
},
|
||||
Ok(hand) => {
|
||||
// avoid calling destructor
|
||||
hand.destructure();
|
||||
},
|
||||
}
|
||||
/// This function takes over the loose reference pre-created via
|
||||
/// [ExprHandle::serialize] in the sender. It must therefore pair up with a
|
||||
/// corresponding call to that function.
|
||||
pub fn deserialize(tk: api::ExprTicket) -> Rc<Self> { Rc::new(Self(tk)) }
|
||||
/// This function takes ownership of a borrowed expr sent via
|
||||
/// [ExprHandle::ticket] and signals immediately to record that ownership. It
|
||||
/// is used in place of [ExprHandle::borrowed] when it's impractical to
|
||||
/// determine how long the borrow will live.
|
||||
///
|
||||
/// # Safety
|
||||
///
|
||||
/// You need to ensure that the [api::Acquire] sent by this function arrives
|
||||
/// before the borrow expires, so you still need a borrow delimited by some
|
||||
/// message you will send in the future.
|
||||
pub async fn from_ticket(tk: api::ExprTicket) -> Rc<Self> {
|
||||
let store = BorrowedExprStore::new();
|
||||
let expr = Self::borrowed(tk, &store);
|
||||
store.dispose().await;
|
||||
expr
|
||||
}
|
||||
/// The raw ticket used in messages. If you want to transfer ownership via the
|
||||
/// ticket, you should use [ExprHandle::serialize]. Only send this if you want
|
||||
/// to lend the expr, and you expect the receiver to use
|
||||
/// [ExprHandle::borrowed] or [ExprHandle::from_ticket]
|
||||
pub fn ticket(&self) -> api::ExprTicket { self.0 }
|
||||
async fn send_acq(&self) { notify(api::Acquire(sys_id(), self.0)).await }
|
||||
/// If this is the last reference, do nothing; otherwise send an Acquire
|
||||
pub async fn on_borrow_expire(self: Rc<Self>) { self.serialize().await; }
|
||||
/// Drop the handle and get the ticket without a release notification.
|
||||
/// Use this with messages that imply ownership transfer. This function is
|
||||
/// safe because abusing it is a memory leak.
|
||||
pub fn serialize(self) -> api::ExprTicket {
|
||||
eprintln!("Skipping destructor for {:?}", self.tk);
|
||||
self.destructure().0
|
||||
}
|
||||
}
|
||||
impl Eq for ExprHandle {}
|
||||
impl PartialEq for ExprHandle {
|
||||
fn eq(&self, other: &Self) -> bool {
|
||||
self.ctx.sys_id() == other.ctx.sys_id() && self.tk == other.tk
|
||||
}
|
||||
}
|
||||
impl Hash for ExprHandle {
|
||||
fn hash<H: std::hash::Hasher>(&self, state: &mut H) {
|
||||
self.ctx.sys_id().hash(state);
|
||||
self.tk.hash(state);
|
||||
pub async fn serialize(self: Rc<Self>) -> api::ExprTicket {
|
||||
match Rc::try_unwrap(self) {
|
||||
Err(rc) => {
|
||||
rc.send_acq().await;
|
||||
rc.0
|
||||
},
|
||||
Ok(hand) => hand.destructure().0,
|
||||
}
|
||||
}
|
||||
}
|
||||
impl fmt::Debug for ExprHandle {
|
||||
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
|
||||
write!(f, "ExprHandle({})", self.tk.0)
|
||||
}
|
||||
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { write!(f, "ExprHandle({})", self.0.0) }
|
||||
}
|
||||
impl Drop for ExprHandle {
|
||||
fn drop(&mut self) {
|
||||
let notif = api::Release(self.ctx.sys_id(), self.tk);
|
||||
let reqnot = self.ctx.reqnot().clone();
|
||||
self.ctx.spawner()(Box::pin(async move { reqnot.notify(notif).await }))
|
||||
let notif = api::Release(sys_id(), self.0);
|
||||
stash(async move { notify(notif).await })
|
||||
}
|
||||
}
|
||||
|
||||
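The doc comments above describe a small cross-process refcount protocol: lending via ticket(), transferring via serialize() (which skips the Release for the consumed reference and sends an Acquire only if local clones remain), and Drop sending a Release. A self-contained toy of that rule; the formatted strings stand in for the api::Acquire and api::Release notifications, and mem::forget stands in for the real destructure():

use std::cell::RefCell;
use std::rc::Rc;

thread_local! { static OUTBOX: RefCell<Vec<String>> = RefCell::new(Vec::new()); }

struct Handle(u64);

impl Handle {
    fn serialize(self: Rc<Self>) -> u64 {
        match Rc::try_unwrap(self) {
            // Sole local reference: hand over the ticket without a Release
            // (mem::forget skips Drop, like destructure() in the real code).
            Ok(h) => { let tk = h.0; std::mem::forget(h); tk }
            // Still shared locally: ask the peer to Acquire one extra count.
            Err(rc) => { OUTBOX.with(|o| o.borrow_mut().push(format!("Acquire({})", rc.0))); rc.0 }
        }
    }
}

impl Drop for Handle {
    fn drop(&mut self) { OUTBOX.with(|o| o.borrow_mut().push(format!("Release({})", self.0))); }
}

fn main() {
    let h = Rc::new(Handle(7));
    let kept = h.clone();
    let _tk = h.serialize(); // shared, so an Acquire is recorded
    drop(kept);              // last local reference, so a Release is recorded
    OUTBOX.with(|o| println!("{:?}", o.borrow()));
}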
@@ -102,19 +107,23 @@ pub struct Expr {
|
||||
}
|
||||
impl Expr {
|
||||
pub fn from_handle(handle: Rc<ExprHandle>) -> Self { Self { handle, data: Rc::default() } }
|
||||
pub fn new(handle: Rc<ExprHandle>, d: ExprData) -> Self {
|
||||
pub fn from_data(handle: Rc<ExprHandle>, d: ExprData) -> Self {
|
||||
Self { handle, data: Rc::new(OnceCell::from(d)) }
|
||||
}
|
||||
|
||||
/// Creates an instance without incrementing the reference count. This is
|
||||
/// only safe to call with a ticket obtained from an [Expr::serialize]
|
||||
/// call, which created the loose reference this function takes ownership of.
|
||||
pub async fn deserialize(tk: api::ExprTicket) -> Self {
|
||||
Self::from_handle(ExprHandle::deserialize(tk))
|
||||
}
|
||||
pub async fn data(&self) -> &ExprData {
|
||||
(self.data.get_or_init(async {
|
||||
let details = self.handle.ctx.reqnot().request(api::Inspect { target: self.handle.tk }).await;
|
||||
let pos = Pos::from_api(&details.location, self.handle.ctx.i()).await;
|
||||
let details = request(api::Inspect { target: self.handle.ticket() }).await;
|
||||
let pos = Pos::from_api(&details.location).await;
|
||||
let kind = match details.kind {
|
||||
api::InspectedKind::Atom(a) =>
|
||||
ExprKind::Atom(ForeignAtom::new(self.handle.clone(), a, pos.clone())),
|
||||
api::InspectedKind::Bottom(b) =>
|
||||
ExprKind::Bottom(OrcErrv::from_api(&b, self.handle.ctx.i()).await),
|
||||
api::InspectedKind::Bottom(b) => ExprKind::Bottom(OrcErrv::from_api(&b).await),
|
||||
api::InspectedKind::Opaque => ExprKind::Opaque,
|
||||
};
|
||||
ExprData { pos, kind }
|
||||
@@ -128,20 +137,21 @@ impl Expr {
|
||||
}
|
||||
}
|
||||
pub fn handle(&self) -> Rc<ExprHandle> { self.handle.clone() }
|
||||
pub fn ctx(&self) -> SysCtx { self.handle.ctx.clone() }
|
||||
|
||||
pub fn slot(&self) -> GExpr {
|
||||
GExpr { pos: Pos::SlotTarget, kind: GExprKind::Slot(self.clone()) }
|
||||
}
|
||||
/// Increments the refcount to ensure that the ticket remains valid even if
|
||||
/// the handle is freed. To avoid a leak, [Expr::deserialize] must eventually
|
||||
/// be called.
|
||||
pub async fn serialize(self) -> api::ExprTicket { self.handle.serialize().await }
|
||||
}
|
||||
impl Format for Expr {
|
||||
async fn print<'a>(&'a self, _c: &'a (impl FmtCtx + ?Sized + 'a)) -> FmtUnit {
|
||||
match &self.data().await.kind {
|
||||
ExprKind::Opaque => "OPAQUE".to_string().into(),
|
||||
ExprKind::Bottom(b) => format!("Bottom({b})").into(),
|
||||
ExprKind::Atom(a) => FmtUnit::from_api(
|
||||
&self.handle.ctx.reqnot().request(api::ExtAtomPrint(a.atom.clone())).await,
|
||||
),
|
||||
ExprKind::Atom(a) => FmtUnit::from_api(&request(api::ExtAtomPrint(a.atom.clone())).await),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
12
orchid-extension/src/ext_port.rs
Normal file
@@ -0,0 +1,12 @@
|
||||
use std::pin::Pin;
|
||||
use std::rc::Rc;
|
||||
|
||||
use futures::future::LocalBoxFuture;
|
||||
use futures::{AsyncRead, AsyncWrite};
|
||||
|
||||
pub struct ExtPort {
|
||||
pub input: Pin<Box<dyn AsyncRead>>,
|
||||
pub output: Pin<Box<dyn AsyncWrite>>,
|
||||
pub log: Pin<Box<dyn AsyncWrite>>,
|
||||
pub spawn: Rc<dyn Fn(LocalBoxFuture<'static, ()>)>,
|
||||
}
|
||||
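The new ExtPort bundles the extension's transport: a reader, a writer, a log sink, and a local spawner. A hypothetical way to fill it from stdio, assuming the futures crate; the struct is re-declared here only so the sketch stands alone, and block_on is a toy spawner rather than how the host actually drives it:

use std::pin::Pin;
use std::rc::Rc;

use futures::future::LocalBoxFuture;
use futures::io::AllowStdIo;
use futures::{AsyncRead, AsyncWrite};

// Stand-in mirroring the fields of ExtPort so this sketch compiles on its own.
struct PortSketch {
    input: Pin<Box<dyn AsyncRead>>,
    output: Pin<Box<dyn AsyncWrite>>,
    log: Pin<Box<dyn AsyncWrite>>,
    spawn: Rc<dyn Fn(LocalBoxFuture<'static, ()>)>,
}

fn main() {
    let _port = PortSketch {
        input: Box::pin(AllowStdIo::new(std::io::stdin())),
        output: Box::pin(AllowStdIo::new(std::io::stdout())),
        log: Box::pin(AllowStdIo::new(std::io::stderr())),
        // Toy spawner: runs the future to completion on the spot.
        spawn: Rc::new(|fut| futures::executor::block_on(fut)),
    };
}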
@@ -1,12 +1,12 @@
|
||||
use std::any::TypeId;
|
||||
use std::borrow::Cow;
|
||||
use std::cell::RefCell;
|
||||
use std::collections::HashMap;
|
||||
use std::future::Future;
|
||||
use std::pin::Pin;
|
||||
use std::rc::Rc;
|
||||
|
||||
use futures::future::LocalBoxFuture;
|
||||
use futures::lock::Mutex;
|
||||
use futures::{AsyncWrite, FutureExt};
|
||||
use itertools::Itertools;
|
||||
use never::Never;
|
||||
@@ -15,15 +15,17 @@ use orchid_base::clone;
|
||||
use orchid_base::error::OrcRes;
|
||||
use orchid_base::format::{FmtCtx, FmtUnit};
|
||||
use orchid_base::name::Sym;
|
||||
use task_local::task_local;
|
||||
use trait_set::trait_set;
|
||||
|
||||
use crate::api;
|
||||
use crate::atom::Atomic;
|
||||
use crate::atom_owned::{DeserializeCtx, OwnedAtom, OwnedVariant};
|
||||
use crate::conv::ToExpr;
|
||||
use crate::coroutine_exec::{ExecHandle, exec};
|
||||
use crate::expr::Expr;
|
||||
use crate::gen_expr::GExpr;
|
||||
use crate::system::{SysCtx, SysCtxEntry};
|
||||
use crate::system::sys_id;
|
||||
|
||||
trait_set! {
|
||||
trait FunCB = Fn(Vec<Expr>) -> LocalBoxFuture<'static, OrcRes<GExpr>> + 'static;
|
||||
@@ -34,26 +36,30 @@ pub trait ExprFunc<I, O>: Clone + 'static {
|
||||
fn apply<'a>(&self, hand: ExecHandle<'a>, v: Vec<Expr>) -> impl Future<Output = OrcRes<GExpr>>;
|
||||
}
|
||||
|
||||
#[derive(Default)]
|
||||
struct FunsCtx(Mutex<HashMap<Sym, FunRecord>>);
|
||||
impl SysCtxEntry for FunsCtx {}
|
||||
task_local! {
|
||||
static FUNS_CTX: RefCell<HashMap<(api::SysId, Sym), FunRecord>>;
|
||||
}
|
||||
|
||||
pub(crate) fn with_funs_ctx<'a>(fut: LocalBoxFuture<'a, ()>) -> LocalBoxFuture<'a, ()> {
|
||||
Box::pin(FUNS_CTX.scope(RefCell::default(), fut))
|
||||
}
|
||||
|
||||
#[derive(Clone)]
|
||||
struct FunRecord {
|
||||
argtyps: &'static [TypeId],
|
||||
fun: Rc<dyn FunCB>,
|
||||
}
|
||||
|
||||
async fn process_args<I, O, F: ExprFunc<I, O>>(
|
||||
debug: impl AsRef<str> + Clone + 'static,
|
||||
f: F,
|
||||
) -> FunRecord {
|
||||
fn process_args<I, O, F: ExprFunc<I, O>>(f: F) -> FunRecord {
|
||||
let argtyps = F::argtyps();
|
||||
let fun = Rc::new(move |v: Vec<Expr>| {
|
||||
clone!(f, v mut);
|
||||
exec(debug.clone(), async move |mut hand| {
|
||||
exec(async move |mut hand| {
|
||||
let mut norm_args = Vec::with_capacity(v.len());
|
||||
for (expr, typ) in v.into_iter().zip(argtyps) {
|
||||
if *typ != TypeId::of::<Expr>() {
|
||||
if *typ == TypeId::of::<Expr>() {
|
||||
norm_args.push(expr);
|
||||
} else {
|
||||
norm_args.push(hand.exec(expr).await?);
|
||||
}
|
||||
}
|
||||
@@ -77,17 +83,18 @@ pub(crate) struct Fun {
|
||||
record: FunRecord,
|
||||
}
|
||||
impl Fun {
|
||||
pub async fn new<I, O, F: ExprFunc<I, O>>(path: Sym, ctx: SysCtx, f: F) -> Self {
|
||||
let funs: &FunsCtx = ctx.get_or_default();
|
||||
let mut fung = funs.0.lock().await;
|
||||
let record = if let Some(record) = fung.get(&path) {
|
||||
record.clone()
|
||||
} else {
|
||||
let record = process_args(path.to_string(), f).await;
|
||||
fung.insert(path.clone(), record.clone());
|
||||
record
|
||||
};
|
||||
Self { args: vec![], path, record }
|
||||
pub async fn new<I, O, F: ExprFunc<I, O>>(path: Sym, f: F) -> Self {
|
||||
FUNS_CTX.with(|cx| {
|
||||
let mut fung = cx.borrow_mut();
|
||||
let record = if let Some(record) = fung.get(&(sys_id(), path.clone())) {
|
||||
record.clone()
|
||||
} else {
|
||||
let record = process_args(f);
|
||||
fung.insert((sys_id(), path.clone()), record.clone());
|
||||
record
|
||||
};
|
||||
Self { args: vec![], path, record }
|
||||
})
|
||||
}
|
||||
pub fn arity(&self) -> u8 { self.record.argtyps.len() as u8 }
|
||||
}
|
||||
@@ -101,20 +108,19 @@ impl OwnedAtom for Fun {
|
||||
async fn call_ref(&self, arg: Expr) -> GExpr {
|
||||
let new_args = self.args.iter().cloned().chain([arg]).collect_vec();
|
||||
if new_args.len() == self.record.argtyps.len() {
|
||||
(self.record.fun)(new_args).await.to_expr().await
|
||||
(self.record.fun)(new_args).await.to_gen().await
|
||||
} else {
|
||||
Self { args: new_args, record: self.record.clone(), path: self.path.clone() }.to_expr().await
|
||||
Self { args: new_args, record: self.record.clone(), path: self.path.clone() }.to_gen().await
|
||||
}
|
||||
}
|
||||
async fn call(self, arg: Expr) -> GExpr { self.call_ref(arg).await }
|
||||
async fn serialize(&self, _: SysCtx, write: Pin<&mut (impl AsyncWrite + ?Sized)>) -> Self::Refs {
|
||||
self.path.to_api().encode(write).await;
|
||||
async fn serialize(&self, write: Pin<&mut (impl AsyncWrite + ?Sized)>) -> Self::Refs {
|
||||
self.path.to_api().encode(write).await.unwrap();
|
||||
self.args.clone()
|
||||
}
|
||||
async fn deserialize(mut ctx: impl DeserializeCtx, args: Self::Refs) -> Self {
|
||||
let sys = ctx.sys();
|
||||
let path = Sym::from_api(ctx.decode().await, sys.i()).await;
|
||||
let record = (sys.get::<FunsCtx>().0.lock().await.get(&path))
|
||||
async fn deserialize(mut ds_cx: impl DeserializeCtx, args: Self::Refs) -> Self {
|
||||
let path = Sym::from_api(ds_cx.decode().await).await;
|
||||
let record = (FUNS_CTX.with(|funs| funs.borrow().get(&(sys_id(), path.clone())).cloned()))
|
||||
.expect("Function missing during deserialization")
|
||||
.clone();
|
||||
Self { args, path, record }
|
||||
@@ -134,11 +140,8 @@ pub struct Lambda {
|
||||
record: FunRecord,
|
||||
}
|
||||
impl Lambda {
|
||||
pub async fn new<I, O, F: ExprFunc<I, O>>(
|
||||
debug: impl AsRef<str> + Clone + 'static,
|
||||
f: F,
|
||||
) -> Self {
|
||||
Self { args: vec![], record: process_args(debug, f).await }
|
||||
pub fn new<I, O, F: ExprFunc<I, O>>(f: F) -> Self {
|
||||
Self { args: vec![], record: process_args(f) }
|
||||
}
|
||||
}
|
||||
impl Atomic for Lambda {
|
||||
@@ -151,9 +154,9 @@ impl OwnedAtom for Lambda {
|
||||
async fn call_ref(&self, arg: Expr) -> GExpr {
|
||||
let new_args = self.args.iter().cloned().chain([arg]).collect_vec();
|
||||
if new_args.len() == self.record.argtyps.len() {
|
||||
(self.record.fun)(new_args).await.to_expr().await
|
||||
(self.record.fun)(new_args).await.to_gen().await
|
||||
} else {
|
||||
Self { args: new_args, record: self.record.clone() }.to_expr().await
|
||||
Self { args: new_args, record: self.record.clone() }.to_gen().await
|
||||
}
|
||||
}
|
||||
async fn call(self, arg: Expr) -> GExpr { self.call_ref(arg).await }
|
||||
@@ -176,7 +179,7 @@ mod expr_func_derives {
|
||||
impl<
|
||||
$($t: TryFromExpr + 'static, )*
|
||||
Out: ToExpr,
|
||||
Func: AsyncFn($($t,)*) -> Out + Clone + Send + Sync + 'static
|
||||
Func: AsyncFn($($t,)*) -> Out + Clone + 'static
|
||||
> ExprFunc<($($t,)*), Out> for Func {
|
||||
fn argtyps() -> &'static [TypeId] {
|
||||
static STORE: OnceLock<Vec<TypeId>> = OnceLock::new();
|
||||
@@ -185,7 +188,7 @@ mod expr_func_derives {
|
||||
async fn apply<'a>(&self, _: ExecHandle<'a>, v: Vec<Expr>) -> OrcRes<GExpr> {
|
||||
assert_eq!(v.len(), Self::argtyps().len(), "Arity mismatch");
|
||||
let [$([< $t:lower >],)*] = v.try_into().unwrap_or_else(|_| panic!("Checked above"));
|
||||
Ok(self($($t::try_from_expr([< $t:lower >]).await?,)*).await.to_expr().await)
|
||||
Ok(self($($t::try_from_expr([< $t:lower >]).await?,)*).await.to_gen().await)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
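Both Fun::call_ref and Lambda::call_ref above implement the same saturation rule: collect arguments until the recorded arity is reached, then run the stored callback; otherwise return a new atom with one more argument captured. A self-contained sketch of that rule, with plain integers standing in for Expr/GExpr and Curried as an illustrative stand-in:

use std::rc::Rc;

#[derive(Clone)]
struct Curried {
    args: Vec<i64>,
    arity: usize,
    fun: Rc<dyn Fn(Vec<i64>) -> i64>,
}

impl Curried {
    // Ok(result) once saturated, Err(next partial application) otherwise.
    fn call(&self, arg: i64) -> Result<i64, Curried> {
        let mut args = self.args.clone();
        args.push(arg);
        if args.len() == self.arity {
            Ok((self.fun)(args))
        } else {
            Err(Curried { args, arity: self.arity, fun: self.fun.clone() })
        }
    }
}

fn main() {
    let add3 = Curried { args: vec![], arity: 3, fun: Rc::new(|v| v.iter().sum()) };
    let partial = add3.call(1).unwrap_err().call(2).unwrap_err();
    match partial.call(3) {
        Ok(total) => println!("{total}"), // 1 + 2 + 3 = 6
        Err(_) => unreachable!("three arguments saturate the arity"),
    }
}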
@@ -10,8 +10,8 @@ use orchid_base::{match_mapping, tl_cache};
|
||||
|
||||
use crate::api;
|
||||
use crate::atom::{AtomFactory, ToAtom};
|
||||
use crate::entrypoint::request;
|
||||
use crate::expr::Expr;
|
||||
use crate::system::SysCtx;
|
||||
|
||||
#[derive(Clone, Debug)]
|
||||
pub struct GExpr {
|
||||
@@ -19,29 +19,32 @@ pub struct GExpr {
|
||||
pub pos: Pos,
|
||||
}
|
||||
impl GExpr {
|
||||
pub async fn api_return(self, ctx: SysCtx) -> api::Expression {
|
||||
/// Release notifications will not be sent for the slots. Use this with
|
||||
/// messages that imply ownership transfer
|
||||
pub async fn serialize(self) -> api::Expression {
|
||||
if let GExprKind::Slot(ex) = self.kind {
|
||||
let hand = ex.handle();
|
||||
mem::drop(ex);
|
||||
api::Expression {
|
||||
location: api::Location::SlotTarget,
|
||||
kind: match Rc::try_unwrap(hand) {
|
||||
Ok(h) => api::ExpressionKind::Slot { tk: h.serialize(), by_value: true },
|
||||
Err(rc) => api::ExpressionKind::Slot { tk: rc.tk, by_value: false },
|
||||
},
|
||||
// an instance is leaked here; we must take ownership of it when we receive this
|
||||
kind: api::ExpressionKind::Slot(hand.serialize().await),
|
||||
}
|
||||
} else {
|
||||
api::Expression {
|
||||
location: api::Location::Inherit,
|
||||
kind: self.kind.api_return(ctx).boxed_local().await,
|
||||
kind: self.kind.serialize().boxed_local().await,
|
||||
}
|
||||
}
|
||||
}
|
||||
pub fn at(self, pos: Pos) -> Self { GExpr { pos, kind: self.kind } }
|
||||
pub async fn create(self) -> Expr {
|
||||
Expr::deserialize(request(api::Create(self.serialize().await)).await).await
|
||||
}
|
||||
}
|
||||
impl Format for GExpr {
|
||||
async fn print<'a>(&'a self, c: &'a (impl FmtCtx + ?Sized + 'a)) -> FmtUnit {
|
||||
self.kind.print(c).await
|
||||
self.kind.print(c).boxed_local().await
|
||||
}
|
||||
}
|
||||
|
||||
@@ -57,21 +60,21 @@ pub enum GExprKind {
|
||||
Bottom(OrcErrv),
|
||||
}
|
||||
impl GExprKind {
|
||||
pub async fn api_return(self, ctx: SysCtx) -> api::ExpressionKind {
|
||||
pub async fn serialize(self) -> api::ExpressionKind {
|
||||
match_mapping!(self, Self => api::ExpressionKind {
|
||||
Call(
|
||||
f => Box::new(f.api_return(ctx.clone()).await),
|
||||
x => Box::new(x.api_return(ctx).await)
|
||||
f => Box::new(f.serialize().await),
|
||||
x => Box::new(x.serialize().await)
|
||||
),
|
||||
Seq(
|
||||
a => Box::new(a.api_return(ctx.clone()).await),
|
||||
b => Box::new(b.api_return(ctx).await)
|
||||
a => Box::new(a.serialize().await),
|
||||
b => Box::new(b.serialize().await)
|
||||
),
|
||||
Lambda(arg, body => Box::new(body.api_return(ctx).await)),
|
||||
Lambda(arg, body => Box::new(body.serialize().await)),
|
||||
Arg(arg),
|
||||
Const(name.to_api()),
|
||||
Bottom(err.to_api()),
|
||||
NewAtom(fac.clone().build(ctx).await),
|
||||
NewAtom(fac.clone().build().await),
|
||||
} {
|
||||
Self::Slot(_) => panic!("processed elsewhere")
|
||||
})
|
||||
@@ -105,7 +108,7 @@ fn inherit(kind: GExprKind) -> GExpr { GExpr { pos: Pos::Inherit, kind } }
|
||||
pub fn sym_ref(path: Sym) -> GExpr { inherit(GExprKind::Const(path)) }
|
||||
pub fn atom<A: ToAtom>(atom: A) -> GExpr { inherit(GExprKind::NewAtom(atom.to_atom_factory())) }
|
||||
|
||||
pub fn seq(ops: impl IntoIterator<Item = GExpr>) -> GExpr {
|
||||
pub fn seq(deps: impl IntoIterator<Item = GExpr>, val: GExpr) -> GExpr {
|
||||
fn recur(mut ops: impl Iterator<Item = GExpr>) -> Option<GExpr> {
|
||||
let op = ops.next()?;
|
||||
Some(match recur(ops) {
|
||||
@@ -113,19 +116,15 @@ pub fn seq(ops: impl IntoIterator<Item = GExpr>) -> GExpr {
|
||||
Some(rec) => inherit(GExprKind::Seq(Box::new(op), Box::new(rec))),
|
||||
})
|
||||
}
|
||||
recur(ops.into_iter()).expect("Empty list provided to seq!")
|
||||
recur(deps.into_iter().chain([val])).expect("Empty list provided to seq!")
|
||||
}
|
||||
|
||||
pub fn arg(n: u64) -> GExpr { inherit(GExprKind::Arg(n)) }
|
||||
|
||||
pub fn lambda(n: u64, b: impl IntoIterator<Item = GExpr>) -> GExpr {
|
||||
inherit(GExprKind::Lambda(n, Box::new(call(b))))
|
||||
}
|
||||
pub fn lambda(n: u64, [b]: [GExpr; 1]) -> GExpr { inherit(GExprKind::Lambda(n, Box::new(b))) }
|
||||
|
||||
pub fn call(v: impl IntoIterator<Item = GExpr>) -> GExpr {
|
||||
v.into_iter()
|
||||
.reduce(|f, x| inherit(GExprKind::Call(Box::new(f), Box::new(x))))
|
||||
.expect("Empty call expression")
|
||||
pub fn call(f: GExpr, argv: impl IntoIterator<Item = GExpr>) -> GExpr {
|
||||
(argv.into_iter()).fold(f, |f, x| inherit(GExprKind::Call(Box::new(f), Box::new(x))))
|
||||
}
|
||||
|
||||
pub fn bot(ev: impl IntoIterator<Item = OrcErr>) -> GExpr {
|
||||
|
||||
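The reworked combinators above make the shapes explicit: call(f, argv) left-folds application so call(f, [x, y]) builds ((f x) y), lambda takes exactly one body, and seq(deps, val) appends the final value to its dependencies. A self-contained sketch of the call fold, with a tiny Term enum standing in for GExpr:

#[derive(Debug, Clone)]
enum Term {
    Name(&'static str),
    Call(Box<Term>, Box<Term>),
}

// Mirrors the new `call(f, argv)`: application is a left fold over the arguments.
fn call(f: Term, argv: impl IntoIterator<Item = Term>) -> Term {
    argv.into_iter().fold(f, |f, x| Term::Call(Box::new(f), Box::new(x)))
}

fn main() {
    let applied = call(Term::Name("f"), [Term::Name("x"), Term::Name("y")]);
    // Prints Call(Call(Name("f"), Name("x")), Name("y")), i.e. ((f x) y).
    println!("{applied:?}");
}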
50
orchid-extension/src/interner.rs
Normal file
@@ -0,0 +1,50 @@
use std::rc::Rc;

use futures::future::{LocalBoxFuture, join_all, ready};
use itertools::Itertools;
use orchid_base::interner::local_interner::{Int, StrBranch, StrvBranch};
use orchid_base::interner::{IStr, IStrv, InternerSrv};

use crate::api;
use crate::entrypoint::{MUTE_REPLY, request};

#[derive(Default)]
struct ExtInterner {
  str: Int<StrBranch>,
  strv: Int<StrvBranch>,
}
impl InternerSrv for ExtInterner {
  fn is<'a>(&'a self, v: &'a str) -> LocalBoxFuture<'a, IStr> {
    match self.str.i(v) {
      Ok(i) => Box::pin(ready(i)),
      Err(e) => Box::pin(async {
        e.set_if_empty(MUTE_REPLY.scope((), request(api::InternStr(v.to_owned()))).await)
      }),
    }
  }
  fn es(&self, t: api::TStr) -> LocalBoxFuture<'_, IStr> {
    match self.str.e(t) {
      Ok(i) => Box::pin(ready(i)),
      Err(e) => Box::pin(async move { e.set_if_empty(Rc::new(request(api::ExternStr(t)).await)) }),
    }
  }
  fn iv<'a>(&'a self, v: &'a [IStr]) -> LocalBoxFuture<'a, IStrv> {
    match self.strv.i(v) {
      Ok(i) => Box::pin(ready(i)),
      Err(e) => Box::pin(async {
        e.set_if_empty(request(api::InternStrv(v.iter().map(|is| is.to_api()).collect_vec())).await)
      }),
    }
  }
  fn ev(&self, t: orchid_api::TStrv) -> LocalBoxFuture<'_, IStrv> {
    match self.strv.e(t) {
      Ok(i) => Box::pin(ready(i)),
      Err(e) => Box::pin(async move {
        let tstr_v = request(api::ExternStrv(t)).await;
        e.set_if_empty(Rc::new(join_all(tstr_v.into_iter().map(|t| self.es(t))).await))
      }),
    }
  }
}

pub fn new_interner() -> Rc<dyn InternerSrv> { Rc::<ExtInterner>::default() }
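A minimal sketch of how this extension-side interner is meant to be driven, assuming it runs inside the extension's task-local message context so that cache misses can fall through to request against the host:

use std::rc::Rc;

use orchid_base::interner::{IStr, IStrv, InternerSrv};
use orchid_extension::interner::new_interner;

async fn intern_demo() {
  let interner: Rc<dyn InternerSrv> = new_interner();
  // The first call round-trips to the host; the second is served from the
  // local `Int` cache because the branch was already populated.
  let word: IStr = interner.is("orchid").await;
  let again: IStr = interner.is("orchid").await;
  // Whole paths are interned as vectors of already interned strings.
  let path: IStrv = interner.iv(&[word, again]).await;
  let _ = path;
}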
@@ -1,49 +1,53 @@
|
||||
use std::fmt;
|
||||
use std::fmt::Debug;
|
||||
use std::future::Future;
|
||||
use std::ops::RangeInclusive;
|
||||
|
||||
use futures::FutureExt;
|
||||
use futures::future::LocalBoxFuture;
|
||||
use orchid_base::error::{OrcErrv, OrcRes, Reporter, mk_errv};
|
||||
use orchid_base::interner::{Interner, Tok};
|
||||
use orchid_base::error::{OrcErrv, OrcRes, mk_errv};
|
||||
use orchid_base::interner::{IStr, is};
|
||||
use orchid_base::location::{Pos, SrcRange};
|
||||
use orchid_base::name::Sym;
|
||||
use orchid_base::parse::ParseCtx;
|
||||
use orchid_base::reqnot::Requester;
|
||||
|
||||
use crate::api;
|
||||
use crate::entrypoint::request;
|
||||
use crate::expr::BorrowedExprStore;
|
||||
use crate::parser::PTokTree;
|
||||
use crate::system::SysCtx;
|
||||
use crate::tree::GenTokTree;
|
||||
|
||||
pub async fn ekey_cascade(i: &Interner) -> Tok<String> {
|
||||
i.i("An error cascading from a recursive call").await
|
||||
}
|
||||
pub async fn ekey_not_applicable(i: &Interner) -> Tok<String> {
|
||||
i.i("Pseudo-error to communicate that the current branch in a dispatch doesn't apply").await
|
||||
pub async fn ekey_cascade() -> IStr { is("An error cascading from a recursive call").await }
|
||||
pub async fn ekey_not_applicable() -> IStr {
|
||||
is("Pseudo-error to communicate that the current branch in a dispatch doesn't apply").await
|
||||
}
|
||||
const MSG_INTERNAL_ERROR: &str = "This error is a sentinel for the extension library.\
|
||||
it should not be emitted by the extension.";
|
||||
|
||||
pub async fn err_cascade(i: &Interner) -> OrcErrv {
|
||||
mk_errv(ekey_cascade(i).await, MSG_INTERNAL_ERROR, [Pos::None])
|
||||
pub async fn err_cascade() -> OrcErrv {
|
||||
mk_errv(ekey_cascade().await, MSG_INTERNAL_ERROR, [Pos::None])
|
||||
}
|
||||
|
||||
pub async fn err_not_applicable(i: &Interner) -> OrcErrv {
|
||||
mk_errv(ekey_not_applicable(i).await, MSG_INTERNAL_ERROR, [Pos::None])
|
||||
pub async fn err_not_applicable() -> OrcErrv {
|
||||
mk_errv(ekey_not_applicable().await, MSG_INTERNAL_ERROR, [Pos::None])
|
||||
}
|
||||
|
||||
pub struct LexContext<'a> {
|
||||
pub(crate) exprs: &'a BorrowedExprStore,
|
||||
pub ctx: SysCtx,
|
||||
pub text: &'a Tok<String>,
|
||||
pub text: &'a IStr,
|
||||
pub id: api::ParsId,
|
||||
pub pos: u32,
|
||||
pub(crate) src: Sym,
|
||||
pub(crate) rep: &'a Reporter,
|
||||
}
|
||||
impl<'a> LexContext<'a> {
|
||||
pub fn new(
|
||||
exprs: &'a BorrowedExprStore,
|
||||
text: &'a IStr,
|
||||
id: api::ParsId,
|
||||
pos: u32,
|
||||
src: Sym,
|
||||
) -> Self {
|
||||
Self { exprs, id, pos, src, text }
|
||||
}
|
||||
pub fn src(&self) -> &Sym { &self.src }
|
||||
/// This function returns [PTokTree] because it can never return
|
||||
/// [orchid_base::tree::Token::NewExpr]. You can use
|
||||
@@ -51,17 +55,10 @@ impl<'a> LexContext<'a> {
|
||||
/// for embedding in the return value.
|
||||
pub async fn recurse(&self, tail: &'a str) -> OrcRes<(&'a str, PTokTree)> {
|
||||
let start = self.pos(tail);
|
||||
let Some(lx) = self.ctx.reqnot().request(api::SubLex { pos: start, id: self.id }).await else {
|
||||
return Err(err_cascade(self.ctx.i()).await);
|
||||
let Some(lx) = request(api::SubLex { pos: start, id: self.id }).await else {
|
||||
return Err(err_cascade().await);
|
||||
};
|
||||
let tree = PTokTree::from_api(
|
||||
&lx.tree,
|
||||
&mut (self.ctx.clone(), self.exprs),
|
||||
&mut (),
|
||||
&self.src,
|
||||
self.ctx.i(),
|
||||
)
|
||||
.await;
|
||||
let tree = PTokTree::from_api(&lx.tree, &mut { self.exprs }, &mut (), &self.src).await;
|
||||
Ok((&self.text[lx.pos as usize..], tree))
|
||||
}
|
||||
|
||||
@@ -74,20 +71,16 @@ impl<'a> LexContext<'a> {
|
||||
SrcRange::new(self.pos(tail) - len.try_into().unwrap()..self.pos(tail), &self.src)
|
||||
}
|
||||
}
|
||||
impl ParseCtx for LexContext<'_> {
|
||||
fn i(&self) -> &Interner { self.ctx.i() }
|
||||
fn rep(&self) -> &Reporter { self.rep }
|
||||
}
|
||||
|
||||
pub trait Lexer: Send + Sync + Sized + Default + 'static {
|
||||
pub trait Lexer: Debug + Send + Sync + Sized + Default + 'static {
|
||||
const CHAR_FILTER: &'static [RangeInclusive<char>];
|
||||
fn lex<'a>(
|
||||
tail: &'a str,
|
||||
ctx: &'a LexContext<'a>,
|
||||
lctx: &'a LexContext<'a>,
|
||||
) -> impl Future<Output = OrcRes<(&'a str, GenTokTree)>>;
|
||||
}
|
||||
|
||||
pub trait DynLexer: Send + Sync + 'static {
|
||||
pub trait DynLexer: Debug + Send + Sync + 'static {
|
||||
fn char_filter(&self) -> &'static [RangeInclusive<char>];
|
||||
fn lex<'a>(
|
||||
&self,
|
||||
|
||||
@@ -7,10 +7,12 @@ pub mod conv;
pub mod coroutine_exec;
pub mod entrypoint;
pub mod expr;
pub mod ext_port;
pub mod func_atom;
pub mod gen_expr;
pub mod interner;
pub mod lexer;
// pub mod msg;
pub mod logger;
pub mod other_system;
pub mod parser;
pub mod reflection;
@@ -18,3 +20,4 @@ pub mod system;
pub mod system_ctor;
pub mod tokio;
pub mod tree;
pub mod binary;

57
orchid-extension/src/logger.rs
Normal file
@@ -0,0 +1,57 @@
use std::fmt::Arguments;
use std::fs::File;
use std::io::Write;
use std::rc::Rc;

use futures::future::LocalBoxFuture;
use hashbrown::HashMap;
use orchid_base::interner::is;
use orchid_base::logging::{LogWriter, Logger};

use crate::api;
use crate::entrypoint::notify;

pub struct LogWriterImpl {
  category: String,
  strat: api::LogStrategy,
}
impl LogWriter for LogWriterImpl {
  fn write_fmt<'a>(&'a self, fmt: Arguments<'a>) -> LocalBoxFuture<'a, ()> {
    Box::pin(async move {
      match &self.strat {
        api::LogStrategy::Discard => (),
        api::LogStrategy::Default =>
          notify(api::Log { category: is(&self.category).await.to_api(), message: fmt.to_string() })
            .await,
        api::LogStrategy::File { path, .. } => {
          let mut file = (File::options().write(true).create(true).truncate(false).open(path))
            .unwrap_or_else(|e| panic!("Could not open {path}: {e}"));
          file.write_fmt(fmt).unwrap_or_else(|e| panic!("Could not write to {path}: {e}"));
        },
      }
    })
  }
}

#[derive(Clone)]
pub struct LoggerImpl {
  default: Option<api::LogStrategy>,
  routing: HashMap<String, api::LogStrategy>,
}
impl LoggerImpl {
  pub fn from_api(api: &api::Logger) -> Self {
    Self {
      default: api.default.clone(),
      routing: api.routing.iter().map(|(k, v)| (k.clone(), v.clone())).collect(),
    }
  }
}
impl Logger for LoggerImpl {
  fn writer(&self, category: &str) -> Rc<dyn LogWriter> {
    Rc::new(LogWriterImpl { category: category.to_string(), strat: self.strat(category) })
  }
  fn strat(&self, category: &str) -> orchid_api::LogStrategy {
    (self.routing.get(category).cloned().or(self.default.clone()))
      .expect("Unrecognized log category with no default strategy")
  }
}
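A minimal sketch of using the new logger plumbing; the api::Logger value would arrive from the host during initialisation, which is assumed here rather than shown:

use orchid_base::logging::{LogWriter, Logger};
use orchid_extension::logger::LoggerImpl;

async fn report_startup(cfg: &orchid_api::Logger) {
  // Routing maps category names to a strategy; an unknown category falls
  // back to the default and panics only if no default was configured.
  let logger = LoggerImpl::from_api(cfg);
  let writer = logger.writer("bootstrap");
  // The Default strategy forwards the line to the host as an `api::Log`
  // notification; File appends to the configured path instead.
  writer.write_fmt(format_args!("extension initialised")).await;
}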
@@ -1,15 +1,13 @@
|
||||
use std::marker::PhantomData;
|
||||
use std::mem::size_of;
|
||||
|
||||
use crate::api;
|
||||
use crate::system::{DynSystemCard, SystemCard};
|
||||
|
||||
#[derive(Debug)]
|
||||
pub struct SystemHandle<C: SystemCard> {
|
||||
pub(crate) _card: PhantomData<C>,
|
||||
pub(crate) card: C,
|
||||
pub(crate) id: api::SysId,
|
||||
}
|
||||
impl<C: SystemCard> SystemHandle<C> {
|
||||
pub(crate) fn new(id: api::SysId) -> Self { Self { _card: PhantomData, id } }
|
||||
pub(crate) fn new(id: api::SysId) -> Self { Self { card: C::default(), id } }
|
||||
pub fn id(&self) -> api::SysId { self.id }
|
||||
}
|
||||
impl<C: SystemCard> Clone for SystemHandle<C> {
|
||||
@@ -21,16 +19,7 @@ pub trait DynSystemHandle {
|
||||
fn get_card(&self) -> &dyn DynSystemCard;
|
||||
}
|
||||
|
||||
pub fn leak_card<T: Default>() -> &'static T {
|
||||
const {
|
||||
if 0 != size_of::<T>() {
|
||||
panic!("Attempted to leak positively sized Card. Card types must always be zero-sized");
|
||||
}
|
||||
}
|
||||
Box::leak(Box::default())
|
||||
}
|
||||
|
||||
impl<C: SystemCard> DynSystemHandle for SystemHandle<C> {
|
||||
fn id(&self) -> api::SysId { self.id }
|
||||
fn get_card(&self) -> &'static dyn DynSystemCard { leak_card::<C>() }
|
||||
fn get_card(&self) -> &dyn DynSystemCard { &self.card }
|
||||
}
|
||||
|
||||
@@ -5,21 +5,22 @@ use futures::future::{LocalBoxFuture, join_all};
|
||||
use futures::{FutureExt, Stream, StreamExt};
|
||||
use itertools::Itertools;
|
||||
use never::Never;
|
||||
use orchid_base::error::{OrcErrv, OrcRes, Reporter};
|
||||
use orchid_base::error::{OrcErrv, OrcRes};
|
||||
use orchid_base::id_store::IdStore;
|
||||
use orchid_base::interner::{Interner, Tok};
|
||||
use orchid_base::interner::IStr;
|
||||
use orchid_base::location::SrcRange;
|
||||
use orchid_base::match_mapping;
|
||||
use orchid_base::name::Sym;
|
||||
use orchid_base::parse::{Comment, ParseCtx, Snippet};
|
||||
use orchid_base::reqnot::Requester;
|
||||
use orchid_base::parse::{Comment, Snippet};
|
||||
use orchid_base::tree::{TokTree, Token, ttv_into_api};
|
||||
use task_local::task_local;
|
||||
|
||||
use crate::api;
|
||||
use crate::conv::ToExpr;
|
||||
use crate::entrypoint::request;
|
||||
use crate::expr::Expr;
|
||||
use crate::gen_expr::GExpr;
|
||||
use crate::system::{SysCtx, SysCtxEntry};
|
||||
use crate::system::sys_id;
|
||||
use crate::tree::{GenTok, GenTokTree};
|
||||
|
||||
pub type PTok = Token<Expr, Never>;
|
||||
@@ -81,29 +82,22 @@ pub type ParserObj = &'static dyn DynParser;
|
||||
|
||||
pub struct ParsCtx<'a> {
|
||||
_parse: PhantomData<&'a mut ()>,
|
||||
ctx: SysCtx,
|
||||
module: Sym,
|
||||
reporter: &'a Reporter,
|
||||
}
|
||||
impl<'a> ParsCtx<'a> {
|
||||
pub(crate) fn new(ctx: SysCtx, module: Sym, reporter: &'a Reporter) -> Self {
|
||||
Self { _parse: PhantomData, ctx, module, reporter }
|
||||
}
|
||||
pub fn ctx(&self) -> &SysCtx { &self.ctx }
|
||||
pub(crate) fn new(module: Sym) -> Self { Self { _parse: PhantomData, module } }
|
||||
pub fn module(&self) -> Sym { self.module.clone() }
|
||||
}
|
||||
impl ParseCtx for ParsCtx<'_> {
|
||||
fn i(&self) -> &Interner { self.ctx.i() }
|
||||
fn rep(&self) -> &Reporter { self.reporter }
|
||||
}
|
||||
|
||||
type BoxConstCallback = Box<dyn FnOnce(ConstCtx) -> LocalBoxFuture<'static, GExpr>>;
|
||||
|
||||
#[derive(Default)]
|
||||
pub(crate) struct ParsedConstCtxEntry {
|
||||
pub(crate) consts: IdStore<BoxConstCallback>,
|
||||
task_local! {
|
||||
static PARSED_CONST_CTX: IdStore<BoxConstCallback>
|
||||
}
|
||||
|
||||
pub(crate) fn with_parsed_const_ctx<'a>(fut: LocalBoxFuture<'a, ()>) -> LocalBoxFuture<'a, ()> {
|
||||
Box::pin(PARSED_CONST_CTX.scope(IdStore::default(), fut))
|
||||
}
|
||||
impl SysCtxEntry for ParsedConstCtxEntry {}
|
||||
|
||||
pub struct ParsedLine {
|
||||
pub sr: SrcRange,
|
||||
@@ -115,10 +109,10 @@ impl ParsedLine {
|
||||
sr: &SrcRange,
|
||||
comments: impl IntoIterator<Item = &'a Comment>,
|
||||
exported: bool,
|
||||
name: Tok<String>,
|
||||
name: IStr,
|
||||
f: F,
|
||||
) -> Self {
|
||||
let cb = Box::new(|ctx| async move { f(ctx).await.to_expr().await }.boxed_local());
|
||||
let cb = Box::new(|ctx| async move { f(ctx).await.to_gen().await }.boxed_local());
|
||||
let kind = ParsedLineKind::Mem(ParsedMem { name, exported, kind: ParsedMemKind::Const(cb) });
|
||||
let comments = comments.into_iter().cloned().collect();
|
||||
ParsedLine { comments, sr: sr.clone(), kind }
|
||||
@@ -127,7 +121,7 @@ impl ParsedLine {
|
||||
sr: &SrcRange,
|
||||
comments: impl IntoIterator<Item = &'a Comment>,
|
||||
exported: bool,
|
||||
name: &Tok<String>,
|
||||
name: &IStr,
|
||||
use_prelude: bool,
|
||||
lines: impl IntoIterator<Item = ParsedLine>,
|
||||
) -> Self {
|
||||
@@ -136,7 +130,7 @@ impl ParsedLine {
|
||||
let comments = comments.into_iter().cloned().collect();
|
||||
ParsedLine { comments, sr: sr.clone(), kind: line_kind }
|
||||
}
|
||||
pub async fn into_api(self, mut ctx: SysCtx) -> api::ParsedLine {
|
||||
pub async fn into_api(self) -> api::ParsedLine {
|
||||
api::ParsedLine {
|
||||
comments: self.comments.into_iter().map(|c| c.to_api()).collect(),
|
||||
source_range: self.sr.to_api(),
|
||||
@@ -146,23 +140,23 @@ impl ParsedLine {
|
||||
exported: mem.exported,
|
||||
kind: match mem.kind {
|
||||
ParsedMemKind::Const(cb) => api::ParsedMemberKind::Constant(api::ParsedConstId(
|
||||
ctx.get_or_default::<ParsedConstCtxEntry>().consts.add(cb).id(),
|
||||
PARSED_CONST_CTX.with(|consts| consts.add(cb).id()),
|
||||
)),
|
||||
ParsedMemKind::Mod { lines, use_prelude } => api::ParsedMemberKind::Module {
|
||||
lines: linev_into_api(lines, ctx).boxed_local().await,
|
||||
lines: linev_into_api(lines).boxed_local().await,
|
||||
use_prelude,
|
||||
},
|
||||
},
|
||||
}),
|
||||
ParsedLineKind::Rec(tv) =>
|
||||
api::ParsedLineKind::Recursive(ttv_into_api(tv, &mut (), &mut ctx).await),
|
||||
api::ParsedLineKind::Recursive(ttv_into_api(tv, &mut (), &mut ()).await),
|
||||
},
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
pub(crate) async fn linev_into_api(v: Vec<ParsedLine>, ctx: SysCtx) -> Vec<api::ParsedLine> {
|
||||
join_all(v.into_iter().map(|l| l.into_api(ctx.clone()))).await
|
||||
pub(crate) async fn linev_into_api(v: Vec<ParsedLine>) -> Vec<api::ParsedLine> {
|
||||
join_all(v.into_iter().map(|l| l.into_api())).await
|
||||
}
|
||||
|
||||
pub enum ParsedLineKind {
|
||||
@@ -171,7 +165,7 @@ pub enum ParsedLineKind {
|
||||
}
|
||||
|
||||
pub struct ParsedMem {
|
||||
pub name: Tok<String>,
|
||||
pub name: IStr,
|
||||
pub exported: bool,
|
||||
pub kind: ParsedMemKind,
|
||||
}
|
||||
@@ -183,26 +177,23 @@ pub enum ParsedMemKind {
|
||||
|
||||
#[derive(Clone)]
|
||||
pub struct ConstCtx {
|
||||
ctx: SysCtx,
|
||||
constid: api::ParsedConstId,
|
||||
}
|
||||
impl ConstCtx {
|
||||
pub fn ctx(&self) -> &SysCtx { &self.ctx }
|
||||
pub fn i(&self) -> &Interner { self.ctx.i() }
|
||||
pub fn names<'b>(
|
||||
&'b self,
|
||||
names: impl IntoIterator<Item = &'b Sym> + 'b,
|
||||
) -> impl Stream<Item = OrcRes<Sym>> + 'b {
|
||||
let resolve_names = api::ResolveNames {
|
||||
constid: self.constid,
|
||||
sys: self.ctx.sys_id(),
|
||||
sys: sys_id(),
|
||||
names: names.into_iter().map(|n| n.to_api()).collect_vec(),
|
||||
};
|
||||
stream(async |mut cx| {
|
||||
for name_opt in self.ctx.reqnot().request(resolve_names).await {
|
||||
for name_opt in request(resolve_names).await {
|
||||
cx.emit(match name_opt {
|
||||
Err(e) => Err(OrcErrv::from_api(&e, self.ctx.i()).await),
|
||||
Ok(name) => Ok(Sym::from_api(name, self.ctx.i()).await),
|
||||
Err(e) => Err(OrcErrv::from_api(&e).await),
|
||||
Ok(name) => Ok(Sym::from_api(name).await),
|
||||
})
|
||||
.await
|
||||
}
|
||||
@@ -213,9 +204,8 @@ impl ConstCtx {
|
||||
}
|
||||
}
|
||||
|
||||
pub(crate) async fn get_const(id: api::ParsedConstId, ctx: SysCtx) -> GExpr {
|
||||
let ent = ctx.get_or_default::<ParsedConstCtxEntry>();
|
||||
let rec = ent.consts.get(id.0).expect("Bad ID or double read of parsed const");
|
||||
let ctx = ConstCtx { constid: id, ctx: ctx.clone() };
|
||||
rec.remove()(ctx).await
|
||||
pub(crate) async fn get_const(id: api::ParsedConstId) -> GExpr {
|
||||
let cb = PARSED_CONST_CTX
|
||||
.with(|ent| ent.get(id.0).expect("Bad ID or double read of parsed const").remove());
|
||||
cb(ConstCtx { constid: id }).await
|
||||
}
|
||||
|
||||
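The recurring pattern in this refactor is worth spelling out: state that used to ride along in SysCtx is now pinned to the task with task_local!, installed around a future with scope and read back with with. A minimal illustration of that pattern, using a made-up counter rather than any real Orchid state:

use std::cell::Cell;

use task_local::task_local;

task_local! {
  // Stand-in for things like PARSED_CONST_CTX or SYS_CTX above.
  static CALL_COUNT: Cell<u64>
}

async fn leaf() {
  // Reads resolve against the value installed by the nearest `scope`.
  CALL_COUNT.with(|c| c.set(c.get() + 1));
}

async fn driver() {
  // `scope` pins the value for the duration of the wrapped future, the same
  // way with_parsed_const_ctx and with_refl_roots wrap their entry points.
  CALL_COUNT.scope(Cell::new(0), async {
    leaf().await;
    leaf().await;
  })
  .await;
}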
@@ -1,67 +1,67 @@
|
||||
use std::cell::OnceCell;
|
||||
use std::cell::{OnceCell, RefCell};
|
||||
use std::rc::Rc;
|
||||
|
||||
use futures::FutureExt;
|
||||
use futures::future::LocalBoxFuture;
|
||||
use futures::lock::Mutex;
|
||||
use hashbrown::HashMap;
|
||||
use memo_map::MemoMap;
|
||||
use orchid_base::interner::Tok;
|
||||
use orchid_base::interner::{IStr, es, iv};
|
||||
use orchid_base::name::{NameLike, VPath};
|
||||
use orchid_base::reqnot::Requester;
|
||||
use task_local::task_local;
|
||||
|
||||
use crate::api;
|
||||
use crate::system::{SysCtx, SysCtxEntry, WeakSysCtx};
|
||||
use crate::entrypoint::request;
|
||||
use crate::system::sys_id;
|
||||
|
||||
#[derive(Debug)]
|
||||
pub struct ReflMemData {
|
||||
// None for inferred steps
|
||||
public: OnceCell<bool>,
|
||||
kind: ReflMemKind,
|
||||
}
|
||||
#[derive(Clone)]
|
||||
#[derive(Clone, Debug)]
|
||||
pub struct ReflMem(Rc<ReflMemData>);
|
||||
impl ReflMem {
|
||||
pub fn kind(&self) -> ReflMemKind { self.0.kind.clone() }
|
||||
}
|
||||
|
||||
#[derive(Clone)]
|
||||
#[derive(Clone, Debug)]
|
||||
pub enum ReflMemKind {
|
||||
Const,
|
||||
Mod(ReflMod),
|
||||
}
|
||||
|
||||
#[derive(Debug)]
|
||||
pub struct ReflModData {
|
||||
inferred: Mutex<bool>,
|
||||
path: VPath,
|
||||
ctx: WeakSysCtx,
|
||||
members: MemoMap<Tok<String>, ReflMem>,
|
||||
members: MemoMap<IStr, ReflMem>,
|
||||
}
|
||||
|
||||
#[derive(Clone)]
|
||||
#[derive(Clone, Debug)]
|
||||
pub struct ReflMod(Rc<ReflModData>);
|
||||
impl ReflMod {
|
||||
fn ctx(&self) -> SysCtx {
|
||||
self.0.ctx.upgrade().expect("ReflectedModule accessed after context drop")
|
||||
}
|
||||
pub fn path(&self) -> &[Tok<String>] { &self.0.path[..] }
|
||||
pub fn path(&self) -> &[IStr] { &self.0.path[..] }
|
||||
pub fn is_root(&self) -> bool { self.0.path.is_empty() }
|
||||
async fn try_populate(&self) -> Result<(), api::LsModuleError> {
|
||||
let ctx = self.ctx();
|
||||
let path_tok = ctx.i().i(&self.0.path[..]).await;
|
||||
let reply = match ctx.reqnot().request(api::LsModule(ctx.sys_id(), path_tok.to_api())).await {
|
||||
let path_tok = iv(&self.0.path[..]).await;
|
||||
let reply = match request(api::LsModule(sys_id(), path_tok.to_api())).await {
|
||||
Err(api::LsModuleError::TreeUnavailable) =>
|
||||
panic!("Reflected tree accessed outside an interpreter call. This extension is faulty."),
|
||||
Err(err) => return Err(err),
|
||||
Ok(details) => details,
|
||||
};
|
||||
for (k, v) in reply.members {
|
||||
let k = ctx.i().ex(k).await;
|
||||
let k = es(k).await;
|
||||
let mem = match self.0.members.get(&k) {
|
||||
Some(mem) => mem,
|
||||
None => {
|
||||
let path = self.0.path.clone().name_with_suffix(k.clone()).to_sym(ctx.i()).await;
|
||||
let path = self.0.path.clone().name_with_suffix(k.clone()).to_sym().await;
|
||||
let kind = match v.kind {
|
||||
api::MemberInfoKind::Constant => ReflMemKind::Const,
|
||||
api::MemberInfoKind::Module =>
|
||||
ReflMemKind::Mod(default_module(&ctx, VPath::new(path.segs()))),
|
||||
ReflMemKind::Mod(default_module(VPath::new(path.segs()))),
|
||||
};
|
||||
self.0.members.get_or_insert(&k, || default_member(self.is_root(), kind))
|
||||
},
|
||||
@@ -70,7 +70,7 @@ impl ReflMod {
|
||||
}
|
||||
Ok(())
|
||||
}
|
||||
pub async fn get_child(&self, key: &Tok<String>) -> Option<ReflMem> {
|
||||
pub async fn get_child(&self, key: &IStr) -> Option<ReflMem> {
|
||||
let inferred_g = self.0.inferred.lock().await;
|
||||
if let Some(mem) = self.0.members.get(key) {
|
||||
return Some(mem.clone());
|
||||
@@ -88,8 +88,7 @@ impl ReflMod {
|
||||
}
|
||||
self.0.members.get(key).cloned()
|
||||
}
|
||||
pub async fn get_by_path(&self, path: &[Tok<String>]) -> Result<ReflMem, InvalidPathError> {
|
||||
let ctx = self.ctx();
|
||||
pub async fn get_by_path(&self, path: &[IStr]) -> Result<ReflMem, InvalidPathError> {
|
||||
let (next, tail) = path.split_first().expect("Attempted to walk by empty path");
|
||||
let inferred_g = self.0.inferred.lock().await;
|
||||
if let Some(next) = self.0.members.get(next) {
|
||||
@@ -105,7 +104,7 @@ impl ReflMod {
|
||||
if !*inferred_g {
|
||||
return Err(InvalidPathError { keep_ancestry: true });
|
||||
}
|
||||
let candidate = default_module(&ctx, self.0.path.clone().suffix([next.clone()]));
|
||||
let candidate = default_module(self.0.path.clone().suffix([next.clone()]));
|
||||
if tail.is_empty() {
|
||||
return match candidate.try_populate().await {
|
||||
Ok(()) => {
|
||||
@@ -116,8 +115,8 @@ impl ReflMod {
|
||||
Err(api::LsModuleError::InvalidPath) => Err(InvalidPathError { keep_ancestry: false }),
|
||||
Err(api::LsModuleError::IsConstant) => {
|
||||
let const_mem = default_member(self.is_root(), ReflMemKind::Const);
|
||||
self.0.members.insert(next.clone(), const_mem);
|
||||
Err(InvalidPathError { keep_ancestry: true })
|
||||
self.0.members.insert(next.clone(), const_mem.clone());
|
||||
Ok(const_mem)
|
||||
},
|
||||
Err(api::LsModuleError::TreeUnavailable) => unreachable!(),
|
||||
};
|
||||
@@ -133,20 +132,17 @@ impl ReflMod {
|
||||
}
|
||||
}
|
||||
|
||||
struct ReflRoot(ReflMod);
|
||||
impl SysCtxEntry for ReflRoot {}
|
||||
task_local! {
|
||||
static REFL_ROOTS: RefCell<HashMap<api::SysId, ReflMod>>
|
||||
}
|
||||
|
||||
#[derive(Clone, Debug)]
|
||||
pub struct InvalidPathError {
|
||||
keep_ancestry: bool,
|
||||
}
|
||||
|
||||
fn default_module(ctx: &SysCtx, path: VPath) -> ReflMod {
|
||||
ReflMod(Rc::new(ReflModData {
|
||||
ctx: ctx.downgrade(),
|
||||
inferred: Mutex::new(true),
|
||||
path,
|
||||
members: MemoMap::new(),
|
||||
}))
|
||||
fn default_module(path: VPath) -> ReflMod {
|
||||
ReflMod(Rc::new(ReflModData { inferred: Mutex::new(true), path, members: MemoMap::new() }))
|
||||
}
|
||||
|
||||
fn default_member(is_root: bool, kind: ReflMemKind) -> ReflMem {
|
||||
@@ -156,8 +152,12 @@ fn default_member(is_root: bool, kind: ReflMemKind) -> ReflMem {
|
||||
}))
|
||||
}
|
||||
|
||||
fn get_root(ctx: &SysCtx) -> &ReflRoot {
|
||||
ctx.get_or_insert(|| ReflRoot(default_module(ctx, VPath::new([]))))
|
||||
pub fn refl() -> ReflMod {
|
||||
REFL_ROOTS.with(|tbl| {
|
||||
tbl.borrow_mut().entry(sys_id()).or_insert_with(|| default_module(VPath::new([]))).clone()
|
||||
})
|
||||
}
|
||||
|
||||
pub fn refl(ctx: &SysCtx) -> ReflMod { get_root(ctx).0.clone() }
|
||||
pub fn with_refl_roots<'a>(fut: LocalBoxFuture<'a, ()>) -> LocalBoxFuture<'a, ()> {
|
||||
Box::pin(REFL_ROOTS.scope(RefCell::default(), fut))
|
||||
}
|
||||
|
||||
@@ -1,24 +1,21 @@
|
||||
use std::any::{Any, TypeId, type_name};
|
||||
use std::fmt;
|
||||
use std::any::{Any, TypeId};
|
||||
use std::fmt::Debug;
|
||||
use std::future::Future;
|
||||
use std::num::NonZero;
|
||||
use std::pin::Pin;
|
||||
use std::rc::{Rc, Weak};
|
||||
|
||||
use futures::FutureExt;
|
||||
use futures::future::LocalBoxFuture;
|
||||
use memo_map::MemoMap;
|
||||
use orchid_api_traits::{Coding, Decode};
|
||||
use orchid_api_traits::{Coding, Decode, Encode, Request};
|
||||
use orchid_base::boxed_iter::BoxedIter;
|
||||
use orchid_base::builtin::Spawner;
|
||||
use orchid_base::interner::Interner;
|
||||
use orchid_base::logging::Logger;
|
||||
use orchid_base::name::Sym;
|
||||
use orchid_base::reqnot::{Receipt, ReqNot};
|
||||
use orchid_base::reqnot::{Receipt, ReqHandle, ReqReader, ReqReaderExt};
|
||||
use task_local::task_local;
|
||||
|
||||
use crate::api;
|
||||
use crate::atom::{AtomCtx, AtomDynfo, AtomTypeId, AtomicFeatures, ForeignAtom, TypAtom, get_info};
|
||||
use crate::atom::{AtomCtx, AtomDynfo, AtomTypeId, AtomicFeatures, ForeignAtom, TAtom, get_info};
|
||||
use crate::coroutine_exec::Replier;
|
||||
use crate::entrypoint::ExtReq;
|
||||
use crate::entrypoint::request;
|
||||
use crate::func_atom::{Fun, Lambda};
|
||||
use crate::lexer::LexerObj;
|
||||
use crate::parser::ParserObj;
|
||||
@@ -26,13 +23,13 @@ use crate::system_ctor::{CtedObj, SystemCtor};
|
||||
use crate::tree::GenMember;
|
||||
|
||||
/// System as consumed by foreign code
|
||||
pub trait SystemCard: Default + Send + Sync + 'static {
|
||||
pub trait SystemCard: Debug + Default + Send + Sync + 'static {
|
||||
type Ctor: SystemCtor;
|
||||
type Req: Coding;
|
||||
fn atoms() -> impl IntoIterator<Item = Option<Box<dyn AtomDynfo>>>;
|
||||
}
|
||||
|
||||
pub trait DynSystemCard: Send + Sync + 'static {
|
||||
pub trait DynSystemCard: Send + Sync + Any + 'static {
|
||||
fn name(&self) -> &'static str;
|
||||
/// Atoms explicitly defined by the system card. Do not rely on this for
|
||||
/// querying atoms as it doesn't include the general atom types
|
||||
@@ -71,7 +68,7 @@ pub async fn resolv_atom(
|
||||
sys: &(impl DynSystemCard + ?Sized),
|
||||
atom: &api::Atom,
|
||||
) -> Box<dyn AtomDynfo> {
|
||||
let tid = AtomTypeId::decode(Pin::new(&mut &atom.data.0[..])).await;
|
||||
let tid = AtomTypeId::decode(Pin::new(&mut &atom.data.0[..])).await.unwrap();
|
||||
atom_by_idx(sys, tid).expect("Value of nonexistent type found")
|
||||
}
|
||||
|
||||
@@ -84,118 +81,92 @@ impl<T: SystemCard> DynSystemCard for T {
|
||||
|
||||
/// System as defined by author
|
||||
pub trait System: Send + Sync + SystemCard + 'static {
|
||||
fn prelude(i: &Interner) -> impl Future<Output = Vec<Sym>>;
|
||||
fn env() -> Vec<GenMember>;
|
||||
fn prelude() -> impl Future<Output = Vec<Sym>>;
|
||||
fn env() -> impl Future<Output = Vec<GenMember>>;
|
||||
fn lexers() -> Vec<LexerObj>;
|
||||
fn parsers() -> Vec<ParserObj>;
|
||||
fn request(hand: ExtReq<'_>, req: Self::Req) -> impl Future<Output = Receipt<'_>>;
|
||||
fn request<'a>(
|
||||
hand: Box<dyn ReqHandle<'a> + 'a>,
|
||||
req: Self::Req,
|
||||
) -> impl Future<Output = Receipt<'a>>;
|
||||
}
|
||||
|
||||
pub trait DynSystem: Send + Sync + DynSystemCard + 'static {
|
||||
fn dyn_prelude<'a>(&'a self, i: &'a Interner) -> LocalBoxFuture<'a, Vec<Sym>>;
|
||||
fn dyn_env(&'_ self) -> Vec<GenMember>;
|
||||
fn dyn_prelude(&self) -> LocalBoxFuture<'_, Vec<Sym>>;
|
||||
fn dyn_env(&self) -> LocalBoxFuture<'_, Vec<GenMember>>;
|
||||
fn dyn_lexers(&self) -> Vec<LexerObj>;
|
||||
fn dyn_parsers(&self) -> Vec<ParserObj>;
|
||||
fn dyn_request<'a>(&self, hand: ExtReq<'a>, req: Vec<u8>) -> LocalBoxFuture<'a, Receipt<'a>>;
|
||||
fn dyn_request<'a>(&self, hand: Box<dyn ReqReader<'a> + 'a>) -> LocalBoxFuture<'a, Receipt<'a>>;
|
||||
fn card(&self) -> &dyn DynSystemCard;
|
||||
}
|
||||
|
||||
impl<T: System> DynSystem for T {
|
||||
fn dyn_prelude<'a>(&'a self, i: &'a Interner) -> LocalBoxFuture<'a, Vec<Sym>> {
|
||||
Box::pin(Self::prelude(i))
|
||||
}
|
||||
fn dyn_env(&'_ self) -> Vec<GenMember> { Self::env() }
|
||||
fn dyn_prelude(&self) -> LocalBoxFuture<'_, Vec<Sym>> { Box::pin(Self::prelude()) }
|
||||
fn dyn_env(&self) -> LocalBoxFuture<'_, Vec<GenMember>> { Self::env().boxed_local() }
|
||||
fn dyn_lexers(&self) -> Vec<LexerObj> { Self::lexers() }
|
||||
fn dyn_parsers(&self) -> Vec<ParserObj> { Self::parsers() }
|
||||
fn dyn_request<'a>(&self, hand: ExtReq<'a>, req: Vec<u8>) -> LocalBoxFuture<'a, Receipt<'a>> {
|
||||
fn dyn_request<'a>(
|
||||
&self,
|
||||
mut hand: Box<dyn ReqReader<'a> + 'a>,
|
||||
) -> LocalBoxFuture<'a, Receipt<'a>> {
|
||||
Box::pin(async move {
|
||||
Self::request(hand, <Self as SystemCard>::Req::decode(Pin::new(&mut &req[..])).await).await
|
||||
let value = hand.read_req::<<Self as SystemCard>::Req>().await.unwrap();
|
||||
Self::request(hand.finish().await, value).await
|
||||
})
|
||||
}
|
||||
fn card(&self) -> &dyn DynSystemCard { self }
|
||||
}
|
||||
|
||||
pub async fn downcast_atom<A>(foreign: ForeignAtom) -> Result<TypAtom<A>, ForeignAtom>
|
||||
#[derive(Clone)]
|
||||
pub(crate) struct SysCtx(pub api::SysId, pub CtedObj);
|
||||
|
||||
task_local! {
|
||||
static SYS_CTX: SysCtx;
|
||||
}
|
||||
|
||||
pub(crate) async fn with_sys<F: Future>(sys: SysCtx, fut: F) -> F::Output {
|
||||
SYS_CTX.scope(sys, fut).await
|
||||
}
|
||||
|
||||
pub fn sys_id() -> api::SysId { SYS_CTX.with(|cx| cx.0) }
|
||||
pub fn cted() -> CtedObj { SYS_CTX.with(|cx| cx.1.clone()) }
|
||||
pub async fn downcast_atom<A>(foreign: ForeignAtom) -> Result<TAtom<A>, ForeignAtom>
|
||||
where A: AtomicFeatures {
|
||||
let mut data = &foreign.atom.data.0[..];
|
||||
let ctx = foreign.ctx().clone();
|
||||
let value = AtomTypeId::decode(Pin::new(&mut data)).await;
|
||||
let own_inst = ctx.get::<CtedObj>().inst();
|
||||
let owner = if *ctx.get::<api::SysId>() == foreign.atom.owner {
|
||||
let value = AtomTypeId::decode_slice(&mut data);
|
||||
let cted = cted();
|
||||
let own_inst = cted.inst();
|
||||
let owner = if sys_id() == foreign.atom.owner {
|
||||
own_inst.card()
|
||||
} else {
|
||||
(ctx.get::<CtedObj>().deps().find(|s| s.id() == foreign.atom.owner))
|
||||
.ok_or_else(|| foreign.clone())?
|
||||
.get_card()
|
||||
cted.deps().find(|s| s.id() == foreign.atom.owner).ok_or_else(|| foreign.clone())?.get_card()
|
||||
};
|
||||
if owner.atoms().flatten().all(|dynfo| dynfo.tid() != TypeId::of::<A>()) {
|
||||
return Err(foreign);
|
||||
}
|
||||
let (typ_id, dynfo) = get_info::<A>(owner);
|
||||
if value != typ_id {
|
||||
return Err(foreign);
|
||||
}
|
||||
let val = dynfo.decode(AtomCtx(data, foreign.atom.drop, ctx)).await;
|
||||
let value = *val.downcast::<A::Data>().expect("atom decode returned wrong type");
|
||||
Ok(TypAtom { value, untyped: foreign })
|
||||
let val = dynfo.decode(AtomCtx(data, foreign.atom.drop)).await;
|
||||
let Ok(value) = val.downcast::<A::Data>() else {
|
||||
panic!("decode of {} returned wrong type.", dynfo.name());
|
||||
};
|
||||
Ok(TAtom { value: *value, untyped: foreign })
|
||||
}
|
||||
|
||||
#[derive(Clone)]
|
||||
pub struct WeakSysCtx(Weak<MemoMap<TypeId, Box<dyn Any>>>);
|
||||
impl WeakSysCtx {
|
||||
pub fn upgrade(&self) -> Option<SysCtx> { Some(SysCtx(self.0.upgrade()?)) }
|
||||
pub async fn dep_req<Sys: SystemCard, Req: Request + Into<Sys::Req>>(req: Req) -> Req::Response {
|
||||
let mut msg = Vec::new();
|
||||
req.into().encode_vec(&mut msg);
|
||||
let cted = cted();
|
||||
let own_inst = cted.inst();
|
||||
let owner = if own_inst.card().type_id() == TypeId::of::<Sys>() {
|
||||
sys_id()
|
||||
} else {
|
||||
(cted.deps().find(|s| s.get_card().type_id() == TypeId::of::<Sys>()))
|
||||
.expect("System not in dependency array")
|
||||
.id()
|
||||
};
|
||||
let reply = request(api::SysFwd(owner, msg)).await;
|
||||
Req::Response::decode(std::pin::pin!(&reply[..])).await.unwrap()
|
||||
}
|
||||
|
||||
#[derive(Clone)]
|
||||
pub struct SysCtx(Rc<MemoMap<TypeId, Box<dyn Any>>>);
|
||||
impl SysCtx {
|
||||
pub fn new(
|
||||
id: api::SysId,
|
||||
i: Interner,
|
||||
reqnot: ReqNot<api::ExtMsgSet>,
|
||||
spawner: Spawner,
|
||||
logger: Logger,
|
||||
cted: CtedObj,
|
||||
) -> Self {
|
||||
let this = Self(Rc::new(MemoMap::new()));
|
||||
this.add(id).add(i).add(reqnot).add(spawner).add(logger).add(cted);
|
||||
this
|
||||
}
|
||||
pub fn downgrade(&self) -> WeakSysCtx { WeakSysCtx(Rc::downgrade(&self.0)) }
|
||||
pub fn add<T: SysCtxEntry>(&self, t: T) -> &Self {
|
||||
assert!(self.0.insert(TypeId::of::<T>(), Box::new(t)), "Key already exists");
|
||||
self
|
||||
}
|
||||
pub fn get_or_insert<T: SysCtxEntry>(&self, f: impl FnOnce() -> T) -> &T {
|
||||
(self.0.get_or_insert_owned(TypeId::of::<T>(), || Box::new(f())).downcast_ref())
|
||||
.expect("Keyed by TypeId")
|
||||
}
|
||||
pub fn get_or_default<T: SysCtxEntry + Default>(&self) -> &T { self.get_or_insert(T::default) }
|
||||
pub fn try_get<T: SysCtxEntry>(&self) -> Option<&T> {
|
||||
Some(self.0.get(&TypeId::of::<T>())?.downcast_ref().expect("Keyed by TypeId"))
|
||||
}
|
||||
pub fn get<T: SysCtxEntry>(&self) -> &T {
|
||||
self.try_get().unwrap_or_else(|| panic!("Context {} missing", type_name::<T>()))
|
||||
}
|
||||
/// Shorthand to get the [Interner] instance
|
||||
pub fn i(&self) -> &Interner { self.get::<Interner>() }
|
||||
/// Shorthand to get the messaging link
|
||||
pub fn reqnot(&self) -> &ReqNot<api::ExtMsgSet> { self.get::<ReqNot<api::ExtMsgSet>>() }
|
||||
/// Shorthand to get the system ID
|
||||
pub fn sys_id(&self) -> api::SysId { *self.get::<api::SysId>() }
|
||||
/// Shorthand to get the task spawner callback
|
||||
pub fn spawner(&self) -> &Spawner { self.get::<Spawner>() }
|
||||
/// Shorthand to get the logger
|
||||
pub fn logger(&self) -> &Logger { self.get::<Logger>() }
|
||||
/// Shorthand to get the constructed system object
|
||||
pub fn cted(&self) -> &CtedObj { self.get::<CtedObj>() }
|
||||
}
|
||||
impl fmt::Debug for SysCtx {
|
||||
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
|
||||
write!(f, "SysCtx({:?})", self.sys_id())
|
||||
}
|
||||
}
|
||||
pub trait SysCtxEntry: 'static + Sized {}
|
||||
impl SysCtxEntry for api::SysId {}
|
||||
impl SysCtxEntry for ReqNot<api::ExtMsgSet> {}
|
||||
impl SysCtxEntry for Spawner {}
|
||||
impl SysCtxEntry for CtedObj {}
|
||||
impl SysCtxEntry for Logger {}
|
||||
impl SysCtxEntry for Interner {}
|
||||
|
||||
@@ -1,4 +1,5 @@
|
||||
use std::any::Any;
|
||||
use std::fmt::Debug;
|
||||
use std::sync::Arc;
|
||||
|
||||
use orchid_base::boxed_iter::{BoxedIter, box_empty, box_once};
|
||||
@@ -8,6 +9,7 @@ use crate::api;
|
||||
use crate::other_system::{DynSystemHandle, SystemHandle};
|
||||
use crate::system::{DynSystem, System, SystemCard};
|
||||
|
||||
#[derive(Debug)]
|
||||
pub struct Cted<Ctor: SystemCtor + ?Sized> {
|
||||
pub deps: <Ctor::Deps as DepDef>::Sat,
|
||||
pub inst: Arc<Ctor::Instance>,
|
||||
@@ -15,7 +17,7 @@ pub struct Cted<Ctor: SystemCtor + ?Sized> {
|
||||
impl<C: SystemCtor + ?Sized> Clone for Cted<C> {
|
||||
fn clone(&self) -> Self { Self { deps: self.deps.clone(), inst: self.inst.clone() } }
|
||||
}
|
||||
pub trait DynCted: Send + Sync + 'static {
|
||||
pub trait DynCted: Debug + Send + Sync + 'static {
|
||||
fn as_any(&self) -> &dyn Any;
|
||||
fn deps<'a>(&'a self) -> BoxedIter<'a, &'a (dyn DynSystemHandle + 'a)>;
|
||||
fn inst(&self) -> Arc<dyn DynSystem>;
|
||||
@@ -27,11 +29,11 @@ impl<C: SystemCtor + ?Sized> DynCted for Cted<C> {
|
||||
}
|
||||
pub type CtedObj = Arc<dyn DynCted>;
|
||||
|
||||
pub trait DepSat: Clone + Send + Sync + 'static {
|
||||
pub trait DepSat: Debug + Clone + Send + Sync + 'static {
|
||||
fn iter<'a>(&'a self) -> BoxedIter<'a, &'a (dyn DynSystemHandle + 'a)>;
|
||||
}
|
||||
|
||||
pub trait DepDef {
|
||||
pub trait DepDef: Debug {
|
||||
type Sat: DepSat;
|
||||
fn report(names: &mut impl FnMut(&'static str));
|
||||
fn create(take: &mut impl FnMut() -> api::SysId) -> Self::Sat;
|
||||
@@ -57,15 +59,16 @@ impl DepDef for () {
|
||||
fn report(_: &mut impl FnMut(&'static str)) {}
|
||||
}
|
||||
|
||||
pub trait SystemCtor: Send + Sync + 'static {
|
||||
pub trait SystemCtor: Debug + Send + Sync + 'static {
|
||||
type Deps: DepDef;
|
||||
type Instance: System;
|
||||
const NAME: &'static str;
|
||||
const VERSION: f64;
|
||||
fn inst(deps: <Self::Deps as DepDef>::Sat) -> Self::Instance;
|
||||
/// Create a system instance.
|
||||
fn inst(&self, deps: <Self::Deps as DepDef>::Sat) -> Self::Instance;
|
||||
}
|
||||
|
||||
pub trait DynSystemCtor: Send + Sync + 'static {
|
||||
pub trait DynSystemCtor: Debug + Send + Sync + 'static {
|
||||
fn decl(&self, id: api::SysDeclId) -> api::SystemDecl;
|
||||
fn new_system(&self, new: &api::NewSystem) -> CtedObj;
|
||||
}
|
||||
@@ -82,7 +85,7 @@ impl<T: SystemCtor> DynSystemCtor for T {
|
||||
fn new_system(&self, api::NewSystem { system: _, id: _, depends }: &api::NewSystem) -> CtedObj {
|
||||
let mut ids = depends.iter().copied();
|
||||
let deps = T::Deps::create(&mut || ids.next().unwrap());
|
||||
let inst = Arc::new(T::inst(deps.clone()));
|
||||
let inst = Arc::new(self.inst(deps.clone()));
|
||||
Arc::new(Cted::<T> { deps, inst })
|
||||
}
|
||||
}
|
||||
@@ -149,8 +152,4 @@ mod dep_set_tuple_impls {
|
||||
dep_set_tuple_impl!(A, B, C, D, E, F, G, H, I, J);
|
||||
dep_set_tuple_impl!(A, B, C, D, E, F, G, H, I, J, K);
|
||||
dep_set_tuple_impl!(A, B, C, D, E, F, G, H, I, J, K, L); // 12
|
||||
dep_set_tuple_impl!(A, B, C, D, E, F, G, H, I, J, K, L, M);
|
||||
dep_set_tuple_impl!(A, B, C, D, E, F, G, H, I, J, K, L, M, N);
|
||||
dep_set_tuple_impl!(A, B, C, D, E, F, G, H, I, J, K, L, M, N, O);
|
||||
dep_set_tuple_impl!(A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P); // 16
|
||||
}
|
||||
|
||||
@@ -1,57 +1,31 @@
|
||||
use crate::entrypoint::ExtensionData;
|
||||
use std::rc::Rc;
|
||||
|
||||
use crate::entrypoint::ExtensionBuilder;
|
||||
use crate::ext_port::ExtPort;
|
||||
/// Run an extension inside a Tokio localset. Since the extension API does not
|
||||
/// provide a forking mechanism, it can safely abort once the localset is
|
||||
/// exhausted. If an extension absolutely needs a parallel thread, it can import
|
||||
/// and call [tokio::task::spawn_local] which will keep alive the localset and
|
||||
/// postpone the aggressive shutdown, and listen for the [Drop::drop] of the
|
||||
/// value returned by [crate::system_ctor::SystemCtor::inst] to initiate
|
||||
/// shutdown.
|
||||
#[cfg(feature = "tokio")]
|
||||
pub async fn tokio_main(data: ExtensionData) {
|
||||
use std::io::{ErrorKind, Write};
|
||||
use std::mem;
|
||||
use std::pin::{Pin, pin};
|
||||
use std::rc::Rc;
|
||||
|
||||
use async_once_cell::OnceCell;
|
||||
use futures::StreamExt;
|
||||
use futures::future::LocalBoxFuture;
|
||||
use futures::lock::Mutex;
|
||||
use futures::stream::FuturesUnordered;
|
||||
use orchid_api_traits::{Decode, Encode};
|
||||
use orchid_base::msg::{recv_msg, send_msg};
|
||||
use tokio::io::{Stdout, stdin, stdout};
|
||||
pub async fn tokio_main(builder: ExtensionBuilder) -> ! {
|
||||
use tokio::io::{stderr, stdin, stdout};
|
||||
use tokio::task::{LocalSet, spawn_local};
|
||||
use tokio_util::compat::{Compat, TokioAsyncReadCompatExt, TokioAsyncWriteCompatExt};
|
||||
|
||||
use crate::api;
|
||||
use crate::entrypoint::extension_init;
|
||||
use tokio_util::compat::{TokioAsyncReadCompatExt, TokioAsyncWriteCompatExt};
|
||||
|
||||
let local_set = LocalSet::new();
|
||||
local_set.spawn_local(async {
|
||||
let host_header = api::HostHeader::decode(Pin::new(&mut stdin().compat())).await;
|
||||
let init =
|
||||
Rc::new(extension_init(data, host_header, Rc::new(|fut| mem::drop(spawn_local(fut)))));
|
||||
let mut buf = Vec::new();
|
||||
init.header.encode(Pin::new(&mut buf)).await;
|
||||
std::io::stdout().write_all(&buf).unwrap();
|
||||
std::io::stdout().flush().unwrap();
|
||||
// These are concurrent processes that never exit, so if the FuturesUnordered
|
||||
// produces any result the extension should exit
|
||||
let mut io = FuturesUnordered::<LocalBoxFuture<()>>::new();
|
||||
io.push(Box::pin(async {
|
||||
loop {
|
||||
match recv_msg(pin!(stdin().compat())).await {
|
||||
Ok(msg) => init.send(&msg[..]).await,
|
||||
Err(e) if e.kind() == ErrorKind::BrokenPipe => break,
|
||||
Err(e) if e.kind() == ErrorKind::UnexpectedEof => break,
|
||||
Err(e) => panic!("{e}"),
|
||||
}
|
||||
}
|
||||
}));
|
||||
io.push(Box::pin(async {
|
||||
while let Some(msg) = init.recv().await {
|
||||
static STDOUT: OnceCell<Mutex<Compat<Stdout>>> = OnceCell::new();
|
||||
let stdout_lk = STDOUT.get_or_init(async { Mutex::new(stdout().compat_write()) }).await;
|
||||
let mut stdout_g = stdout_lk.lock().await;
|
||||
send_msg(pin!(&mut *stdout_g), &msg[..]).await.expect("Parent pipe broken");
|
||||
}
|
||||
}));
|
||||
io.next().await;
|
||||
builder.build(ExtPort {
|
||||
input: Box::pin(stdin().compat()),
|
||||
output: Box::pin(stdout().compat_write()),
|
||||
log: Box::pin(stderr().compat_write()),
|
||||
spawn: Rc::new(|fut| {
|
||||
spawn_local(fut);
|
||||
}),
|
||||
});
|
||||
});
|
||||
local_set.await;
|
||||
std::process::exit(0)
|
||||
}
|
||||
|
||||
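A hypothetical extension entry point built on the new tokio_main signature; how the ExtensionBuilder is assembled from system constructors is not part of this hunk and is only stubbed here:

use orchid_extension::entrypoint::ExtensionBuilder;
use orchid_extension::tokio::tokio_main;

// Assumed helper: register the extension's systems and return the builder.
fn build_extension() -> ExtensionBuilder { unimplemented!("register systems here") }

#[tokio::main(flavor = "current_thread")]
async fn main() {
  // Wires stdin/stdout to the host, stderr to the log writer, spawns work
  // onto the LocalSet, and exits the process once the set drains.
  tokio_main(build_extension()).await
}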
@@ -1,4 +1,6 @@
|
||||
use std::cell::RefCell;
|
||||
use std::num::NonZero;
|
||||
use std::rc::Rc;
|
||||
|
||||
use async_fn_stream::stream;
|
||||
use dyn_clone::{DynClone, clone_box};
|
||||
@@ -6,62 +8,49 @@ use futures::future::{LocalBoxFuture, join_all};
|
||||
use futures::{FutureExt, StreamExt};
|
||||
use hashbrown::HashMap;
|
||||
use itertools::Itertools;
|
||||
use orchid_base::interner::{Interner, Tok};
|
||||
use orchid_base::interner::{IStr, is};
|
||||
use orchid_base::location::SrcRange;
|
||||
use orchid_base::name::Sym;
|
||||
use orchid_base::tree::{TokTree, Token, TokenVariant};
|
||||
use substack::Substack;
|
||||
use task_local::task_local;
|
||||
use trait_set::trait_set;
|
||||
|
||||
use crate::api;
|
||||
use crate::conv::ToExpr;
|
||||
use crate::entrypoint::MemberRecord;
|
||||
use crate::expr::{BorrowedExprStore, Expr, ExprHandle};
|
||||
use crate::func_atom::{ExprFunc, Fun};
|
||||
use crate::gen_expr::{GExpr, sym_ref};
|
||||
use crate::system::SysCtx;
|
||||
|
||||
pub type GenTokTree = TokTree<Expr, GExpr>;
|
||||
pub type GenTok = Token<Expr, GExpr>;
|
||||
|
||||
impl TokenVariant<api::Expression> for GExpr {
|
||||
type FromApiCtx<'a> = ();
|
||||
type ToApiCtx<'a> = SysCtx;
|
||||
async fn from_api(
|
||||
_: &api::Expression,
|
||||
_: &mut Self::FromApiCtx<'_>,
|
||||
_: SrcRange,
|
||||
_: &Interner,
|
||||
) -> Self {
|
||||
type ToApiCtx<'a> = ();
|
||||
async fn from_api(_: &api::Expression, _: &mut Self::FromApiCtx<'_>, _: SrcRange) -> Self {
|
||||
panic!("Received new expression from host")
|
||||
}
|
||||
async fn into_api(self, ctx: &mut Self::ToApiCtx<'_>) -> api::Expression {
|
||||
self.api_return(ctx.clone()).await
|
||||
}
|
||||
async fn into_api(self, _: &mut Self::ToApiCtx<'_>) -> api::Expression { self.serialize().await }
|
||||
}
|
||||
|
||||
impl TokenVariant<api::ExprTicket> for Expr {
|
||||
type FromApiCtx<'a> = (SysCtx, &'a BorrowedExprStore);
|
||||
async fn from_api(
|
||||
api: &api::ExprTicket,
|
||||
(ctx, exprs): &mut Self::FromApiCtx<'_>,
|
||||
_: SrcRange,
|
||||
_: &Interner,
|
||||
) -> Self {
|
||||
type FromApiCtx<'a> = &'a BorrowedExprStore;
|
||||
async fn from_api(api: &api::ExprTicket, exprs: &mut Self::FromApiCtx<'_>, _: SrcRange) -> Self {
|
||||
// SAFETY: receiving trees from sublexers implies borrowing
|
||||
Expr::from_handle(ExprHandle::borrowed(ctx.clone(), *api, exprs))
|
||||
Expr::from_handle(ExprHandle::borrowed(*api, exprs))
|
||||
}
|
||||
type ToApiCtx<'a> = ();
|
||||
async fn into_api(self, (): &mut Self::ToApiCtx<'_>) -> api::ExprTicket { self.handle().tk }
|
||||
async fn into_api(self, (): &mut Self::ToApiCtx<'_>) -> api::ExprTicket { self.handle().ticket() }
|
||||
}
|
||||
|
||||
pub async fn x_tok(x: impl ToExpr) -> GenTok { GenTok::NewExpr(x.to_expr().await) }
|
||||
pub async fn x_tok(x: impl ToExpr) -> GenTok { GenTok::NewExpr(x.to_gen().await) }
|
||||
pub async fn ref_tok(path: Sym) -> GenTok { GenTok::NewExpr(sym_ref(path)) }
|
||||
|
||||
pub fn lazy(
|
||||
public: bool,
|
||||
name: &str,
|
||||
cb: impl AsyncFnOnce(Sym, SysCtx) -> MemKind + Clone + 'static,
|
||||
cb: impl AsyncFnOnce(Sym) -> MemKind + Clone + 'static,
|
||||
) -> Vec<GenMember> {
|
||||
vec![GenMember {
|
||||
name: name.to_string(),
|
||||
@@ -71,7 +60,7 @@ pub fn lazy(
|
||||
}]
|
||||
}
|
||||
pub fn cnst(public: bool, name: &str, value: impl ToExpr + Clone + 'static) -> Vec<GenMember> {
|
||||
lazy(public, name, async |_, _| MemKind::Const(value.to_expr().await))
|
||||
lazy(public, name, async |_| MemKind::Const(value.to_gen().await))
|
||||
}
|
||||
pub fn module(
|
||||
public: bool,
|
||||
@@ -86,9 +75,8 @@ pub fn root_mod(name: &str, mems: impl IntoIterator<Item = Vec<GenMember>>) -> (
|
||||
(name.to_string(), kind)
|
||||
}
|
||||
pub fn fun<I, O>(public: bool, name: &str, xf: impl ExprFunc<I, O>) -> Vec<GenMember> {
|
||||
let fac = LazyMemberFactory::new(async move |sym, ctx| {
|
||||
MemKind::Const(Fun::new(sym, ctx, xf).await.to_expr().await)
|
||||
});
|
||||
let fac =
|
||||
LazyMemberFactory::new(async move |sym| MemKind::Const(Fun::new(sym, xf).await.to_gen().await));
|
||||
vec![GenMember { name: name.to_string(), kind: MemKind::Lazy(fac), public, comments: vec![] }]
|
||||
}
|
||||
pub fn prefix(path: &str, items: impl IntoIterator<Item = Vec<GenMember>>) -> Vec<GenMember> {
|
||||
@@ -149,14 +137,14 @@ pub fn merge_trivial(trees: impl IntoIterator<Item = Vec<GenMember>>) -> Vec<Gen
|
||||
|
||||
trait_set! {
|
||||
trait LazyMemberCallback =
|
||||
FnOnce(Sym, SysCtx) -> LocalBoxFuture<'static, MemKind> + DynClone
|
||||
FnOnce(Sym) -> LocalBoxFuture<'static, MemKind> + DynClone
|
||||
}
|
||||
pub struct LazyMemberFactory(Box<dyn LazyMemberCallback>);
|
||||
impl LazyMemberFactory {
|
||||
pub fn new(cb: impl AsyncFnOnce(Sym, SysCtx) -> MemKind + Clone + 'static) -> Self {
|
||||
Self(Box::new(|s, ctx| cb(s, ctx).boxed_local()))
|
||||
pub fn new(cb: impl AsyncFnOnce(Sym) -> MemKind + Clone + 'static) -> Self {
|
||||
Self(Box::new(|s| cb(s).boxed_local()))
|
||||
}
|
||||
pub async fn build(self, path: Sym, ctx: SysCtx) -> MemKind { (self.0)(path, ctx).await }
|
||||
pub async fn build(self, path: Sym) -> MemKind { (self.0)(path).await }
|
||||
}
|
||||
impl Clone for LazyMemberFactory {
|
||||
fn clone(&self) -> Self { Self(clone_box(&*self.0)) }
|
||||
@@ -169,11 +157,10 @@ pub struct GenMember {
|
||||
pub comments: Vec<String>,
|
||||
}
|
||||
impl GenMember {
|
||||
pub async fn into_api(self, ctx: &mut impl TreeIntoApiCtx) -> api::Member {
|
||||
let name = ctx.sys().i().i::<String>(&self.name).await;
|
||||
let kind = self.kind.into_api(&mut ctx.push_path(name.clone())).await;
|
||||
let comments =
|
||||
join_all(self.comments.iter().map(|cmt| async { ctx.sys().i().i(cmt).await.to_api() })).await;
|
||||
pub(crate) async fn into_api(self, tia_cx: &mut impl TreeIntoApiCtx) -> api::Member {
|
||||
let name = is(&self.name).await;
|
||||
let kind = self.kind.into_api(&mut tia_cx.push_path(name.clone())).await;
|
||||
let comments = join_all(self.comments.iter().map(async |cmt| is(cmt).await.to_api())).await;
|
||||
api::Member { kind, name: name.to_api(), comments, exported: self.public }
|
||||
}
|
||||
}
|
||||
@@ -184,10 +171,10 @@ pub enum MemKind {
|
||||
Lazy(LazyMemberFactory),
|
||||
}
|
||||
impl MemKind {
|
||||
pub async fn into_api(self, ctx: &mut impl TreeIntoApiCtx) -> api::MemberKind {
|
||||
pub(crate) async fn into_api(self, ctx: &mut impl TreeIntoApiCtx) -> api::MemberKind {
|
||||
match self {
|
||||
Self::Lazy(lazy) => api::MemberKind::Lazy(ctx.with_lazy(lazy)),
|
||||
Self::Const(c) => api::MemberKind::Const(c.api_return(ctx.sys()).await),
|
||||
Self::Lazy(lazy) => api::MemberKind::Lazy(add_lazy(ctx, lazy)),
|
||||
Self::Const(c) => api::MemberKind::Const(c.serialize().await),
|
||||
Self::Mod { members } => api::MemberKind::Module(api::Module {
|
||||
members: stream(async |mut cx| {
|
||||
for m in members {
|
||||
@@ -202,33 +189,58 @@ impl MemKind {
|
||||
}
|
||||
}
|
||||
|
||||
pub trait TreeIntoApiCtx {
|
||||
fn sys(&self) -> SysCtx;
|
||||
fn with_lazy(&mut self, fac: LazyMemberFactory) -> api::TreeId;
|
||||
fn push_path(&mut self, seg: Tok<String>) -> impl TreeIntoApiCtx;
|
||||
pub enum MemberRecord {
|
||||
Gen(Vec<IStr>, LazyMemberFactory),
|
||||
Res,
|
||||
}
|
||||
|
||||
pub struct TreeIntoApiCtxImpl<'a, 'b> {
|
||||
pub sys: SysCtx,
|
||||
pub basepath: &'a [Tok<String>],
|
||||
pub path: Substack<'a, Tok<String>>,
|
||||
pub lazy_members: &'b mut HashMap<api::TreeId, MemberRecord>,
|
||||
#[derive(Clone, Default)]
|
||||
pub(crate) struct LazyMemberStore(Rc<RefCell<HashMap<api::TreeId, MemberRecord>>>);
|
||||
|
||||
task_local! {
|
||||
static LAZY_MEMBERS: LazyMemberStore;
|
||||
}
|
||||
|
||||
impl TreeIntoApiCtx for TreeIntoApiCtxImpl<'_, '_> {
|
||||
fn sys(&self) -> SysCtx { self.sys.clone() }
|
||||
fn push_path(&mut self, seg: Tok<String>) -> impl TreeIntoApiCtx {
|
||||
TreeIntoApiCtxImpl {
|
||||
lazy_members: self.lazy_members,
|
||||
sys: self.sys.clone(),
|
||||
basepath: self.basepath,
|
||||
path: self.path.push(seg),
|
||||
}
|
||||
}
|
||||
fn with_lazy(&mut self, fac: LazyMemberFactory) -> api::TreeId {
|
||||
let id = api::TreeId(NonZero::new((self.lazy_members.len() + 2) as u64).unwrap());
|
||||
let path = self.basepath.iter().cloned().chain(self.path.unreverse()).collect_vec();
|
||||
self.lazy_members.insert(id, MemberRecord::Gen(path, fac));
|
||||
pub fn with_lazy_member_store<'a>(fut: LocalBoxFuture<'a, ()>) -> LocalBoxFuture<'a, ()> {
|
||||
Box::pin(LAZY_MEMBERS.scope(LazyMemberStore::default(), fut))
|
||||
}
|
||||
|
||||
fn add_lazy(cx: &impl TreeIntoApiCtx, fac: LazyMemberFactory) -> api::TreeId {
|
||||
LAZY_MEMBERS.with(|lazy_members| {
|
||||
let mut g = lazy_members.0.borrow_mut();
|
||||
let id = api::TreeId(NonZero::new((g.len() + 2) as u64).unwrap());
|
||||
let path = cx.path().collect_vec();
|
||||
g.insert(id, MemberRecord::Gen(path, fac));
|
||||
id
|
||||
})
|
||||
}
|
||||
|
||||
pub async fn get_lazy(id: api::TreeId) -> (Sym, MemKind) {
|
||||
let (path, cb) =
|
||||
LAZY_MEMBERS.with(|tbl| match tbl.0.borrow_mut().insert(id, MemberRecord::Res) {
|
||||
None => panic!("Tree for ID not found"),
|
||||
Some(MemberRecord::Res) => panic!("This tree has already been transmitted"),
|
||||
Some(MemberRecord::Gen(path, cb)) => (path, cb),
|
||||
});
|
||||
let path = Sym::new(path).await.unwrap();
|
||||
(path.clone(), cb.build(path).await)
|
||||
}
|
||||
|
||||
pub(crate) trait TreeIntoApiCtx {
|
||||
fn push_path(&mut self, seg: IStr) -> impl TreeIntoApiCtx;
|
||||
fn path(&self) -> impl Iterator<Item = IStr>;
|
||||
}
|
||||
|
||||
pub struct TreeIntoApiCtxImpl<'a> {
|
||||
pub basepath: &'a [IStr],
|
||||
pub path: Substack<'a, IStr>,
|
||||
}
|
||||
|
||||
impl TreeIntoApiCtx for TreeIntoApiCtxImpl<'_> {
|
||||
fn push_path(&mut self, seg: IStr) -> impl TreeIntoApiCtx {
|
||||
TreeIntoApiCtxImpl { basepath: self.basepath, path: self.path.push(seg) }
|
||||
}
|
||||
fn path(&self) -> impl Iterator<Item = IStr> {
|
||||
self.basepath.iter().cloned().chain(self.path.unreverse())
|
||||
}
|
||||
}
|
||||
|
||||
@@ -7,23 +7,29 @@ edition = "2024"
|
||||
|
||||
[dependencies]
|
||||
async-fn-stream = { version = "0.1.0", path = "../async-fn-stream" }
|
||||
async-lock = "3.4.1"
|
||||
async-once-cell = "0.5.4"
|
||||
async-process = "2.4.0"
|
||||
bound = "0.6.0"
|
||||
derive_destructure = "1.0.0"
|
||||
futures = { version = "0.3.31", features = ["std"], default-features = false }
|
||||
hashbrown = "0.16.0"
|
||||
futures-locks = "0.7.1"
|
||||
hashbrown = "0.16.1"
|
||||
itertools = "0.14.0"
|
||||
lazy_static = "1.5.0"
|
||||
libloading = { version = "0.9.0", optional = true }
|
||||
memo-map = "0.3.3"
|
||||
never = "0.1.0"
|
||||
num-traits = "0.2.19"
|
||||
orchid-api = { version = "0.1.0", path = "../orchid-api" }
|
||||
orchid-api-traits = { version = "0.1.0", path = "../orchid-api-traits" }
|
||||
orchid-base = { version = "0.1.0", path = "../orchid-base" }
|
||||
ordered-float = "5.0.0"
|
||||
pastey = "0.1.1"
|
||||
ordered-float = "5.1.0"
|
||||
pastey = "0.2.1"
|
||||
substack = "1.1.1"
|
||||
test_executors = "0.3.5"
|
||||
test_executors = "0.4.1"
|
||||
tokio = { version = "1.49.0", features = ["process"], optional = true }
|
||||
tokio-util = { version = "0.7.18", features = ["compat"], optional = true }
|
||||
trait-set = "0.3.0"
|
||||
unsync-pipe = { version = "0.2.0", path = "../unsync-pipe" }
|
||||
|
||||
[features]
|
||||
tokio = ["dep:tokio", "dep:tokio-util", "dep:libloading"]
|
||||
|
||||
@@ -1,16 +1,16 @@
|
||||
use std::fmt;
|
||||
use std::rc::{Rc, Weak};
|
||||
|
||||
use async_lock::OnceCell;
|
||||
use async_once_cell::OnceCell;
|
||||
use derive_destructure::destructure;
|
||||
use orchid_base::format::{FmtCtx, FmtUnit, Format, take_first_fmt};
|
||||
use orchid_base::location::Pos;
|
||||
use orchid_base::reqnot::Requester;
|
||||
use orchid_base::reqnot::ClientExt;
|
||||
use orchid_base::tree::AtomRepr;
|
||||
|
||||
use crate::api;
|
||||
use crate::ctx::Ctx;
|
||||
use crate::expr::{Expr, ExprParseCtx, PathSetBuilder};
|
||||
use crate::expr::{Expr, PathSetBuilder};
|
||||
use crate::extension::Extension;
|
||||
use crate::system::System;
|
||||
|
||||
@@ -58,15 +58,15 @@ impl AtomHand {
|
||||
#[must_use]
|
||||
pub async fn call(self, arg: Expr) -> Expr {
|
||||
let owner_sys = self.0.owner.clone();
|
||||
let reqnot = owner_sys.reqnot();
|
||||
owner_sys.ext().exprs().give_expr(arg.clone());
|
||||
let ctx = owner_sys.ctx();
|
||||
let client = owner_sys.client();
|
||||
ctx.exprs.give_expr(arg.clone());
|
||||
let ret = match Rc::try_unwrap(self.0) {
|
||||
Ok(data) => reqnot.request(api::FinalCall(data.api(), arg.id())).await,
|
||||
Err(hand) => reqnot.request(api::CallRef(hand.api_ref(), arg.id())).await,
|
||||
Ok(data) => client.request(api::FinalCall(data.api(), arg.id())).await.unwrap(),
|
||||
Err(hand) => client.request(api::CallRef(hand.api_ref(), arg.id())).await.unwrap(),
|
||||
};
|
||||
let mut parse_ctx = ExprParseCtx { ctx: owner_sys.ctx(), exprs: owner_sys.ext().exprs() };
|
||||
let val = Expr::from_api(&ret, PathSetBuilder::new(), &mut parse_ctx).await;
|
||||
owner_sys.ext().exprs().take_expr(arg.id());
|
||||
let val = Expr::from_api(&ret, PathSetBuilder::new(), ctx.clone()).await;
|
||||
ctx.exprs.take_expr(arg.id());
|
||||
val
|
||||
}
|
||||
#[must_use]
|
||||
@@ -74,19 +74,21 @@ impl AtomHand {
|
||||
#[must_use]
|
||||
pub fn ext(&self) -> &Extension { self.sys().ext() }
|
||||
pub async fn req(&self, key: api::TStrv, req: Vec<u8>) -> Option<Vec<u8>> {
|
||||
self.0.owner.reqnot().request(api::Fwded(self.0.api_ref(), key, req)).await
|
||||
self.0.owner.client().request(api::Fwded(self.0.api_ref(), key, req)).await.unwrap()
|
||||
}
|
||||
#[must_use]
|
||||
pub fn api_ref(&self) -> api::Atom { self.0.api_ref() }
|
||||
#[must_use]
|
||||
pub async fn to_string(&self) -> String { take_first_fmt(self, &self.0.owner.ctx().i).await }
|
||||
pub async fn to_string(&self) -> String { take_first_fmt(self).await }
|
||||
#[must_use]
|
||||
pub fn downgrade(&self) -> WeakAtomHand { WeakAtomHand(Rc::downgrade(&self.0)) }
|
||||
}
|
||||
impl Format for AtomHand {
|
||||
async fn print<'a>(&'a self, _c: &'a (impl FmtCtx + ?Sized + 'a)) -> FmtUnit {
|
||||
(self.0.display.get_or_init(|| async {
|
||||
FmtUnit::from_api(&self.0.owner.reqnot().request(api::AtomPrint(self.0.api_ref())).await)
|
||||
(self.0.display.get_or_init(async {
|
||||
FmtUnit::from_api(
|
||||
&self.0.owner.client().request(api::AtomPrint(self.0.api_ref())).await.unwrap(),
|
||||
)
|
||||
}))
|
||||
.await
|
||||
.clone()
|
||||
|
||||
@@ -3,23 +3,32 @@ use std::num::{NonZero, NonZeroU16};
|
||||
use std::rc::{Rc, Weak};
|
||||
use std::{fmt, ops};
|
||||
|
||||
use async_lock::RwLock;
|
||||
use futures::future::LocalBoxFuture;
|
||||
use futures_locks::RwLock;
|
||||
use hashbrown::HashMap;
|
||||
use orchid_base::builtin::Spawner;
|
||||
use orchid_base::interner::Interner;
|
||||
|
||||
use crate::api;
|
||||
use crate::expr_store::ExprStore;
|
||||
use crate::logger::LoggerImpl;
|
||||
use crate::system::{System, WeakSystem};
|
||||
use crate::tree::WeakRoot;
|
||||
|
||||
pub trait JoinHandle {
|
||||
fn abort(&self);
|
||||
fn join(self: Box<Self>) -> LocalBoxFuture<'static, ()>;
|
||||
}
|
||||
|
||||
pub trait Spawner {
|
||||
fn spawn_obj(&self, fut: LocalBoxFuture<'static, ()>) -> Box<dyn JoinHandle>;
|
||||
}
|
||||
|
||||
pub struct CtxData {
|
||||
pub i: Interner,
|
||||
pub spawn: Spawner,
|
||||
spawner: Rc<dyn Spawner>,
|
||||
pub systems: RwLock<HashMap<api::SysId, WeakSystem>>,
|
||||
pub system_id: RefCell<NonZeroU16>,
|
||||
pub common_exprs: ExprStore,
|
||||
pub exprs: ExprStore,
|
||||
pub root: RwLock<WeakRoot>,
|
||||
pub logger: LoggerImpl,
|
||||
}
|
||||
#[derive(Clone)]
|
||||
pub struct Ctx(Rc<CtxData>);
|
||||
@@ -37,16 +46,25 @@ impl WeakCtx {
|
||||
}
|
||||
impl Ctx {
|
||||
#[must_use]
|
||||
pub fn new(spawn: Spawner) -> Self {
|
||||
pub fn new(spawner: impl Spawner + 'static, logger: LoggerImpl) -> Self {
|
||||
Self(Rc::new(CtxData {
|
||||
spawn,
|
||||
i: Interner::default(),
|
||||
spawner: Rc::new(spawner),
|
||||
systems: RwLock::default(),
|
||||
system_id: RefCell::new(NonZero::new(1).unwrap()),
|
||||
common_exprs: ExprStore::default(),
|
||||
exprs: ExprStore::default(),
|
||||
root: RwLock::default(),
|
||||
logger,
|
||||
}))
|
||||
}
|
||||
/// Spawn a parallel future that you can join at any later time.
|
||||
///
|
||||
/// Don't use this for async Drop, use [orchid_base::stash::stash] instead.
|
||||
/// If you use this for an actor object, make sure to actually join the
|
||||
/// handle.
|
||||
#[must_use]
|
||||
pub fn spawn(&self, fut: impl Future<Output = ()> + 'static) -> Box<dyn JoinHandle> {
|
||||
self.spawner.spawn_obj(Box::pin(fut))
|
||||
}
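
A usage sketch, assuming the hypothetical TokioSpawner from the example above; per the doc comment, the returned handle should be joined explicitly rather than leaked.

let ctx = Ctx::new(TokioSpawner, LoggerImpl::default());
let handle = ctx.spawn(async {
  // long-running background work goes here
});
// ... later, before shutdown:
handle.join().await;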
|
||||
#[must_use]
|
||||
pub(crate) async fn system_inst(&self, id: api::SysId) -> Option<System> {
|
||||
self.systems.read().await.get(&id).and_then(WeakSystem::upgrade)
|
||||
@@ -62,9 +80,6 @@ impl Ctx {
|
||||
}
|
||||
impl fmt::Debug for Ctx {
|
||||
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
|
||||
f.debug_struct("Ctx")
|
||||
.field("i", &self.i)
|
||||
.field("system_id", &self.system_id)
|
||||
.finish_non_exhaustive()
|
||||
f.debug_struct("Ctx").field("system_id", &self.system_id).finish_non_exhaustive()
|
||||
}
|
||||
}
|
||||
|
||||
@@ -1,7 +1,7 @@
|
||||
use hashbrown::HashSet;
|
||||
use itertools::Itertools;
|
||||
use orchid_base::error::{OrcErrv, OrcRes, Reporter, mk_errv};
|
||||
use orchid_base::interner::{Interner, Tok};
|
||||
use orchid_base::error::{OrcErrv, OrcRes, mk_errv};
|
||||
use orchid_base::interner::{IStr, is};
|
||||
use orchid_base::location::Pos;
|
||||
use orchid_base::name::VName;
|
||||
|
||||
@@ -16,17 +16,17 @@ pub enum AbsPathError {
|
||||
RootPath,
|
||||
}
|
||||
impl AbsPathError {
|
||||
pub async fn err_obj(self, i: &Interner, pos: Pos, path: &str) -> OrcErrv {
|
||||
pub async fn err_obj(self, pos: Pos, path: &str) -> OrcErrv {
|
||||
let (descr, msg) = match self {
|
||||
AbsPathError::RootPath => (
|
||||
i.i("Path ends on root module").await,
|
||||
is("Path ends on root module").await,
|
||||
format!(
|
||||
"{path} is equal to the empty path. You cannot directly reference the root. \
|
||||
Use one fewer 'super::' or add more segments to make it valid."
|
||||
),
|
||||
),
|
||||
AbsPathError::TooManySupers => (
|
||||
i.i("Too many 'super::' steps in path").await,
|
||||
is("Too many 'super::' steps in path").await,
|
||||
format!("{path} is leading outside the root."),
|
||||
),
|
||||
};
|
||||
@@ -41,39 +41,31 @@ impl AbsPathError {
|
||||
///
|
||||
/// if the relative path contains as many or more `super` segments than the
|
||||
/// length of the absolute path.
|
||||
pub async fn absolute_path(
|
||||
mut cwd: &[Tok<String>],
|
||||
mut rel: &[Tok<String>],
|
||||
i: &Interner,
|
||||
) -> Result<VName, AbsPathError> {
|
||||
let i_self = i.i("self").await;
|
||||
let i_super = i.i("super").await;
|
||||
let relative = rel.first().is_some_and(|s| *s != i_self && *s != i_super);
|
||||
if let Some((_, tail)) = rel.split_first().filter(|(h, _)| **h != i_self) {
|
||||
pub async fn absolute_path(mut cwd: &[IStr], mut rel: &[IStr]) -> Result<VName, AbsPathError> {
|
||||
let i_self = is("self").await;
|
||||
let i_super = is("super").await;
|
||||
let mut relative = false;
|
||||
if let Some((_, tail)) = rel.split_first().filter(|(h, _)| **h == i_self) {
|
||||
rel = tail;
|
||||
relative = true;
|
||||
} else {
|
||||
while let Some((_, tail)) = rel.split_first().filter(|(h, _)| **h == i_super) {
|
||||
cwd = cwd.split_last().ok_or(AbsPathError::TooManySupers)?.1;
|
||||
rel = tail;
|
||||
relative = true;
|
||||
}
|
||||
}
|
||||
if relative { VName::new(cwd.iter().chain(rel).cloned()) } else { VName::new(rel.to_vec()) }
|
||||
.map_err(|_| AbsPathError::RootPath)
|
||||
}
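
A worked example of the resolution rules above, assumed to run inside some async context; the module names are purely illustrative.

// "super::fmt::pad" resolved against the module app::util
let cwd = [is("app").await, is("util").await];
let rel = [is("super").await, is("fmt").await, is("pad").await];
let abs = absolute_path(&cwd, &rel).await.unwrap(); // app::fmt::pad
// A path with no leading self:: or super:: is treated as already absolute,
// and a path that resolves to the root itself yields AbsPathError::RootPath.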
|
||||
|
||||
pub struct DealiasCtx<'a> {
|
||||
pub i: &'a Interner,
|
||||
pub rep: &'a Reporter,
|
||||
}
|
||||
|
||||
pub async fn resolv_glob<Mod: Tree>(
|
||||
cwd: &[Tok<String>],
|
||||
cwd: &[IStr],
|
||||
root: &Mod,
|
||||
abs_path: &[Tok<String>],
|
||||
abs_path: &[IStr],
|
||||
pos: Pos,
|
||||
i: &Interner,
|
||||
ctx: &mut Mod::Ctx<'_>,
|
||||
) -> OrcRes<HashSet<Tok<String>>> {
|
||||
) -> OrcRes<HashSet<IStr>> {
|
||||
let coprefix_len = cwd.iter().zip(abs_path).take_while(|(a, b)| a == b).count();
|
||||
let (co_prefix, diff_path) = abs_path.split_at(abs_path.len().min(coprefix_len + 1));
|
||||
let fst_diff =
|
||||
@@ -87,7 +79,7 @@ pub async fn resolv_glob<Mod: Tree>(
|
||||
ChildErrorKind::Missing => ("Invalid import path", format!("{path} not found")),
|
||||
ChildErrorKind::Private => ("Import inaccessible", format!("{path} is private")),
|
||||
};
|
||||
return Err(mk_errv(i.i(tk).await, msg, [pos]));
|
||||
return Err(mk_errv(is(tk).await, msg, [pos]));
|
||||
},
|
||||
};
|
||||
Ok(target_module.children(coprefix_len < abs_path.len()))
|
||||
@@ -98,11 +90,11 @@ pub type ChildResult<'a, T> = Result<&'a T, ChildErrorKind>;
|
||||
pub trait Tree {
|
||||
type Ctx<'a>;
|
||||
#[must_use]
|
||||
fn children(&self, public_only: bool) -> HashSet<Tok<String>>;
|
||||
fn children(&self, public_only: bool) -> HashSet<IStr>;
|
||||
#[must_use]
|
||||
fn child(
|
||||
&self,
|
||||
key: Tok<String>,
|
||||
key: IStr,
|
||||
public_only: bool,
|
||||
ctx: &mut Self::Ctx<'_>,
|
||||
) -> impl Future<Output = ChildResult<'_, Self>>;
|
||||
@@ -133,7 +125,7 @@ pub struct ChildError {
|
||||
pub async fn walk<'a, T: Tree>(
|
||||
root: &'a T,
|
||||
public_only: bool,
|
||||
path: impl IntoIterator<Item = Tok<String>>,
|
||||
path: impl IntoIterator<Item = IStr>,
|
||||
ctx: &mut T::Ctx<'_>,
|
||||
) -> Result<&'a T, ChildError> {
|
||||
let mut cur = root;
|
||||
|
||||
55
orchid-host/src/dylib.rs
Normal file
@@ -0,0 +1,55 @@
|
||||
use std::path::{Path, PathBuf};
|
||||
use std::sync::{Arc, Mutex};
|
||||
|
||||
use hashbrown::HashMap;
|
||||
use libloading::Library;
|
||||
use orchid_base::binary::vt_to_future;
|
||||
|
||||
use crate::api;
|
||||
use crate::ctx::Ctx;
|
||||
use crate::extension::ExtPort;
|
||||
|
||||
static DYNAMIC_LIBRARIES: Mutex<Option<HashMap<PathBuf, Arc<Library>>>> = Mutex::new(None);
|
||||
fn load_dylib(path: &Path) -> Result<Arc<Library>, libloading::Error> {
|
||||
let mut g = DYNAMIC_LIBRARIES.lock().unwrap();
|
||||
let map = g.get_or_insert_default();
|
||||
if let Some(lib) = map.get(path) {
|
||||
Ok(lib.clone())
|
||||
} else {
|
||||
let lib = Arc::new(unsafe { Library::new(path) }?);
|
||||
map.insert(path.to_owned(), lib.clone());
|
||||
Ok(lib)
|
||||
}
|
||||
}
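
The cache means a library is loaded at most once per path and is never unloaded for the lifetime of the process, so symbols handed out by a loaded extension stay valid. A behavioural sketch (the path is illustrative):

let a = load_dylib(Path::new("target/debug/libmy_ext.so"))?;
let b = load_dylib(Path::new("target/debug/libmy_ext.so"))?;
assert!(Arc::ptr_eq(&a, &b)); // second call hits the cache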
|
||||
|
||||
#[cfg(feature = "tokio")]
|
||||
pub async fn ext_dylib(path: &Path, ctx: Ctx) -> Result<ExtPort, libloading::Error> {
|
||||
use futures::io::BufReader;
|
||||
use futures::{AsyncBufReadExt, StreamExt};
|
||||
use libloading::Symbol;
|
||||
use unsync_pipe::pipe;
|
||||
|
||||
let (write_input, input) = pipe(1024);
|
||||
let (output, read_output) = pipe(1024);
|
||||
let (log, read_log) = pipe(1024);
|
||||
let log_path = path.to_string_lossy().to_string();
|
||||
let _ = ctx.spawn(async move {
|
||||
use orchid_base::logging::log;
|
||||
let mut lines = BufReader::new(read_log).lines();
|
||||
while let Some(line) = lines.next().await {
|
||||
writeln!(log("stderr"), "{log_path} err> {}", line.expect("Readline implies this")).await;
|
||||
}
|
||||
});
|
||||
let library = load_dylib(path)?;
|
||||
let entrypoint: Symbol<unsafe extern "C" fn(api::binary::ExtensionContext)> =
|
||||
unsafe { library.get("orchid_extension_main") }?;
|
||||
let data = Box::into_raw(Box::new(ctx)) as *const ();
|
||||
extern "C" fn drop(data: *const ()) { std::mem::drop(unsafe { Box::from_raw(data as *mut Ctx) }) }
|
||||
extern "C" fn spawn(data: *const (), vt: api::binary::FutureVT) {
|
||||
let _ = unsafe { (data as *mut Ctx).as_mut().unwrap().spawn(vt_to_future(vt)) };
|
||||
}
|
||||
let spawner = api::binary::Spawner { data, drop, spawn };
|
||||
let cx = api::binary::ExtensionContext { input, output, log, spawner };
|
||||
unsafe { (entrypoint)(cx) };
|
||||
Ok(ExtPort { input: Box::pin(write_input), output: Box::pin(read_output) })
|
||||
}
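
A hedged wiring sketch for the host side, assuming the tokio feature, a Ctx constructed as above, and an illustrative library name; error handling is elided.

let port = ext_dylib(Path::new("liborchid_std.so"), ctx.clone()).await.unwrap();
let ext = Extension::new(port, ctx.clone()).await.unwrap();
for ctor in ext.system_ctors() {
  // inspect or instantiate the systems this extension declares
}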
|
||||
@@ -1,18 +1,17 @@
|
||||
use std::mem;
|
||||
|
||||
use async_lock::RwLockWriteGuard;
|
||||
use bound::Bound;
|
||||
use futures::FutureExt;
|
||||
use futures_locks::{RwLockWriteGuard, TryLockError};
|
||||
use orchid_base::error::OrcErrv;
|
||||
use orchid_base::format::{FmtCtxImpl, Format, take_first};
|
||||
use orchid_base::format::fmt;
|
||||
use orchid_base::location::Pos;
|
||||
use orchid_base::logging::Logger;
|
||||
use orchid_base::logging::log;
|
||||
|
||||
use crate::ctx::Ctx;
|
||||
use crate::expr::{Expr, ExprKind, PathSet, Step};
|
||||
use crate::tree::Root;
|
||||
|
||||
type ExprGuard = Bound<RwLockWriteGuard<'static, ExprKind>, Expr>;
|
||||
type ExprGuard = Bound<RwLockWriteGuard<ExprKind>, Expr>;
|
||||
|
||||
/// The stack operation associated with a transform
|
||||
enum StackOp {
|
||||
@@ -30,21 +29,19 @@ pub enum ExecResult {
|
||||
}
|
||||
|
||||
pub struct ExecCtx {
|
||||
ctx: Ctx,
|
||||
gas: Option<u64>,
|
||||
stack: Vec<ExprGuard>,
|
||||
cur: ExprGuard,
|
||||
cur_pos: Pos,
|
||||
did_pop: bool,
|
||||
logger: Logger,
|
||||
root: Root,
|
||||
}
|
||||
impl ExecCtx {
|
||||
#[must_use]
|
||||
pub async fn new(ctx: Ctx, logger: Logger, root: Root, init: Expr) -> Self {
|
||||
pub async fn new(root: Root, init: Expr) -> Self {
|
||||
let cur_pos = init.pos();
|
||||
let cur = Bound::async_new(init, |init| init.kind().write()).await;
|
||||
Self { ctx, gas: None, stack: vec![], cur, cur_pos, did_pop: false, logger, root }
|
||||
Self { gas: None, stack: vec![], cur, cur_pos, did_pop: false, root }
|
||||
}
|
||||
#[must_use]
|
||||
pub fn remaining_gas(&self) -> u64 { self.gas.expect("queried remaining_gas but no gas was set") }
|
||||
@@ -76,21 +73,20 @@ impl ExecCtx {
|
||||
#[must_use]
|
||||
pub async fn unpack_ident(&self, ex: &Expr) -> Expr {
|
||||
match ex.kind().try_write().as_deref_mut() {
|
||||
Some(ExprKind::Identity(ex)) => {
|
||||
Ok(ExprKind::Identity(ex)) => {
|
||||
let val = self.unpack_ident(ex).boxed_local().await;
|
||||
*ex = val.clone();
|
||||
val
|
||||
},
|
||||
Some(_) => ex.clone(),
|
||||
None => panic!("Cycle encountered!"),
|
||||
Ok(_) => ex.clone(),
|
||||
Err(TryLockError) => panic!("Cycle encountered!"),
|
||||
}
|
||||
}
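
A standalone illustration of the same path-compression idea using plain Rc/RefCell and nothing from this crate: follow Identity links to the terminal node, rewrite each visited link to point straight at it, and treat a failed borrow as a cycle.

use std::cell::RefCell;
use std::rc::Rc;

enum Node { Identity(Rc<RefCell<Node>>), Value(i64) }

fn unpack(n: &Rc<RefCell<Node>>) -> Rc<RefCell<Node>> {
  match n.try_borrow_mut() {
    Err(_) => panic!("Cycle encountered!"),
    Ok(mut g) => match &mut *g {
      Node::Identity(inner) => {
        let target = unpack(&inner.clone());
        *inner = target.clone(); // path compression: skip the chain next time
        target
      },
      Node::Value(_) => n.clone(),
    },
  }
}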
|
||||
pub async fn execute(&mut self) {
|
||||
while self.use_gas(1) {
|
||||
let mut kind_swap = ExprKind::Missing;
|
||||
mem::swap(&mut kind_swap, &mut self.cur);
|
||||
let unit = kind_swap.print(&FmtCtxImpl { i: &self.ctx.i }).await;
|
||||
writeln!(self.logger, "Exxecute lvl{} {}", self.stack.len(), take_first(&unit, true));
|
||||
writeln!(log("debug"), "Exxecute lvl{} {}", self.stack.len(), fmt(&kind_swap).await).await;
|
||||
let (kind, op) = match kind_swap {
|
||||
ExprKind::Identity(target) => {
|
||||
let inner = self.unpack_ident(&target).await;
|
||||
|
||||
@@ -4,12 +4,11 @@ use std::num::NonZeroU64;
|
||||
use std::rc::{Rc, Weak};
|
||||
use std::{fmt, mem};
|
||||
|
||||
use async_lock::RwLock;
|
||||
use futures::FutureExt;
|
||||
use futures_locks::RwLock;
|
||||
use itertools::Itertools;
|
||||
use orchid_base::error::OrcErrv;
|
||||
use orchid_base::format::{FmtCtx, FmtUnit, Format, Variants};
|
||||
use orchid_base::interner::Interner;
|
||||
use orchid_base::location::{Pos, SrcRange};
|
||||
use orchid_base::name::Sym;
|
||||
use orchid_base::tl_cache;
|
||||
@@ -21,12 +20,6 @@ use crate::atom::AtomHand;
|
||||
use crate::ctx::Ctx;
|
||||
use crate::expr_store::ExprStore;
|
||||
|
||||
#[derive(Clone)]
|
||||
pub struct ExprParseCtx<'a> {
|
||||
pub ctx: &'a Ctx,
|
||||
pub exprs: &'a ExprStore,
|
||||
}
|
||||
|
||||
#[derive(Debug)]
|
||||
pub struct ExprData {
|
||||
pos: Pos,
|
||||
@@ -41,9 +34,9 @@ impl Expr {
|
||||
pub async fn try_into_owned_atom(self) -> Result<AtomHand, Self> {
|
||||
match Rc::try_unwrap(self.0) {
|
||||
Err(e) => Err(Self(e)),
|
||||
Ok(data) => match data.kind.into_inner() {
|
||||
Ok(data) => match data.kind.try_unwrap().expect("This field shouldn't be copied") {
|
||||
ExprKind::Atom(a) => Ok(a),
|
||||
inner => Err(Self(Rc::new(ExprData { kind: inner.into(), pos: data.pos }))),
|
||||
inner => Err(Self(Rc::new(ExprData { kind: RwLock::new(inner), pos: data.pos }))),
|
||||
},
|
||||
}
|
||||
}
|
||||
@@ -61,42 +54,34 @@ impl Expr {
|
||||
)
|
||||
}
|
||||
#[must_use]
|
||||
pub async fn from_api(
|
||||
api: &api::Expression,
|
||||
psb: PathSetBuilder<'_, u64>,
|
||||
ctx: &mut ExprParseCtx<'_>,
|
||||
) -> Self {
|
||||
let pos = Pos::from_api(&api.location, &ctx.ctx.i).await;
|
||||
pub async fn from_api(api: &api::Expression, psb: PathSetBuilder<'_, u64>, ctx: Ctx) -> Self {
|
||||
let pos = Pos::from_api(&api.location).await;
|
||||
let kind = match &api.kind {
|
||||
api::ExpressionKind::Arg(n) => {
|
||||
assert!(psb.register_arg(n), "Arguments must be enclosed in a matching lambda");
|
||||
ExprKind::Arg
|
||||
},
|
||||
api::ExpressionKind::Bottom(bot) =>
|
||||
ExprKind::Bottom(OrcErrv::from_api(bot, &ctx.ctx.i).await),
|
||||
api::ExpressionKind::Bottom(bot) => ExprKind::Bottom(OrcErrv::from_api(bot).await),
|
||||
api::ExpressionKind::Call(f, x) => {
|
||||
let (lpsb, rpsb) = psb.split();
|
||||
ExprKind::Call(
|
||||
Expr::from_api(f, lpsb, ctx).boxed_local().await,
|
||||
Expr::from_api(f, lpsb, ctx.clone()).boxed_local().await,
|
||||
Expr::from_api(x, rpsb, ctx).boxed_local().await,
|
||||
)
|
||||
},
|
||||
api::ExpressionKind::Const(name) => ExprKind::Const(Sym::from_api(*name, &ctx.ctx.i).await),
|
||||
api::ExpressionKind::Const(name) => ExprKind::Const(Sym::from_api(*name).await),
|
||||
api::ExpressionKind::Lambda(x, body) => {
|
||||
let lbuilder = psb.lambda(x);
|
||||
let body = Expr::from_api(body, lbuilder.stack(), ctx).boxed_local().await;
|
||||
ExprKind::Lambda(lbuilder.collect(), body)
|
||||
},
|
||||
api::ExpressionKind::NewAtom(a) =>
|
||||
ExprKind::Atom(AtomHand::from_api(a, pos.clone(), &mut ctx.ctx.clone()).await),
|
||||
api::ExpressionKind::Slot { tk, by_value: false } =>
|
||||
return ctx.exprs.get_expr(*tk).expect("Invalid slot"),
|
||||
api::ExpressionKind::Slot { tk, by_value: true } =>
|
||||
return ctx.exprs.take_expr(*tk).expect("Invalid slot"),
|
||||
ExprKind::Atom(AtomHand::from_api(a, pos.clone(), &mut ctx.clone()).await),
|
||||
api::ExpressionKind::Slot(tk) => return ctx.exprs.take_expr(*tk).expect("Invalid slot"),
|
||||
api::ExpressionKind::Seq(a, b) => {
|
||||
let (apsb, bpsb) = psb.split();
|
||||
ExprKind::Seq(
|
||||
Expr::from_api(a, apsb, ctx).boxed_local().await,
|
||||
Expr::from_api(a, apsb, ctx.clone()).boxed_local().await,
|
||||
Expr::from_api(b, bpsb, ctx).boxed_local().await,
|
||||
)
|
||||
},
|
||||
@@ -169,8 +154,8 @@ async fn print_exprkind<'a>(
|
||||
ExprKind::Bottom(e) if e.len() == 1 => format!("Bottom({e})").into(),
|
||||
ExprKind::Bottom(e) => format!("Bottom(\n\t{}\n)", indent(&e.to_string())).into(),
|
||||
ExprKind::Call(f, x) => tl_cache!(Rc<Variants>: Rc::new(Variants::default()
|
||||
.unbounded("{0} {1l}")
|
||||
.bounded("({0} {1b})")))
|
||||
.unbounded("{0b} {1l}")
|
||||
.bounded("({0b} {1})")))
|
||||
.units([print_expr(f, c, visited).await, print_expr(x, c, visited).await]),
|
||||
ExprKind::Identity(id) =>
|
||||
tl_cache!(Rc<Variants>: Rc::new(Variants::default().bounded("{{{0}}}"))).units([print_expr(
|
||||
@@ -180,11 +165,11 @@ async fn print_exprkind<'a>(
|
||||
.await]),
|
||||
ExprKind::Const(c) => format!("{c}").into(),
|
||||
ExprKind::Lambda(None, body) => tl_cache!(Rc<Variants>: Rc::new(Variants::default()
|
||||
.unbounded("\\.{0l}")
|
||||
// .unbounded("\\.{0l}")
|
||||
.bounded("(\\.{0b})")))
|
||||
.units([print_expr(body, c, visited).await]),
|
||||
ExprKind::Lambda(Some(path), body) => tl_cache!(Rc<Variants>: Rc::new(Variants::default()
|
||||
.unbounded("\\{0b}. {1l}")
|
||||
// .unbounded("\\{0b}. {1l}")
|
||||
.bounded("(\\{0b}. {1b})")))
|
||||
.units([format!("{path}").into(), print_expr(body, c, visited).await]),
|
||||
ExprKind::Seq(l, r) =>
|
||||
@@ -340,12 +325,7 @@ impl WeakExpr {
|
||||
|
||||
impl TokenVariant<api::ExprTicket> for Expr {
|
||||
type FromApiCtx<'a> = ExprStore;
|
||||
async fn from_api(
|
||||
api: &api::ExprTicket,
|
||||
ctx: &mut Self::FromApiCtx<'_>,
|
||||
_: SrcRange,
|
||||
_: &Interner,
|
||||
) -> Self {
|
||||
async fn from_api(api: &api::ExprTicket, ctx: &mut Self::FromApiCtx<'_>, _: SrcRange) -> Self {
|
||||
ctx.get_expr(*api).expect("Invalid ticket")
|
||||
}
|
||||
type ToApiCtx<'a> = ExprStore;
|
||||
@@ -361,14 +341,9 @@ impl TokenVariant<api::ExprTicket> for Expr {
|
||||
pub struct ExprWillPanic;
|
||||
|
||||
impl TokenVariant<api::Expression> for Expr {
|
||||
type FromApiCtx<'a> = ExprParseCtx<'a>;
|
||||
async fn from_api(
|
||||
api: &api::Expression,
|
||||
ctx: &mut Self::FromApiCtx<'_>,
|
||||
_: SrcRange,
|
||||
_: &Interner,
|
||||
) -> Self {
|
||||
Self::from_api(api, PathSetBuilder::new(), ctx).await
|
||||
type FromApiCtx<'a> = Ctx;
|
||||
async fn from_api(api: &api::Expression, ctx: &mut Self::FromApiCtx<'_>, _: SrcRange) -> Self {
|
||||
Self::from_api(api, PathSetBuilder::new(), ctx.clone()).await
|
||||
}
|
||||
type ToApiCtx<'a> = ExprWillPanic;
|
||||
async fn into_api(self, ExprWillPanic: &mut Self::ToApiCtx<'_>) -> api::Expression {
|
||||
|
||||
@@ -13,7 +13,6 @@ use crate::expr::Expr;
|
||||
pub struct ExprStoreData {
|
||||
exprs: RefCell<HashMap<api::ExprTicket, (u32, Expr)>>,
|
||||
parent: Option<ExprStore>,
|
||||
tracking_parent: bool,
|
||||
}
|
||||
#[derive(Clone, Default)]
|
||||
pub struct ExprStore(Rc<ExprStoreData>);
|
||||
@@ -25,16 +24,12 @@ impl ExprStore {
|
||||
/// but operations on the parent can access the child exprs too until this
|
||||
/// store is dropped.
|
||||
#[must_use]
|
||||
pub fn derive(&self, tracking_parent: bool) -> Self {
|
||||
Self(Rc::new(ExprStoreData {
|
||||
exprs: RefCell::default(),
|
||||
parent: Some(self.clone()),
|
||||
tracking_parent,
|
||||
}))
|
||||
pub fn derive(&self) -> Self {
|
||||
Self(Rc::new(ExprStoreData { exprs: RefCell::default(), parent: Some(self.clone()) }))
|
||||
}
|
||||
pub fn give_expr(&self, expr: Expr) {
|
||||
if self.0.tracking_parent {
|
||||
self.0.parent.as_ref().unwrap().give_expr(expr.clone());
|
||||
if let Some(parent) = self.0.parent.as_ref() {
|
||||
parent.give_expr(expr.clone())
|
||||
}
|
||||
match self.0.exprs.borrow_mut().entry(expr.id()) {
|
||||
Entry::Occupied(mut oe) => oe.get_mut().0 += 1,
|
||||
@@ -44,8 +39,8 @@ impl ExprStore {
|
||||
}
|
||||
}
|
||||
pub fn take_expr(&self, ticket: api::ExprTicket) -> Option<Expr> {
|
||||
if self.0.tracking_parent {
|
||||
self.0.parent.as_ref().unwrap().take_expr(ticket);
|
||||
if let Some(parent) = self.0.parent.as_ref() {
|
||||
parent.take_expr(ticket);
|
||||
}
|
||||
match self.0.exprs.borrow_mut().entry(ticket) {
|
||||
Entry::Vacant(_) => panic!("Attempted to double-take expression"),
|
||||
@@ -79,13 +74,11 @@ impl Drop for ExprStore {
|
||||
if 1 < Rc::strong_count(&self.0) {
|
||||
return;
|
||||
}
|
||||
if !self.0.tracking_parent {
|
||||
return;
|
||||
}
|
||||
let parent = self.0.parent.as_ref().unwrap();
|
||||
for (id, (count, _)) in self.0.exprs.borrow().iter() {
|
||||
for _ in 0..*count {
|
||||
parent.take_expr(*id);
|
||||
if let Some(parent) = self.0.parent.as_ref() {
|
||||
for (id, (count, _)) in self.0.exprs.borrow().iter() {
|
||||
for _ in 0..*count {
|
||||
parent.take_expr(*id);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
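
A behavioural sketch of the derived-store contract implied above; expr stands for any Expr the caller already owns.

let parent = ExprStore::default();
let child = parent.derive();
child.give_expr(expr.clone());                 // also counted in `parent`
assert!(parent.get_expr(expr.id()).is_some());
drop(child);                                   // releases the counts the child added to `parent`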
|
||||
|
||||
@@ -2,7 +2,7 @@ use std::cell::RefCell;
|
||||
use std::future::Future;
|
||||
use std::io;
|
||||
use std::num::NonZeroU64;
|
||||
use std::pin::pin;
|
||||
use std::pin::Pin;
|
||||
use std::rc::{Rc, Weak};
|
||||
|
||||
use async_fn_stream::stream;
|
||||
@@ -10,28 +10,32 @@ use derive_destructure::destructure;
|
||||
use futures::channel::mpsc::{Sender, channel};
|
||||
use futures::future::{join, join_all};
|
||||
use futures::lock::Mutex;
|
||||
use futures::{SinkExt, StreamExt, stream};
|
||||
use hashbrown::HashMap;
|
||||
use futures::{AsyncRead, AsyncWrite, AsyncWriteExt, SinkExt, StreamExt};
|
||||
use hashbrown::{HashMap, HashSet};
|
||||
use itertools::Itertools;
|
||||
use orchid_api_traits::Request;
|
||||
use orchid_base::builtin::ExtInit;
|
||||
use orchid_base::clone;
|
||||
use orchid_api_traits::{Decode, Encode, Request};
|
||||
use orchid_base::format::{FmtCtxImpl, Format};
|
||||
use orchid_base::interner::Tok;
|
||||
use orchid_base::interner::{IStr, IStrv, es, ev, is, iv};
|
||||
use orchid_base::location::Pos;
|
||||
use orchid_base::logging::Logger;
|
||||
use orchid_base::logging::log;
|
||||
use orchid_base::name::Sym;
|
||||
use orchid_base::reqnot::{DynRequester, ReqNot, Requester as _};
|
||||
use orchid_base::reqnot::{Client, ClientExt, MsgReaderExt, ReqHandleExt, ReqReaderExt, io_comm};
|
||||
use orchid_base::stash::{stash, with_stash};
|
||||
use orchid_base::tree::AtomRepr;
|
||||
|
||||
use crate::api;
|
||||
use crate::atom::AtomHand;
|
||||
use crate::ctx::Ctx;
|
||||
use crate::ctx::{Ctx, JoinHandle};
|
||||
use crate::dealias::{ChildError, ChildErrorKind, walk};
|
||||
use crate::expr_store::ExprStore;
|
||||
use crate::expr::{Expr, PathSetBuilder};
|
||||
use crate::system::SystemCtor;
|
||||
use crate::tree::MemberKind;
|
||||
|
||||
pub struct ExtPort {
|
||||
pub input: Pin<Box<dyn AsyncWrite>>,
|
||||
pub output: Pin<Box<dyn AsyncRead>>,
|
||||
}
|
||||
|
||||
pub struct ReqPair<R: Request>(R, Sender<R::Response>);
|
||||
|
||||
/// Data held about an Extension. This is refcounted within [Extension]. It's
|
||||
@@ -42,131 +46,129 @@ pub struct ReqPair<R: Request>(R, Sender<R::Response>);
|
||||
pub struct ExtensionData {
|
||||
name: String,
|
||||
ctx: Ctx,
|
||||
reqnot: ReqNot<api::HostMsgSet>,
|
||||
join_ext: Option<Box<dyn JoinHandle>>,
|
||||
client: Rc<dyn Client>,
|
||||
systems: Vec<SystemCtor>,
|
||||
logger: Logger,
|
||||
next_pars: RefCell<NonZeroU64>,
|
||||
exprs: ExprStore,
|
||||
exiting_snd: Sender<()>,
|
||||
lex_recur: Mutex<HashMap<api::ParsId, Sender<ReqPair<api::SubLex>>>>,
|
||||
strings: RefCell<HashSet<IStr>>,
|
||||
string_vecs: RefCell<HashSet<IStrv>>,
|
||||
}
|
||||
impl Drop for ExtensionData {
|
||||
fn drop(&mut self) {
|
||||
let reqnot = self.reqnot.clone();
|
||||
let mut exiting_snd = self.exiting_snd.clone();
|
||||
(self.ctx.spawn)(Box::pin(async move {
|
||||
reqnot.notify(api::HostExtNotif::Exit).await;
|
||||
exiting_snd.send(()).await.unwrap()
|
||||
}))
|
||||
let client = self.client.clone();
|
||||
let join_ext = self.join_ext.take().expect("Only called once in Drop");
|
||||
stash(async move {
|
||||
client.notify(api::HostExtNotif::Exit).await.unwrap();
|
||||
join_ext.join().await;
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Clone)]
|
||||
pub struct Extension(Rc<ExtensionData>);
|
||||
impl Extension {
|
||||
pub fn new(init: ExtInit, logger: Logger, msg_logger: Logger, ctx: Ctx) -> io::Result<Self> {
|
||||
pub async fn new(mut init: ExtPort, ctx: Ctx) -> io::Result<Self> {
|
||||
api::HostHeader { logger: ctx.logger.to_api() }.encode(init.input.as_mut()).await.unwrap();
|
||||
init.input.flush().await.unwrap();
|
||||
|
||||
let header = api::ExtensionHeader::decode(init.output.as_mut()).await.unwrap();
|
||||
let header2 = header.clone();
|
||||
Ok(Self(Rc::new_cyclic(|weak: &Weak<ExtensionData>| {
|
||||
let init = Rc::new(init);
|
||||
let (exiting_snd, exiting_rcv) = channel::<()>(0);
|
||||
(ctx.spawn)({
|
||||
clone!(init, weak, ctx);
|
||||
Box::pin(async move {
|
||||
let rcv_stream = stream(async |mut cx| {
|
||||
loop {
|
||||
cx.emit(init.recv().await).await
|
||||
}
|
||||
});
|
||||
let mut event_stream = pin!(stream::select(exiting_rcv.map(|()| None), rcv_stream));
|
||||
while let Some(Some(msg)) = event_stream.next().await {
|
||||
if let Some(reqnot) = weak.upgrade().map(|rc| rc.reqnot.clone()) {
|
||||
let reqnot = reqnot.clone();
|
||||
(ctx.spawn)(Box::pin(async move {
|
||||
reqnot.receive(&msg).await;
|
||||
}))
|
||||
}
|
||||
}
|
||||
})
|
||||
});
|
||||
ExtensionData {
|
||||
name: init.name.clone(),
|
||||
exiting_snd,
|
||||
exprs: ctx.common_exprs.derive(false),
|
||||
ctx: ctx.clone(),
|
||||
systems: (init.systems.iter().cloned())
|
||||
.map(|decl| SystemCtor { decl, ext: WeakExtension(weak.clone()) })
|
||||
.collect(),
|
||||
logger: logger.clone(),
|
||||
next_pars: RefCell::new(NonZeroU64::new(1).unwrap()),
|
||||
lex_recur: Mutex::default(),
|
||||
reqnot: ReqNot::new(
|
||||
msg_logger,
|
||||
move |sfn, _| clone!(init; Box::pin(async move { init.send(sfn).await })),
|
||||
clone!(weak; move |notif, _| {
|
||||
clone!(weak; Box::pin(async move {
|
||||
let this = Extension(weak.upgrade().unwrap());
|
||||
if !matches!(notif, api::ExtHostNotif::Log(_)) {
|
||||
writeln!(this.reqnot().logger(), "Host received notif {notif:?}");
|
||||
}
|
||||
match notif {
|
||||
api::ExtHostNotif::ExprNotif(api::ExprNotif::Acquire(acq)) => {
|
||||
let target = this.0.exprs.get_expr(acq.1).expect("Invalid ticket");
|
||||
this.0.exprs.give_expr(target)
|
||||
}
|
||||
api::ExtHostNotif::ExprNotif(api::ExprNotif::Release(rel)) => {
|
||||
if this.is_own_sys(rel.0).await {
|
||||
this.0.exprs.take_expr(rel.1);
|
||||
} else {
|
||||
writeln!(this.reqnot().logger(), "Not our system {:?}", rel.0)
|
||||
// context not needed because exit is extension-initiated
|
||||
let (client, _, comm) = io_comm(Rc::new(Mutex::new(init.input)), Mutex::new(init.output));
|
||||
let weak2 = weak;
|
||||
let weak = weak.clone();
|
||||
let ctx2 = ctx.clone();
|
||||
let join_ext = ctx.clone().spawn(async move {
|
||||
comm
|
||||
.listen(
|
||||
async |reader| {
|
||||
with_stash(async {
|
||||
let this = Extension(weak.upgrade().unwrap());
|
||||
let notif = reader.read::<api::ExtHostNotif>().await.unwrap();
|
||||
// Log notifications are not reported here because their payload is logged anyway
|
||||
if !matches!(notif, api::ExtHostNotif::Log(_)) {
|
||||
writeln!(log("msg"), "Host received notif {notif:?}").await;
|
||||
}
|
||||
}
|
||||
api::ExtHostNotif::ExprNotif(api::ExprNotif::Move(mov)) => {
|
||||
if !this.is_own_sys(mov.dec).await {
|
||||
writeln!(this.reqnot().logger(), "Not our system {:?}", mov.dec);
|
||||
return;
|
||||
match notif {
|
||||
api::ExtHostNotif::ExprNotif(api::ExprNotif::Acquire(acq)) => {
|
||||
let target = this.0.ctx.exprs.get_expr(acq.1).expect("Invalid ticket");
|
||||
this.0.ctx.exprs.give_expr(target)
|
||||
},
|
||||
api::ExtHostNotif::ExprNotif(api::ExprNotif::Release(rel)) => {
|
||||
if this.is_own_sys(rel.0).await {
|
||||
this.0.ctx.exprs.take_expr(rel.1);
|
||||
} else {
|
||||
writeln!(log("warn"), "Not our system {:?}", rel.0).await
|
||||
}
|
||||
},
|
||||
api::ExtHostNotif::Log(api::Log { category, message }) =>
|
||||
write!(log(&es(category).await), "{message}").await,
|
||||
api::ExtHostNotif::Sweeped(data) => {
|
||||
for i in join_all(data.strings.into_iter().map(es)).await {
|
||||
this.0.strings.borrow_mut().remove(&i);
|
||||
}
|
||||
for i in join_all(data.vecs.into_iter().map(ev)).await {
|
||||
this.0.string_vecs.borrow_mut().remove(&i);
|
||||
}
|
||||
},
|
||||
}
|
||||
Ok(())
|
||||
})
|
||||
.await
|
||||
},
|
||||
async |mut reader| {
|
||||
with_stash(async {
|
||||
let req = reader.read_req::<api::ExtHostReq>().await.unwrap();
|
||||
let handle = reader.finish().await;
|
||||
// Atom printing and interning are never reported because they generate too much
|
||||
// noise
|
||||
if !matches!(req, api::ExtHostReq::ExtAtomPrint(_))
|
||||
&& !matches!(req, api::ExtHostReq::IntReq(_))
|
||||
{
|
||||
writeln!(log("msg"), "Host received request {req:?}").await;
|
||||
}
|
||||
let recp = this.ctx().system_inst(mov.inc).await.expect("invalid recipient sys id");
|
||||
let expr = this.0.exprs.get_expr(mov.expr).expect("invalid ticket");
|
||||
recp.ext().0.exprs.give_expr(expr);
|
||||
this.0.exprs.take_expr(mov.expr);
|
||||
},
|
||||
api::ExtHostNotif::Log(api::Log(str)) => this.logger().log(str),
|
||||
}
|
||||
}))}),
|
||||
{
|
||||
clone!(weak, ctx);
|
||||
move |hand, req| {
|
||||
clone!(weak, ctx);
|
||||
Box::pin(async move {
|
||||
let this = Self(weak.upgrade().unwrap());
|
||||
if !matches!(req, api::ExtHostReq::ExtAtomPrint(_)) {
|
||||
writeln!(this.reqnot().logger(), "Host received request {req:?}");
|
||||
}
|
||||
let i = this.ctx().i.clone();
|
||||
match req {
|
||||
api::ExtHostReq::Ping(ping) => hand.handle(&ping, &()).await,
|
||||
api::ExtHostReq::Ping(ping) => handle.reply(&ping, &()).await,
|
||||
api::ExtHostReq::IntReq(intreq) => match intreq {
|
||||
api::IntReq::InternStr(s) => hand.handle(&s, &i.i(&*s.0).await.to_api()).await,
|
||||
api::IntReq::InternStrv(v) => {
|
||||
let tokens = join_all(v.0.iter().map(|m| i.ex(*m))).await;
|
||||
hand.handle(&v, &i.i(&tokens).await.to_api()).await
|
||||
api::IntReq::InternStr(s) => {
|
||||
let i = is(&s.0).await;
|
||||
this.0.strings.borrow_mut().insert(i.clone());
|
||||
handle.reply(&s, &i.to_api()).await
|
||||
},
|
||||
api::IntReq::InternStrv(v) => {
|
||||
let tokens = join_all(v.0.iter().map(|m| es(*m))).await;
|
||||
this.0.strings.borrow_mut().extend(tokens.iter().cloned());
|
||||
let i = iv(&tokens).await;
|
||||
this.0.string_vecs.borrow_mut().insert(i.clone());
|
||||
handle.reply(&v, &i.to_api()).await
|
||||
},
|
||||
api::IntReq::ExternStr(si) => {
|
||||
let i = es(si.0).await;
|
||||
this.0.strings.borrow_mut().insert(i.clone());
|
||||
handle.reply(&si, &i.to_string()).await
|
||||
},
|
||||
api::IntReq::ExternStr(si) =>
|
||||
hand.handle(&si, &Tok::<String>::from_api(si.0, &i).await.rc()).await,
|
||||
api::IntReq::ExternStrv(vi) => {
|
||||
let markerv = (i.ex(vi.0).await.iter()).map(|t| t.to_api()).collect_vec();
|
||||
hand.handle(&vi, &markerv).await
|
||||
let i = ev(vi.0).await;
|
||||
this.0.strings.borrow_mut().extend(i.iter().cloned());
|
||||
this.0.string_vecs.borrow_mut().insert(i.clone());
|
||||
let markerv = i.iter().map(|t| t.to_api()).collect_vec();
|
||||
handle.reply(&vi, &markerv).await
|
||||
},
|
||||
},
|
||||
api::ExtHostReq::Fwd(ref fw @ api::Fwd(ref atom, ref key, ref body)) => {
|
||||
let sys =
|
||||
ctx.system_inst(atom.owner).await.expect("owner of live atom dropped");
|
||||
let client = sys.client();
|
||||
let reply =
|
||||
sys.reqnot().request(api::Fwded(fw.0.clone(), *key, body.clone())).await;
|
||||
hand.handle(fw, &reply).await
|
||||
client.request(api::Fwded(fw.0.clone(), *key, body.clone())).await.unwrap();
|
||||
handle.reply(fw, &reply).await
|
||||
},
|
||||
api::ExtHostReq::SysFwd(ref fw @ api::SysFwd(id, ref body)) => {
|
||||
let sys = ctx.system_inst(id).await.unwrap();
|
||||
hand.handle(fw, &sys.request(body.clone()).await).await
|
||||
handle.reply(fw, &sys.request(body.clone()).await).await
|
||||
},
|
||||
api::ExtHostReq::SubLex(sl) => {
|
||||
let (rep_in, mut rep_out) = channel(0);
|
||||
@@ -176,23 +178,29 @@ impl Extension {
|
||||
lex_g.get(&sl.id).cloned().expect("Sublex for nonexistent lexid");
|
||||
req_in.send(ReqPair(sl.clone(), rep_in)).await.unwrap();
|
||||
}
|
||||
hand.handle(&sl, &rep_out.next().await.unwrap()).await
|
||||
handle.reply(&sl, &rep_out.next().await.unwrap()).await
|
||||
},
|
||||
api::ExtHostReq::ExprReq(api::ExprReq::Inspect(
|
||||
ins @ api::Inspect { target },
|
||||
)) => {
|
||||
let expr = this.exprs().get_expr(target).expect("Invalid ticket");
|
||||
hand
|
||||
.handle(&ins, &api::Inspected {
|
||||
refcount: expr.strong_count() as u32,
|
||||
location: expr.pos().to_api(),
|
||||
kind: expr.to_api().await,
|
||||
})
|
||||
.await
|
||||
api::ExtHostReq::ExprReq(expr_req) => match expr_req {
|
||||
api::ExprReq::Inspect(ins @ api::Inspect { target }) => {
|
||||
let expr = ctx.exprs.get_expr(target).expect("Invalid ticket");
|
||||
handle
|
||||
.reply(&ins, &api::Inspected {
|
||||
refcount: expr.strong_count() as u32,
|
||||
location: expr.pos().to_api(),
|
||||
kind: expr.to_api().await,
|
||||
})
|
||||
.await
|
||||
},
|
||||
api::ExprReq::Create(ref cre @ api::Create(ref expr)) => {
|
||||
let expr = Expr::from_api(expr, PathSetBuilder::new(), ctx.clone()).await;
|
||||
let expr_id = expr.id();
|
||||
ctx.exprs.give_expr(expr);
|
||||
handle.reply(cre, &expr_id).await
|
||||
},
|
||||
},
|
||||
api::ExtHostReq::LsModule(ref ls @ api::LsModule(_sys, path)) => {
|
||||
let reply: <api::LsModule as Request>::Response = 'reply: {
|
||||
let path = i.ex(path).await;
|
||||
let path = ev(path).await;
|
||||
let root = (ctx.root.read().await.upgrade())
|
||||
.expect("LSModule called when root isn't in context");
|
||||
let root_data = &*root.0.read().await;
|
||||
@@ -219,7 +227,7 @@ impl Extension {
|
||||
}
|
||||
Ok(api::ModuleInfo { members })
|
||||
};
|
||||
hand.handle(ls, &reply).await
|
||||
handle.reply(ls, &reply).await
|
||||
},
|
||||
api::ExtHostReq::ResolveNames(ref rn) => {
|
||||
let api::ResolveNames { constid, names, sys } = rn;
|
||||
@@ -231,44 +239,63 @@ impl Extension {
|
||||
};
|
||||
let responses = stream(async |mut cx| {
|
||||
for name in names {
|
||||
cx.emit(match resolver(&ctx.i.ex(*name).await[..]).await {
|
||||
Ok(abs) => Ok(abs.to_sym(&ctx.i).await.to_api()),
|
||||
Err(e) => Err(e.to_api()),
|
||||
cx.emit(match resolver(&ev(*name).await[..]).await {
|
||||
Ok(abs) => {
|
||||
let sym = abs.to_sym().await;
|
||||
this.0.string_vecs.borrow_mut().insert(sym.tok());
|
||||
Ok(sym.to_api())
|
||||
},
|
||||
Err(e) => {
|
||||
(this.0.strings.borrow_mut())
|
||||
.extend(e.iter().map(|e| e.description.clone()));
|
||||
Err(e.to_api())
|
||||
},
|
||||
})
|
||||
.await
|
||||
}
|
||||
})
|
||||
.collect()
|
||||
.await;
|
||||
hand.handle(rn, &responses).await
|
||||
handle.reply(rn, &responses).await
|
||||
},
|
||||
api::ExtHostReq::ExtAtomPrint(ref eap @ api::ExtAtomPrint(ref atom)) => {
|
||||
let atom = AtomHand::from_api(atom, Pos::None, &mut ctx.clone()).await;
|
||||
let unit = atom.print(&FmtCtxImpl { i: &this.ctx().i }).await;
|
||||
hand.handle(eap, &unit.to_api()).await
|
||||
let unit = atom.print(&FmtCtxImpl::default()).await;
|
||||
handle.reply(eap, &unit.to_api()).await
|
||||
},
|
||||
}
|
||||
})
|
||||
}
|
||||
},
|
||||
),
|
||||
.await
|
||||
},
|
||||
)
|
||||
.await
|
||||
.unwrap();
|
||||
});
|
||||
ExtensionData {
|
||||
name: header2.name.clone(),
|
||||
ctx: ctx2,
|
||||
systems: (header.systems.iter().cloned())
|
||||
.map(|decl| SystemCtor { decl, ext: WeakExtension(weak2.clone()) })
|
||||
.collect(),
|
||||
join_ext: Some(join_ext),
|
||||
next_pars: RefCell::new(NonZeroU64::new(1).unwrap()),
|
||||
lex_recur: Mutex::default(),
|
||||
client: Rc::new(client),
|
||||
strings: RefCell::default(),
|
||||
string_vecs: RefCell::default(),
|
||||
}
|
||||
})))
|
||||
}
|
||||
pub fn name(&self) -> &String { &self.0.name }
|
||||
#[must_use]
|
||||
pub fn reqnot(&self) -> &ReqNot<api::HostMsgSet> { &self.0.reqnot }
|
||||
pub fn client(&self) -> &dyn Client { &*self.0.client }
|
||||
#[must_use]
|
||||
pub fn ctx(&self) -> &Ctx { &self.0.ctx }
|
||||
#[must_use]
|
||||
pub fn logger(&self) -> &Logger { &self.0.logger }
|
||||
pub fn system_ctors(&self) -> impl Iterator<Item = &SystemCtor> { self.0.systems.iter() }
|
||||
#[must_use]
|
||||
pub fn exprs(&self) -> &ExprStore { &self.0.exprs }
|
||||
#[must_use]
|
||||
pub async fn is_own_sys(&self, id: api::SysId) -> bool {
|
||||
let Some(sys) = self.ctx().system_inst(id).await else {
|
||||
writeln!(self.logger(), "Invalid system ID {id:?}");
|
||||
writeln!(log("warn"), "Invalid system ID {id:?}").await;
|
||||
return false;
|
||||
};
|
||||
Rc::ptr_eq(&self.0, &sys.ext().0)
|
||||
@@ -281,7 +308,7 @@ impl Extension {
|
||||
}
|
||||
pub(crate) async fn lex_req<F: Future<Output = Option<api::SubLexed>>>(
|
||||
&self,
|
||||
source: Tok<String>,
|
||||
source: IStr,
|
||||
src: Sym,
|
||||
pos: u32,
|
||||
sys: api::SysId,
|
||||
@@ -294,9 +321,10 @@ impl Extension {
|
||||
self.0.lex_recur.lock().await.insert(id, req_in); // lex_recur released
|
||||
let (ret, ()) = join(
|
||||
async {
|
||||
let res = (self.reqnot())
|
||||
let res = (self.client())
|
||||
.request(api::LexExpr { id, pos, sys, src: src.to_api(), text: source.to_api() })
|
||||
.await;
|
||||
.await
|
||||
.unwrap();
|
||||
// collect sender to unblock recursion handler branch before returning
|
||||
self.0.lex_recur.lock().await.remove(&id);
|
||||
res
|
||||
@@ -313,10 +341,10 @@ impl Extension {
|
||||
}
|
||||
pub fn system_drop(&self, id: api::SysId) {
|
||||
let rc = self.clone();
|
||||
(self.ctx().spawn)(Box::pin(async move {
|
||||
rc.reqnot().request(api::SystemDrop(id)).await;
|
||||
let _ = self.ctx().spawn(with_stash(async move {
|
||||
rc.client().request(api::SystemDrop(id)).await.unwrap();
|
||||
rc.ctx().systems.write().await.remove(&id);
|
||||
}))
|
||||
}));
|
||||
}
|
||||
#[must_use]
|
||||
pub fn downgrade(&self) -> WeakExtension { WeakExtension(Rc::downgrade(&self.0)) }
|
||||
|
||||
@@ -1,10 +1,8 @@
|
||||
use std::rc::Rc;
|
||||
|
||||
use futures::FutureExt;
|
||||
use futures::lock::Mutex;
|
||||
use orchid_base::clone;
|
||||
use orchid_base::error::{OrcErrv, OrcRes, mk_errv};
|
||||
use orchid_base::interner::Tok;
|
||||
use orchid_base::interner::{IStr, is};
|
||||
use orchid_base::location::SrcRange;
|
||||
use orchid_base::name::Sym;
|
||||
use orchid_base::parse::{name_char, name_start, op_char, unrep_space};
|
||||
@@ -13,14 +11,14 @@ use orchid_base::tree::recur;
|
||||
|
||||
use crate::api;
|
||||
use crate::ctx::Ctx;
|
||||
use crate::expr::{Expr, ExprParseCtx};
|
||||
use crate::expr::Expr;
|
||||
use crate::expr_store::ExprStore;
|
||||
use crate::parsed::{ParsTok, ParsTokTree, tt_to_api};
|
||||
use crate::system::System;
|
||||
|
||||
pub struct LexCtx<'a> {
|
||||
pub systems: &'a [System],
|
||||
pub source: &'a Tok<String>,
|
||||
pub source: &'a IStr,
|
||||
pub path: &'a Sym,
|
||||
pub tail: &'a str,
|
||||
pub sub_trees: &'a mut Vec<Expr>,
|
||||
@@ -60,14 +58,7 @@ impl<'a> LexCtx<'a> {
|
||||
}
|
||||
#[must_use]
|
||||
pub async fn des_subtree(&mut self, tree: &api::TokenTree, exprs: ExprStore) -> ParsTokTree {
|
||||
ParsTokTree::from_api(
|
||||
tree,
|
||||
&mut { exprs },
|
||||
&mut ExprParseCtx { ctx: self.ctx, exprs: &self.ctx.common_exprs },
|
||||
self.path,
|
||||
&self.ctx.i,
|
||||
)
|
||||
.await
|
||||
ParsTokTree::from_api(tree, &mut { exprs }, &mut self.ctx.clone(), self.path).await
|
||||
}
|
||||
#[must_use]
|
||||
pub fn strip_char(&mut self, tgt: char) -> bool {
|
||||
@@ -105,21 +96,21 @@ pub async fn lex_once(ctx: &mut LexCtx<'_>) -> OrcRes<ParsTokTree> {
|
||||
let name = &ctx.tail[..ctx.tail.len() - tail.len() - "::".len()];
|
||||
ctx.set_tail(tail);
|
||||
let body = lex_once(ctx).boxed_local().await?;
|
||||
ParsTok::NS(ctx.ctx.i.i(name).await, Box::new(body))
|
||||
ParsTok::NS(is(name).await, Box::new(body))
|
||||
} else if ctx.strip_prefix("--[") {
|
||||
let Some((cmt, tail)) = ctx.tail.split_once("]--") else {
|
||||
return Err(mk_errv(
|
||||
ctx.ctx.i.i("Unterminated block comment").await,
|
||||
is("Unterminated block comment").await,
|
||||
"This block comment has no ending ]--",
|
||||
[SrcRange::new(start..start + 3, ctx.path)],
|
||||
));
|
||||
};
|
||||
ctx.set_tail(tail);
|
||||
ParsTok::Comment(Rc::new(cmt.to_string()))
|
||||
ParsTok::Comment(is(cmt).await)
|
||||
} else if let Some(tail) = ctx.tail.strip_prefix("--").filter(|t| !t.starts_with(op_char)) {
|
||||
let end = tail.find(['\n', '\r']).map_or(tail.len(), |n| n - 1);
|
||||
ctx.push_pos(end as u32);
|
||||
ParsTok::Comment(Rc::new(tail[2..end].to_string()))
|
||||
ParsTok::Comment(is(&tail[2..end]).await)
|
||||
} else if let Some(tail) = ctx.tail.strip_prefix('\\').filter(|t| t.starts_with(name_start)) {
|
||||
// fanciness like \$placeh in templates is resolved in the macro engine.
|
||||
ctx.set_tail(tail);
|
||||
@@ -132,7 +123,7 @@ pub async fn lex_once(ctx: &mut LexCtx<'_>) -> OrcRes<ParsTokTree> {
|
||||
while !ctx.strip_char(*rp) {
|
||||
if ctx.tail.is_empty() {
|
||||
return Err(mk_errv(
|
||||
ctx.ctx.i.i("unclosed paren").await,
|
||||
is("unclosed paren").await,
|
||||
format!("this {lp} has no matching {rp}"),
|
||||
[SrcRange::new(start..start + 1, ctx.path)],
|
||||
));
|
||||
@@ -146,9 +137,9 @@ pub async fn lex_once(ctx: &mut LexCtx<'_>) -> OrcRes<ParsTokTree> {
|
||||
let mut errors = Vec::new();
|
||||
if ctx.tail.starts_with(|c| sys.can_lex(c)) {
|
||||
let (source, pos, path) = (ctx.source.clone(), ctx.get_pos(), ctx.path.clone());
|
||||
let temp_store = ctx.ctx.exprs.derive();
|
||||
let ctx_lck = &Mutex::new(&mut *ctx);
|
||||
let errors_lck = &Mutex::new(&mut errors);
|
||||
let temp_store = sys.ext().exprs().derive(true);
|
||||
let temp_store_cb = temp_store.clone();
|
||||
let lx = sys
|
||||
.lex(source, path, pos, |pos| {
|
||||
@@ -169,10 +160,7 @@ pub async fn lex_once(ctx: &mut LexCtx<'_>) -> OrcRes<ParsTokTree> {
|
||||
})
|
||||
.await;
|
||||
match lx {
|
||||
Err(e) =>
|
||||
return Err(
|
||||
errors.into_iter().fold(OrcErrv::from_api(&e, &ctx.ctx.i).await, |a, b| a + b),
|
||||
),
|
||||
Err(e) => return Err(errors.into_iter().fold(OrcErrv::from_api(&e).await, |a, b| a + b)),
|
||||
Ok(Some(lexed)) => {
|
||||
ctx.set_pos(lexed.pos);
|
||||
let lexed_tree = ctx.des_subtree(&lexed.expr, temp_store).await;
|
||||
@@ -192,12 +180,12 @@ pub async fn lex_once(ctx: &mut LexCtx<'_>) -> OrcRes<ParsTokTree> {
|
||||
}
|
||||
}
|
||||
if ctx.tail.starts_with(name_start) {
|
||||
ParsTok::Name(ctx.ctx.i.i(ctx.get_start_matches(name_char)).await)
|
||||
ParsTok::Name(is(ctx.get_start_matches(name_char)).await)
|
||||
} else if ctx.tail.starts_with(op_char) {
|
||||
ParsTok::Name(ctx.ctx.i.i(ctx.get_start_matches(op_char)).await)
|
||||
ParsTok::Name(is(ctx.get_start_matches(op_char)).await)
|
||||
} else {
|
||||
return Err(mk_errv(
|
||||
ctx.ctx.i.i("Unrecognized character").await,
|
||||
is("Unrecognized character").await,
|
||||
"The following syntax is meaningless.",
|
||||
[SrcRange::new(start..start + 1, ctx.path)],
|
||||
));
|
||||
@@ -206,12 +194,7 @@ pub async fn lex_once(ctx: &mut LexCtx<'_>) -> OrcRes<ParsTokTree> {
|
||||
Ok(ParsTokTree { tok, sr: SrcRange::new(start..ctx.get_pos(), ctx.path) })
|
||||
}
|
||||
|
||||
pub async fn lex(
|
||||
text: Tok<String>,
|
||||
path: Sym,
|
||||
systems: &[System],
|
||||
ctx: &Ctx,
|
||||
) -> OrcRes<Vec<ParsTokTree>> {
|
||||
pub async fn lex(text: IStr, path: Sym, systems: &[System], ctx: &Ctx) -> OrcRes<Vec<ParsTokTree>> {
|
||||
let mut sub_trees = Vec::new();
|
||||
let mut ctx =
|
||||
LexCtx { source: &text, sub_trees: &mut sub_trees, tail: &text[..], systems, path: &path, ctx };
|
||||
|
||||
@@ -3,11 +3,13 @@ use orchid_api as api;
|
||||
pub mod atom;
|
||||
pub mod ctx;
|
||||
pub mod dealias;
|
||||
pub mod dylib;
|
||||
pub mod execute;
|
||||
pub mod expr;
|
||||
pub mod expr_store;
|
||||
pub mod extension;
|
||||
pub mod lex;
|
||||
pub mod logger;
|
||||
pub mod parse;
|
||||
pub mod parsed;
|
||||
pub mod subprocess;
|
||||
|
||||
84
orchid-host/src/logger.rs
Normal file
@@ -0,0 +1,84 @@
|
||||
use std::fmt::Arguments;
|
||||
use std::fs::File;
|
||||
use std::io::{Write, stderr};
|
||||
use std::rc::Rc;
|
||||
|
||||
use futures::future::LocalBoxFuture;
|
||||
use hashbrown::HashMap;
|
||||
use itertools::Itertools;
|
||||
use orchid_base::logging::{LogWriter, Logger};
|
||||
|
||||
use crate::api;
|
||||
|
||||
pub struct LogWriterImpl(api::LogStrategy);
|
||||
impl LogWriter for LogWriterImpl {
|
||||
fn write_fmt<'a>(&'a self, fmt: Arguments<'a>) -> LocalBoxFuture<'a, ()> {
|
||||
Box::pin(async move {
|
||||
match &self.0 {
|
||||
api::LogStrategy::Discard => (),
|
||||
api::LogStrategy::Default => {
|
||||
stderr().write_fmt(fmt).expect("Could not write to stderr!");
|
||||
stderr().flush().expect("Could not flush stderr")
|
||||
},
|
||||
api::LogStrategy::File { path, .. } => {
|
||||
let mut file = (File::options().append(true).create(true).open(path))
|
||||
.unwrap_or_else(|e| panic!("Could not open {path}: {e}"));
|
||||
file.write_fmt(fmt).unwrap_or_else(|e| panic!("Could not write to {path}: {e}"));
|
||||
},
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Clone, Default)]
|
||||
pub struct LoggerImpl {
|
||||
routing: HashMap<String, api::LogStrategy>,
|
||||
default: Option<api::LogStrategy>,
|
||||
}
|
||||
impl LoggerImpl {
|
||||
pub fn to_api(&self) -> api::Logger {
|
||||
api::Logger {
|
||||
default: self.default.clone(),
|
||||
routing: self.routing.iter().map(|(k, v)| (k.clone(), v.clone())).collect(),
|
||||
}
|
||||
}
|
||||
|
||||
pub fn new(
|
||||
default: Option<api::LogStrategy>,
|
||||
strats: impl IntoIterator<Item = (String, api::LogStrategy)>,
|
||||
) -> Self {
|
||||
Self { routing: strats.into_iter().collect(), default }
|
||||
}
|
||||
pub fn set_default(&mut self, strat: api::LogStrategy) { self.default = Some(strat) }
|
||||
pub fn clear_default(&mut self) { self.default = None }
|
||||
pub fn set_category(&mut self, category: &str, strat: api::LogStrategy) {
|
||||
self.routing.insert(category.to_string(), strat);
|
||||
}
|
||||
pub fn with_default(mut self, strat: api::LogStrategy) -> Self {
|
||||
self.set_default(strat);
|
||||
self
|
||||
}
|
||||
pub fn with_category(mut self, category: &str, strat: api::LogStrategy) -> Self {
|
||||
self.set_category(category, strat);
|
||||
self
|
||||
}
|
||||
pub async fn log(&self, category: &str, msg: impl AsRef<str>) {
|
||||
writeln!(self.writer(category), "{}", msg.as_ref()).await
|
||||
}
|
||||
pub fn has_category(&self, category: &str) -> bool { self.routing.contains_key(category) }
|
||||
pub async fn log_buf(&self, category: &str, event: impl AsRef<str>, buf: &[u8]) {
|
||||
if std::env::var("ORCHID_LOG_BUFFERS").is_ok_and(|v| !v.is_empty()) {
|
||||
let data = buf.iter().map(|b| format!("{b:02x}")).join(" ");
|
||||
writeln!(self.writer(category), "{}: [{data}]", event.as_ref()).await
|
||||
}
|
||||
}
|
||||
}
|
||||
impl Logger for LoggerImpl {
|
||||
fn writer(&self, category: &str) -> Rc<dyn LogWriter> {
|
||||
Rc::new(LogWriterImpl(self.strat(category).clone()))
|
||||
}
|
||||
fn strat(&self, category: &str) -> api::LogStrategy {
|
||||
(self.routing.get(category).cloned().or(self.default.clone()))
|
||||
.expect("Invalid category and catchall logger not set")
|
||||
}
|
||||
}
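
A configuration sketch using the builders above; the category names are illustrative and the LogStrategy variants are the ones matched in LogWriterImpl.

let logger = LoggerImpl::default()
  .with_default(api::LogStrategy::Default)          // everything else goes to stderr
  .with_category("msg", api::LogStrategy::Discard); // silence message traces
logger.log("debug", "host starting").await;         // routed via the default strategy
assert!(logger.has_category("msg") && !logger.has_category("debug"));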
|
||||
@@ -1,12 +1,11 @@
|
||||
use futures::future::join_all;
|
||||
use futures::FutureExt;
|
||||
use itertools::Itertools;
|
||||
use orchid_base::error::{OrcRes, Reporter, mk_errv};
|
||||
use orchid_base::error::{OrcRes, mk_errv, report};
|
||||
use orchid_base::format::fmt;
|
||||
use orchid_base::interner::{Interner, Tok};
|
||||
use orchid_base::interner::{IStr, is};
|
||||
use orchid_base::name::Sym;
|
||||
use orchid_base::parse::{
|
||||
Comment, Import, ParseCtx, Parsed, Snippet, expect_end, line_items, parse_multiname,
|
||||
try_pop_no_fluff,
|
||||
Comment, Import, Parsed, Snippet, expect_end, line_items, parse_multiname, try_pop_no_fluff,
|
||||
};
|
||||
use orchid_base::tree::{Paren, TokTree, Token};
|
||||
use substack::Substack;
|
||||
@@ -22,12 +21,6 @@ pub struct HostParseCtxImpl<'a> {
|
||||
pub ctx: Ctx,
|
||||
pub src: Sym,
|
||||
pub systems: &'a [System],
|
||||
pub rep: &'a Reporter,
|
||||
}
|
||||
|
||||
impl ParseCtx for HostParseCtxImpl<'_> {
|
||||
fn rep(&self) -> &Reporter { self.rep }
|
||||
fn i(&self) -> &Interner { &self.ctx.i }
|
||||
}
|
||||
|
||||
impl HostParseCtx for HostParseCtxImpl<'_> {
|
||||
@@ -36,7 +29,7 @@ impl HostParseCtx for HostParseCtxImpl<'_> {
|
||||
fn src_path(&self) -> Sym { self.src.clone() }
|
||||
}
|
||||
|
||||
pub trait HostParseCtx: ParseCtx {
|
||||
pub trait HostParseCtx {
|
||||
#[must_use]
|
||||
fn ctx(&self) -> &Ctx;
|
||||
#[must_use]
|
||||
@@ -47,34 +40,39 @@ pub trait HostParseCtx: ParseCtx {
|
||||
|
||||
pub async fn parse_items(
|
||||
ctx: &impl HostParseCtx,
|
||||
path: Substack<'_, Tok<String>>,
|
||||
path: Substack<'_, IStr>,
|
||||
items: ParsSnippet<'_>,
|
||||
) -> OrcRes<Vec<Item>> {
|
||||
let lines = line_items(ctx, items).await;
|
||||
let line_res =
|
||||
join_all(lines.into_iter().map(|p| parse_item(ctx, path.clone(), p.output, p.tail))).await;
|
||||
Ok(line_res.into_iter().flat_map(|l| l.ok().into_iter().flatten()).collect())
|
||||
let lines = line_items(items).await;
|
||||
let mut line_ok = Vec::new();
|
||||
for Parsed { output: comments, tail } in lines {
|
||||
match parse_item(ctx, path.clone(), comments, tail).boxed_local().await {
|
||||
Err(e) => report(e),
|
||||
Ok(l) => line_ok.extend(l),
|
||||
}
|
||||
}
|
||||
Ok(line_ok)
|
||||
}
|
||||
|
||||
pub async fn parse_item(
|
||||
ctx: &impl HostParseCtx,
|
||||
path: Substack<'_, Tok<String>>,
|
||||
path: Substack<'_, IStr>,
|
||||
comments: Vec<Comment>,
|
||||
item: ParsSnippet<'_>,
|
||||
) -> OrcRes<Vec<Item>> {
|
||||
match item.pop_front() {
|
||||
Some((TokTree { tok: Token::Name(n), .. }, postdisc)) => match n {
|
||||
n if *n == ctx.i().i("export").await => match try_pop_no_fluff(ctx, postdisc).await? {
|
||||
n if *n == is("export").await => match try_pop_no_fluff(postdisc).await? {
|
||||
Parsed { output: TokTree { tok: Token::Name(n), .. }, tail } =>
|
||||
parse_exportable_item(ctx, path, comments, true, n.clone(), tail).await,
|
||||
Parsed { output, tail: _ } => Err(mk_errv(
|
||||
ctx.i().i("Malformed export").await,
|
||||
is("Malformed export").await,
|
||||
"`export` can either prefix other lines or list names inside ( )",
|
||||
[output.sr()],
|
||||
)),
|
||||
},
|
||||
n if *n == ctx.i().i("import").await => {
|
||||
let imports = parse_import(ctx, postdisc).await?;
|
||||
n if *n == is("import").await => {
|
||||
let imports = parse_import(postdisc).await?;
|
||||
Ok(Vec::from_iter(imports.into_iter().map(|t| Item {
|
||||
comments: comments.clone(),
|
||||
sr: t.sr.clone(),
|
||||
@@ -83,33 +81,29 @@ pub async fn parse_item(
|
||||
},
|
||||
n => parse_exportable_item(ctx, path, comments, false, n.clone(), postdisc).await,
|
||||
},
|
||||
Some(_) => Err(mk_errv(
|
||||
ctx.i().i("Expected a line type").await,
|
||||
"All lines must begin with a keyword",
|
||||
[item.sr()],
|
||||
)),
|
||||
Some(_) =>
|
||||
Err(mk_errv(is("Expected a line type").await, "All lines must begin with a keyword", [
|
||||
item.sr()
|
||||
])),
|
||||
None => unreachable!("These lines are filtered and aggregated in earlier stages"),
|
||||
}
|
||||
}
|
||||
|
||||
pub async fn parse_import<'a>(
|
||||
ctx: &impl HostParseCtx,
|
||||
tail: ParsSnippet<'a>,
|
||||
) -> OrcRes<Vec<Import>> {
|
||||
let Parsed { output: imports, tail } = parse_multiname(ctx, tail).await?;
|
||||
expect_end(ctx, tail).await?;
|
||||
pub async fn parse_import<'a>(tail: ParsSnippet<'a>) -> OrcRes<Vec<Import>> {
|
||||
let Parsed { output: imports, tail } = parse_multiname(tail).await?;
|
||||
expect_end(tail).await?;
|
||||
Ok(imports)
|
||||
}
|
||||
|
||||
pub async fn parse_exportable_item<'a>(
|
||||
ctx: &impl HostParseCtx,
|
||||
path: Substack<'_, Tok<String>>,
|
||||
path: Substack<'_, IStr>,
|
||||
comments: Vec<Comment>,
|
||||
exported: bool,
|
||||
discr: Tok<String>,
|
||||
discr: IStr,
|
||||
tail: ParsSnippet<'a>,
|
||||
) -> OrcRes<Vec<Item>> {
|
||||
let kind = if discr == ctx.i().i("mod").await {
|
||||
let kind = if discr == is("mod").await {
|
||||
let (name, body) = parse_module(ctx, path, tail).await?;
|
||||
ItemKind::Member(ParsedMember { name, exported, kind: ParsedMemberKind::Mod(body) })
|
||||
} else if let Some(parser) = ctx.systems().find_map(|s| s.get_parser(discr.clone())) {
|
||||
@@ -122,7 +116,7 @@ pub async fn parse_exportable_item<'a>(
|
||||
} else {
|
||||
let ext_lines = ctx.systems().flat_map(System::line_types).join(", ");
|
||||
return Err(mk_errv(
|
||||
ctx.i().i("Unrecognized line type").await,
|
||||
is("Unrecognized line type").await,
|
||||
format!("Line types are: mod, {ext_lines}"),
|
||||
[tail.prev().sr()],
|
||||
));
|
||||
@@ -132,25 +126,25 @@ pub async fn parse_exportable_item<'a>(
|
||||
|
||||
pub async fn parse_module<'a>(
|
||||
ctx: &impl HostParseCtx,
|
||||
path: Substack<'_, Tok<String>>,
|
||||
path: Substack<'_, IStr>,
|
||||
tail: ParsSnippet<'a>,
|
||||
) -> OrcRes<(Tok<String>, ParsedModule)> {
|
||||
let (name, tail) = match try_pop_no_fluff(ctx, tail).await? {
|
||||
) -> OrcRes<(IStr, ParsedModule)> {
|
||||
let (name, tail) = match try_pop_no_fluff(tail).await? {
|
||||
Parsed { output: TokTree { tok: Token::Name(n), .. }, tail } => (n.clone(), tail),
|
||||
Parsed { output, .. } => {
|
||||
return Err(mk_errv(
|
||||
ctx.i().i("Missing module name").await,
|
||||
format!("A name was expected, {} was found", fmt(output, ctx.i()).await),
|
||||
is("Missing module name").await,
|
||||
format!("A name was expected, {} was found", fmt(output).await),
|
||||
[output.sr()],
|
||||
));
|
||||
},
|
||||
};
|
||||
let Parsed { output, tail: surplus } = try_pop_no_fluff(ctx, tail).await?;
|
||||
expect_end(ctx, surplus).await?;
|
||||
let Parsed { output, tail: surplus } = try_pop_no_fluff(tail).await?;
|
||||
expect_end(surplus).await?;
|
||||
let Some(body) = output.as_s(Paren::Round) else {
|
||||
return Err(mk_errv(
|
||||
ctx.i().i("Expected module body").await,
|
||||
format!("A ( block ) was expected, {} was found", fmt(output, ctx.i()).await),
|
||||
is("Expected module body").await,
|
||||
format!("A ( block ) was expected, {} was found", fmt(output).await),
|
||||
[output.sr()],
|
||||
));
|
||||
};
|
||||
|
||||
@@ -6,7 +6,7 @@ use futures::future::{LocalBoxFuture, join_all};
|
||||
use hashbrown::HashSet;
|
||||
use itertools::Itertools;
|
||||
use orchid_base::format::{FmtCtx, FmtUnit, Format, Variants};
|
||||
use orchid_base::interner::Tok;
|
||||
use orchid_base::interner::{IStr, IStrv};
|
||||
use orchid_base::location::SrcRange;
|
||||
use orchid_base::parse::{Comment, Import};
|
||||
use orchid_base::tl_cache;
|
||||
@@ -57,10 +57,10 @@ impl Format for Item {
|
||||
ItemKind::Member(mem) => match &mem.kind {
|
||||
ParsedMemberKind::Const(_, sys) =>
|
||||
tl_cache!(Rc<Variants>: Rc::new(Variants::default().bounded("const {0} via {1}")))
|
||||
.units([mem.name.rc().into(), sys.print(c).await]),
|
||||
.units([mem.name.to_string().into(), sys.print(c).await]),
|
||||
ParsedMemberKind::Mod(module) =>
|
||||
tl_cache!(Rc<Variants>: Rc::new(Variants::default().bounded("module {0} {{\n\t{1}\n}}")))
|
||||
.units([mem.name.rc().into(), module.print(c).boxed_local().await]),
|
||||
.units([mem.name.to_string().into(), module.print(c).boxed_local().await]),
|
||||
},
|
||||
};
|
||||
tl_cache!(Rc<Variants>: Rc::new(Variants::default().bounded("{0}\n{1}")))
|
||||
@@ -69,14 +69,14 @@ impl Format for Item {
|
||||
}
|
||||
|
||||
pub struct ParsedMember {
|
||||
pub name: Tok<String>,
|
||||
pub name: IStr,
|
||||
pub exported: bool,
|
||||
pub kind: ParsedMemberKind,
|
||||
}
|
||||
impl ParsedMember {
|
||||
#[must_use]
|
||||
pub fn name(&self) -> Tok<String> { self.name.clone() }
|
||||
pub fn new(exported: bool, name: Tok<String>, kind: impl Into<ParsedMemberKind>) -> Self {
|
||||
pub fn name(&self) -> IStr { self.name.clone() }
|
||||
pub fn new(exported: bool, name: IStr, kind: impl Into<ParsedMemberKind>) -> Self {
|
||||
Self { exported, name, kind: kind.into() }
|
||||
}
|
||||
}
|
||||
@@ -89,17 +89,14 @@ impl Debug for ParsedMember {
|
||||
}
|
||||
}
|
||||
|
||||
pub(crate) type ParsedExprCallback =
|
||||
Rc<dyn for<'a> Fn(&'a [Tok<String>]) -> LocalBoxFuture<'a, Expr>>;
|
||||
pub(crate) type ParsedExprCallback = Rc<dyn for<'a> Fn(&'a [IStr]) -> LocalBoxFuture<'a, Expr>>;
|
||||
|
||||
pub struct ParsedExpr {
|
||||
pub(crate) debug: String,
|
||||
pub(crate) callback: ParsedExprCallback,
|
||||
}
|
||||
impl ParsedExpr {
|
||||
pub async fn run(self, imported_names: &[Tok<String>]) -> Expr {
|
||||
(self.callback)(imported_names).await
|
||||
}
|
||||
pub async fn run(self, imported_names: &[IStr]) -> Expr { (self.callback)(imported_names).await }
|
||||
}
|
||||
impl fmt::Debug for ParsedExpr {
|
||||
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { write!(f, "{}", self.debug) }
|
||||
@@ -115,7 +112,7 @@ impl From<ParsedModule> for ParsedMemberKind {
|
||||
}
|
||||
#[derive(Debug, Default)]
|
||||
pub struct ParsedModule {
|
||||
pub exports: Vec<Tok<String>>,
|
||||
pub exports: Vec<IStr>,
|
||||
pub items: Vec<Item>,
|
||||
pub use_prelude: bool,
|
||||
}
|
||||
@@ -141,7 +138,7 @@ impl ParsedModule {
|
||||
(self.items.iter())
|
||||
.filter_map(|it| if let ItemKind::Import(i) = &it.kind { Some(i) } else { None })
|
||||
}
|
||||
pub fn default_item(self, name: Tok<String>, sr: SrcRange) -> Item {
|
||||
pub fn default_item(self, name: IStr, sr: SrcRange) -> Item {
|
||||
let mem = ParsedMember { exported: true, name, kind: ParsedMemberKind::Mod(self) };
|
||||
Item { comments: vec![], sr, kind: ItemKind::Member(mem) }
|
||||
}
|
||||
@@ -150,7 +147,7 @@ impl Tree for ParsedModule {
|
||||
type Ctx<'a> = ();
|
||||
async fn child(
|
||||
&self,
|
||||
key: Tok<String>,
|
||||
key: IStr,
|
||||
public_only: bool,
|
||||
(): &mut Self::Ctx<'_>,
|
||||
) -> ChildResult<'_, Self> {
|
||||
@@ -168,7 +165,7 @@ impl Tree for ParsedModule {
|
||||
}
|
||||
ChildResult::Err(ChildErrorKind::Missing)
|
||||
}
|
||||
fn children(&self, public_only: bool) -> HashSet<Tok<String>> {
|
||||
fn children(&self, public_only: bool) -> HashSet<IStr> {
|
||||
let mut public: HashSet<_> = self.exports.iter().cloned().collect();
|
||||
if !public_only {
|
||||
public.extend(
|
||||
@@ -185,7 +182,7 @@ impl Tree for ParsedModule {
|
||||
impl Format for ParsedModule {
|
||||
async fn print<'a>(&'a self, c: &'a (impl FmtCtx + ?Sized + 'a)) -> FmtUnit {
|
||||
let head_str = format!("export ::({})\n", self.exports.iter().join(", "));
|
||||
Variants::sequence(self.items.len() + 1, "\n", None).units(
|
||||
Variants::default().sequence(self.items.len() + 1, "", "\n", "", None).units_own(
|
||||
[head_str.into()].into_iter().chain(join_all(self.items.iter().map(|i| i.print(c))).await),
|
||||
)
|
||||
}
|
||||
@@ -197,11 +194,11 @@ impl Format for ParsedModule {
|
||||
/// point to a module and rule_loc selects a macro rule within that module
|
||||
#[derive(Clone, Debug, Hash, PartialEq, Eq)]
|
||||
pub struct ConstPath {
|
||||
steps: Tok<Vec<Tok<String>>>,
|
||||
steps: IStrv,
|
||||
}
|
||||
impl ConstPath {
|
||||
#[must_use]
|
||||
pub fn to_const(steps: Tok<Vec<Tok<String>>>) -> Self { Self { steps } }
|
||||
pub fn to_const(steps: IStrv) -> Self { Self { steps } }
|
||||
}
|
||||
|
||||
pub async fn tt_to_api(exprs: &mut ExprStore, subtree: ParsTokTree) -> api::TokenTree {
|
||||
|
||||
@@ -1,102 +1,34 @@
|
||||
use std::cell::RefCell;
|
||||
use std::io::{self, Write};
|
||||
use std::pin::Pin;
|
||||
use std::{io, process};
|
||||
|
||||
use async_process::{self, Child, ChildStdin, ChildStdout};
|
||||
use futures::future::LocalBoxFuture;
|
||||
use futures::io::BufReader;
|
||||
use futures::lock::Mutex;
|
||||
use futures::{self, AsyncBufReadExt, AsyncWriteExt};
|
||||
use orchid_api_traits::{Decode, Encode};
|
||||
use orchid_base::builtin::{ExtInit, ExtPort};
|
||||
use orchid_base::logging::Logger;
|
||||
use orchid_base::msg::{recv_msg, send_msg};
|
||||
use futures::{self, AsyncBufReadExt, StreamExt};
|
||||
use orchid_base::logging::log;
|
||||
#[cfg(feature = "tokio")]
|
||||
use tokio_util::compat::{TokioAsyncReadCompatExt, TokioAsyncWriteCompatExt};
|
||||
|
||||
use crate::api;
|
||||
use crate::ctx::Ctx;
|
||||
use crate::extension::ExtPort;
|
||||
|
||||
pub async fn ext_command(
|
||||
cmd: std::process::Command,
|
||||
logger: Logger,
|
||||
msg_logs: Logger,
|
||||
ctx: Ctx,
|
||||
) -> io::Result<ExtInit> {
|
||||
let mut child = async_process::Command::from(cmd)
|
||||
.stdin(async_process::Stdio::piped())
|
||||
.stdout(async_process::Stdio::piped())
|
||||
.stderr(async_process::Stdio::piped())
|
||||
#[cfg(feature = "tokio")]
|
||||
pub async fn ext_command(cmd: std::process::Command, ctx: Ctx) -> io::Result<ExtPort> {
|
||||
let name = cmd.get_program().to_string_lossy().to_string();
|
||||
let mut child = tokio::process::Command::from(cmd)
|
||||
.stdin(process::Stdio::piped())
|
||||
.stdout(process::Stdio::piped())
|
||||
.stderr(process::Stdio::piped())
|
||||
.spawn()?;
|
||||
let mut stdin = child.stdin.take().unwrap();
|
||||
api::HostHeader { log_strategy: logger.strat(), msg_logs: msg_logs.strat() }
|
||||
.encode(Pin::new(&mut stdin))
|
||||
.await;
|
||||
let mut stdout = child.stdout.take().unwrap();
|
||||
let header = api::ExtensionHeader::decode(Pin::new(&mut stdout)).await;
|
||||
std::thread::spawn(|| {});
|
||||
let stdin = child.stdin.take().unwrap();
|
||||
let stdout = child.stdout.take().unwrap();
|
||||
let child_stderr = child.stderr.take().unwrap();
|
||||
(ctx.spawn)(Box::pin(async move {
|
||||
let mut reader = BufReader::new(child_stderr);
|
||||
loop {
|
||||
let mut buf = String::new();
|
||||
if 0 == reader.read_line(&mut buf).await.unwrap() {
|
||||
break;
|
||||
}
|
||||
logger.log(buf.strip_suffix('\n').expect("Readline implies this"));
|
||||
let _ = ctx.spawn(Box::pin(async move {
|
||||
let _ = child;
|
||||
let mut lines = BufReader::new(child_stderr.compat()).lines();
|
||||
while let Some(line) = lines.next().await {
|
||||
// route stderr with an empty category string. This is not the intended logging
|
||||
// method
|
||||
writeln!(log("stderr"), "{} err> {}", name, line.expect("Readline implies this")).await;
|
||||
}
|
||||
}));
|
||||
Ok(ExtInit {
|
||||
port: Box::new(Subprocess {
|
||||
name: header.name.clone(),
|
||||
child: RefCell::new(Some(child)),
|
||||
stdin: Some(Mutex::new(Box::pin(stdin))),
|
||||
stdout: Mutex::new(Box::pin(stdout)),
|
||||
ctx,
|
||||
}),
|
||||
header,
|
||||
})
|
||||
}
|
||||
|
||||
pub struct Subprocess {
|
||||
name: String,
|
||||
child: RefCell<Option<Child>>,
|
||||
stdin: Option<Mutex<Pin<Box<ChildStdin>>>>,
|
||||
stdout: Mutex<Pin<Box<ChildStdout>>>,
|
||||
ctx: Ctx,
|
||||
}
|
||||
impl Drop for Subprocess {
|
||||
fn drop(&mut self) {
|
||||
let mut child = self.child.borrow_mut().take().unwrap();
|
||||
let name = self.name.clone();
|
||||
if std::thread::panicking() {
|
||||
eprintln!("Killing extension {name}");
|
||||
// we don't really care to handle errors here
|
||||
let _: Result<_, _> = std::io::stderr().flush();
|
||||
let _: Result<_, _> = child.kill();
|
||||
return;
|
||||
}
|
||||
let stdin = self.stdin.take().unwrap();
|
||||
(self.ctx.spawn)(Box::pin(async move {
|
||||
stdin.lock().await.close().await.unwrap();
|
||||
let status = (child.status().await)
|
||||
.unwrap_or_else(|e| panic!("{e}, extension {name} exited with error"));
|
||||
assert!(status.success(), "Extension {name} exited with error {status}");
|
||||
}))
|
||||
}
|
||||
}
|
||||
impl ExtPort for Subprocess {
|
||||
fn send<'a>(&'a self, msg: &'a [u8]) -> LocalBoxFuture<'a, ()> {
|
||||
Box::pin(async {
|
||||
send_msg(Pin::new(&mut *self.stdin.as_ref().unwrap().lock().await), msg).await.unwrap()
|
||||
})
|
||||
}
|
||||
fn recv(&self) -> LocalBoxFuture<'_, Option<Vec<u8>>> {
|
||||
Box::pin(async {
|
||||
std::io::Write::flush(&mut std::io::stderr()).unwrap();
|
||||
match recv_msg(self.stdout.lock().await.as_mut()).await {
|
||||
Ok(msg) => Some(msg),
|
||||
Err(e) if e.kind() == io::ErrorKind::BrokenPipe => None,
|
||||
Err(e) if e.kind() == io::ErrorKind::UnexpectedEof => None,
|
||||
Err(e) => panic!("Failed to read from stdout: {}, {e}", e.kind()),
|
||||
}
|
||||
})
|
||||
}
|
||||
Ok(ExtPort { input: Box::pin(stdin.compat_write()), output: Box::pin(stdout.compat()) })
|
||||
}
|
||||
|
||||
@@ -2,16 +2,15 @@ use futures::FutureExt;
|
||||
use futures::future::join_all;
|
||||
use itertools::Itertools;
|
||||
use orchid_base::error::{OrcErrv, OrcRes};
|
||||
use orchid_base::interner::{Interner, Tok};
|
||||
use orchid_base::interner::{IStr, es};
|
||||
use orchid_base::location::SrcRange;
|
||||
use orchid_base::name::Sym;
|
||||
use orchid_base::parse::Comment;
|
||||
use orchid_base::reqnot::Requester;
|
||||
use orchid_base::reqnot::ClientExt;
|
||||
use orchid_base::tree::ttv_from_api;
|
||||
use substack::Substack;
|
||||
|
||||
use crate::api;
|
||||
use crate::expr::ExprParseCtx;
|
||||
use crate::expr_store::ExprStore;
|
||||
use crate::parse::HostParseCtx;
|
||||
use crate::parsed::{
|
||||
@@ -23,7 +22,7 @@ pub struct Parser {
|
||||
pub(crate) system: System,
|
||||
pub(crate) idx: u16,
|
||||
}
|
||||
type ModPath<'a> = Substack<'a, Tok<String>>;
|
||||
type ModPath<'a> = Substack<'a, IStr>;
|
||||
|
||||
impl Parser {
|
||||
pub async fn parse(
|
||||
@@ -35,12 +34,12 @@ impl Parser {
|
||||
comments: Vec<Comment>,
|
||||
callback: &mut impl AsyncFnMut(ModPath<'_>, Vec<ParsTokTree>) -> OrcRes<Vec<Item>>,
|
||||
) -> OrcRes<Vec<Item>> {
|
||||
let mut temp_store = self.system.ext().exprs().derive(true);
|
||||
let mut temp_store = self.system.ctx().exprs.derive();
|
||||
let src_path = line.first().expect("cannot be empty").sr.path();
|
||||
let line =
|
||||
join_all((line.into_iter()).map(|t| async { tt_to_api(&mut temp_store.clone(), t).await }))
|
||||
.await;
|
||||
let mod_path = ctx.src_path().suffix(path.unreverse(), self.system.i()).await;
|
||||
let mod_path = ctx.src_path().suffix(path.unreverse()).await;
|
||||
let comments = comments.iter().map(Comment::to_api).collect_vec();
|
||||
let req = api::ParseLine {
|
||||
idx: self.idx,
|
||||
@@ -51,18 +50,16 @@ impl Parser {
|
||||
comments,
|
||||
line,
|
||||
};
|
||||
match self.system.reqnot().request(req).await {
|
||||
match self.system.client().request(req).await.unwrap() {
|
||||
Ok(parsed_v) =>
|
||||
conv(parsed_v, path, callback, &mut ConvCtx {
|
||||
i: self.system.i(),
|
||||
mod_path: &mod_path,
|
||||
ext_exprs: &mut temp_store,
|
||||
pctx: &mut ExprParseCtx { ctx: self.system.ctx(), exprs: self.system.ext().exprs() },
|
||||
src_path: &src_path,
|
||||
sys: &self.system,
|
||||
})
|
||||
.await,
|
||||
Err(e) => Err(OrcErrv::from_api(&e, &self.system.ctx().i).await),
|
||||
Err(e) => Err(OrcErrv::from_api(&e).await),
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -71,14 +68,12 @@ struct ConvCtx<'a> {
|
||||
sys: &'a System,
|
||||
mod_path: &'a Sym,
|
||||
src_path: &'a Sym,
|
||||
i: &'a Interner,
|
||||
ext_exprs: &'a mut ExprStore,
|
||||
pctx: &'a mut ExprParseCtx<'a>,
|
||||
}
|
||||
async fn conv(
|
||||
parsed_v: Vec<api::ParsedLine>,
|
||||
module: Substack<'_, Tok<String>>,
|
||||
callback: &'_ mut impl AsyncFnMut(Substack<'_, Tok<String>>, Vec<ParsTokTree>) -> OrcRes<Vec<Item>>,
|
||||
module: Substack<'_, IStr>,
|
||||
callback: &'_ mut impl AsyncFnMut(Substack<'_, IStr>, Vec<ParsTokTree>) -> OrcRes<Vec<Item>>,
|
||||
ctx: &mut ConvCtx<'_>,
|
||||
) -> OrcRes<Vec<Item>> {
|
||||
let mut items = Vec::new();
|
||||
@@ -87,28 +82,30 @@ async fn conv(
|
||||
api::ParsedLineKind::Member(api::ParsedMember { name, exported, kind }) =>
|
||||
(name, exported, kind),
|
||||
api::ParsedLineKind::Recursive(rec) => {
|
||||
let tokens = ttv_from_api(rec, ctx.ext_exprs, ctx.pctx, ctx.src_path, ctx.i).await;
|
||||
let tokens =
|
||||
ttv_from_api(rec, ctx.ext_exprs, &mut ctx.sys.ctx().clone(), ctx.src_path).await;
|
||||
items.extend(callback(module.clone(), tokens).await?);
|
||||
continue;
|
||||
},
|
||||
};
|
||||
let name = ctx.i.ex(name).await;
|
||||
let name = es(name).await;
|
||||
let mem_path = module.push(name.clone());
|
||||
let mkind = match kind {
|
||||
api::ParsedMemberKind::Module { lines, use_prelude } => {
|
||||
let items = conv(lines, module.push(name.clone()), callback, ctx).boxed_local().await?;
|
||||
let items = conv(lines, mem_path, callback, ctx).boxed_local().await?;
|
||||
ParsedMemberKind::Mod(ParsedModule::new(use_prelude, items))
|
||||
},
|
||||
api::ParsedMemberKind::Constant(cid) => {
|
||||
ctx.sys.0.const_paths.insert(cid, ctx.mod_path.suffix(module.unreverse(), ctx.i).await);
|
||||
ctx.sys.0.const_paths.insert(cid, ctx.mod_path.suffix(mem_path.unreverse()).await);
|
||||
ParsedMemberKind::Const(cid, ctx.sys.clone())
|
||||
},
|
||||
};
|
||||
items.push(Item {
|
||||
comments: join_all(
|
||||
parsed.comments.iter().map(|c| Comment::from_api(c, ctx.src_path.clone(), ctx.i)),
|
||||
parsed.comments.iter().map(|c| Comment::from_api(c, ctx.src_path.clone())),
|
||||
)
|
||||
.await,
|
||||
sr: SrcRange::from_api(&parsed.source_range, ctx.i).await,
|
||||
sr: SrcRange::from_api(&parsed.source_range).await,
|
||||
kind: ItemKind::Member(ParsedMember { name, exported, kind: mkind }),
|
||||
})
|
||||
}
|
||||
|
||||
@@ -3,19 +3,21 @@ use std::fmt;
|
||||
use std::future::Future;
|
||||
use std::rc::{Rc, Weak};
|
||||
|
||||
use async_lock::RwLock;
|
||||
use derive_destructure::destructure;
|
||||
use futures::future::join_all;
|
||||
use futures_locks::RwLock;
|
||||
use hashbrown::HashMap;
|
||||
use itertools::Itertools;
|
||||
use memo_map::MemoMap;
|
||||
use orchid_base::char_filter::char_filter_match;
|
||||
use orchid_base::error::{OrcRes, mk_errv_floating};
|
||||
use orchid_base::format::{FmtCtx, FmtUnit, Format};
|
||||
use orchid_base::interner::{Interner, Tok};
|
||||
use orchid_base::interner::{IStr, es, is};
|
||||
use orchid_base::iter_utils::IteratorPrint;
|
||||
use orchid_base::logging::log;
|
||||
use orchid_base::name::{NameLike, Sym, VName, VPath};
|
||||
use orchid_base::reqnot::{ReqNot, Requester};
|
||||
use orchid_base::reqnot::{Client, ClientExt};
|
||||
use orchid_base::stash::stash;
|
||||
use ordered_float::NotNan;
|
||||
use substack::{Stackframe, Substack};
|
||||
|
||||
@@ -35,7 +37,7 @@ pub(crate) struct SystemInstData {
|
||||
decl_id: api::SysDeclId,
|
||||
lex_filter: api::CharFilter,
|
||||
id: api::SysId,
|
||||
line_types: Vec<Tok<String>>,
|
||||
line_types: Vec<IStr>,
|
||||
prelude: Vec<Sym>,
|
||||
owned_atoms: RwLock<HashMap<api::AtomId, WeakAtomHand>>,
|
||||
pub(crate) const_paths: MemoMap<api::ParsedConstId, Sym>,
|
||||
@@ -68,8 +70,6 @@ impl System {
|
||||
#[must_use]
|
||||
pub fn ctx(&self) -> &Ctx { &self.0.ctx }
|
||||
#[must_use]
|
||||
pub fn i(&self) -> &Interner { &self.0.ctx.i }
|
||||
#[must_use]
|
||||
pub fn deps(&self) -> &[System] { &self.0.deps }
|
||||
#[must_use]
|
||||
pub fn ctor(&self) -> SystemCtor {
|
||||
@@ -77,22 +77,27 @@ impl System {
|
||||
.expect("Ctor was used to create ext")
|
||||
}
|
||||
#[must_use]
|
||||
pub(crate) fn reqnot(&self) -> &ReqNot<api::HostMsgSet> { self.0.ext.reqnot() }
|
||||
pub(crate) fn client(&self) -> &dyn Client { self.0.ext.client() }
|
||||
#[must_use]
|
||||
pub async fn get_tree(&self, id: api::TreeId) -> api::MemberKind {
|
||||
self.reqnot().request(api::GetMember(self.0.id, id)).await
|
||||
self.client().request(api::GetMember(self.0.id, id)).await.unwrap()
|
||||
}
|
||||
#[must_use]
|
||||
pub fn has_lexer(&self) -> bool { !self.0.lex_filter.0.is_empty() }
|
||||
#[must_use]
|
||||
pub fn can_lex(&self, c: char) -> bool { char_filter_match(&self.0.lex_filter, c) }
|
||||
pub fn can_lex(&self, c: char) -> bool {
|
||||
let ret = char_filter_match(&self.0.lex_filter, c);
|
||||
let ctor = self.ctor();
|
||||
stash(async move { writeln!(log("debug"), "{} can lex {c}: {}", ctor.name(), ret).await });
|
||||
ret
|
||||
}
|
||||
#[must_use]
|
||||
pub fn prelude(&self) -> Vec<Sym> { self.0.prelude.clone() }
|
||||
/// Have this system lex a part of the source. It is assumed that
|
||||
/// [Self::can_lex] was called and returned true.
|
||||
pub async fn lex<F: Future<Output = Option<api::SubLexed>>>(
|
||||
&self,
|
||||
source: Tok<String>,
|
||||
source: IStr,
|
||||
src: Sym,
|
||||
pos: u32,
|
||||
r: impl FnMut(u32) -> F,
|
||||
@@ -100,16 +105,16 @@ impl System {
|
||||
self.0.ext.lex_req(source, src, pos, self.id(), r).await
|
||||
}
|
||||
#[must_use]
|
||||
pub fn get_parser(&self, ltyp: Tok<String>) -> Option<Parser> {
|
||||
pub fn get_parser(&self, ltyp: IStr) -> Option<Parser> {
|
||||
(self.0.line_types.iter().enumerate())
|
||||
.find(|(_, txt)| *txt == <yp)
|
||||
.map(|(idx, _)| Parser { idx: idx as u16, system: self.clone() })
|
||||
}
|
||||
pub fn line_types(&self) -> impl Iterator<Item = &Tok<String>> + '_ { self.0.line_types.iter() }
|
||||
pub fn line_types(&self) -> impl Iterator<Item = &IStr> + '_ { self.0.line_types.iter() }
|
||||
|
||||
#[must_use]
|
||||
pub async fn request(&self, req: Vec<u8>) -> Vec<u8> {
|
||||
self.reqnot().request(api::SysFwded(self.id(), req)).await
|
||||
self.client().request(api::SysFwded(self.id(), req)).await.unwrap()
|
||||
}
|
||||
pub(crate) async fn new_atom(&self, data: Vec<u8>, id: api::AtomId) -> AtomHand {
|
||||
let mut owned_g = self.0.owned_atoms.write().await;
|
||||
@@ -124,10 +129,10 @@ impl System {
|
||||
}
|
||||
pub(crate) fn drop_atom(&self, dropped_atom_id: api::AtomId) {
|
||||
let this = self.0.clone();
|
||||
(self.0.ctx.spawn)(Box::pin(async move {
|
||||
this.ext.reqnot().request(api::AtomDrop(this.id, dropped_atom_id)).await;
|
||||
let _ = self.0.ctx.spawn(Box::pin(async move {
|
||||
this.ext.client().request(api::AtomDrop(this.id, dropped_atom_id)).await.unwrap();
|
||||
this.owned_atoms.write().await.remove(&dropped_atom_id);
|
||||
}))
|
||||
}));
|
||||
}
|
||||
#[must_use]
|
||||
pub fn downgrade(&self) -> WeakSystem {
|
||||
@@ -137,7 +142,7 @@ impl System {
|
||||
pub(crate) async fn name_resolver(
|
||||
&self,
|
||||
orig: api::ParsedConstId,
|
||||
) -> impl AsyncFnMut(&[Tok<String>]) -> OrcRes<VName> + use<> {
|
||||
) -> impl AsyncFnMut(&[IStr]) -> OrcRes<VName> + use<> {
|
||||
let root = self.0.ctx.root.read().await.upgrade().expect("find_names when root not in context");
|
||||
let orig = self.0.const_paths.get(&orig).expect("origin for find_names invalid").clone();
|
||||
let ctx = self.0.ctx.clone();
|
||||
@@ -155,7 +160,7 @@ impl System {
|
||||
Some(Ok(dest)) => return Ok(dest.target.to_vname().suffix(tail.iter().cloned())),
|
||||
Some(Err(dests)) =>
|
||||
return Err(mk_errv_floating(
|
||||
ctx.i.i("Ambiguous name").await,
|
||||
is("Ambiguous name").await,
|
||||
format!(
|
||||
"{selector} could refer to {}",
|
||||
dests.iter().map(|ri| &ri.target).display("or")
|
||||
@@ -163,11 +168,14 @@ impl System {
|
||||
)),
|
||||
None => (),
|
||||
}
|
||||
if root_data.root.members.get(selector).is_some() {
|
||||
return Ok(VName::new(rel.iter().cloned()).expect("split_first was called above"));
|
||||
}
|
||||
if tail.is_empty() {
|
||||
return Ok(VPath::new(cwd.iter().cloned()).name_with_suffix(selector.clone()));
|
||||
}
|
||||
Err(mk_errv_floating(
|
||||
ctx.i.i("Invalid name").await,
|
||||
is("Invalid name").await,
|
||||
format!("{selector} doesn't refer to a module"),
|
||||
))
|
||||
}
|
||||
@@ -200,8 +208,7 @@ impl SystemCtor {
|
||||
#[must_use]
|
||||
pub fn name(&self) -> &str { &self.decl.name }
|
||||
pub async fn name_tok(&self) -> Sym {
|
||||
(Sym::parse(&self.decl.name, &self.ext.upgrade().expect("ext dropped early").ctx().i).await)
|
||||
.expect("System cannot have empty name")
|
||||
(Sym::parse(&self.decl.name).await).expect("System cannot have empty name")
|
||||
}
|
||||
#[must_use]
|
||||
pub fn priority(&self) -> NotNan<f64> { self.decl.priority }
|
||||
@@ -217,17 +224,17 @@ impl SystemCtor {
|
||||
debug_assert_eq!(depends.len(), self.decl.depends.len(), "Wrong number of deps provided");
|
||||
let ext = self.ext.upgrade().expect("SystemCtor should be freed before Extension");
|
||||
let id = ext.ctx().next_sys_id();
|
||||
let sys_inst = ext.reqnot().request(api::NewSystem { depends, id, system: self.decl.id }).await;
|
||||
let sys_inst =
|
||||
ext.client().request(api::NewSystem { depends, id, system: self.decl.id }).await.unwrap();
|
||||
let data = System(Rc::new(SystemInstData {
|
||||
deps,
|
||||
decl_id: self.decl.id,
|
||||
ext: ext.clone(),
|
||||
ctx: ext.ctx().clone(),
|
||||
lex_filter: sys_inst.lex_filter,
|
||||
line_types: join_all(sys_inst.line_types.iter().map(|m| Tok::from_api(*m, &ext.ctx().i)))
|
||||
.await,
|
||||
line_types: join_all(sys_inst.line_types.iter().map(|m| es(*m))).await,
|
||||
id,
|
||||
prelude: join_all(sys_inst.prelude.iter().map(|tok| Sym::from_api(*tok, &ext.ctx().i))).await,
|
||||
prelude: join_all(sys_inst.prelude.iter().map(|tok| Sym::from_api(*tok))).await,
|
||||
owned_atoms: RwLock::new(HashMap::new()),
|
||||
const_paths: MemoMap::new(),
|
||||
}));
|
||||
|
||||
@@ -4,25 +4,25 @@ use std::cell::RefCell;
|
||||
use std::rc::{Rc, Weak};
|
||||
use std::slice;
|
||||
|
||||
use async_lock::RwLock;
|
||||
use async_once_cell::OnceCell;
|
||||
use derive_destructure::destructure;
|
||||
use futures::{FutureExt, StreamExt, stream};
|
||||
use futures_locks::RwLock;
|
||||
use hashbrown::HashMap;
|
||||
use hashbrown::hash_map::Entry;
|
||||
use itertools::Itertools;
|
||||
use memo_map::MemoMap;
|
||||
use orchid_base::clone;
|
||||
use orchid_base::error::{OrcRes, Reporter, mk_errv};
|
||||
use orchid_base::interner::Tok;
|
||||
use orchid_base::error::{OrcRes, mk_errv, report};
|
||||
use orchid_base::interner::{IStr, IStrv, es, is, iv};
|
||||
use orchid_base::location::{CodeGenInfo, Pos};
|
||||
use orchid_base::name::{NameLike, Sym, VPath};
|
||||
use orchid_base::reqnot::Requester;
|
||||
use orchid_base::reqnot::ClientExt;
|
||||
|
||||
use crate::api;
|
||||
use crate::ctx::Ctx;
|
||||
use crate::dealias::{ChildErrorKind, Tree, absolute_path, resolv_glob, walk};
|
||||
use crate::expr::{Expr, ExprParseCtx, PathSetBuilder};
|
||||
use crate::expr::{Expr, PathSetBuilder};
|
||||
use crate::parsed::{ItemKind, ParsedMemberKind, ParsedModule};
|
||||
use crate::system::System;
|
||||
|
||||
@@ -45,7 +45,7 @@ impl Root {
|
||||
#[must_use]
|
||||
pub async fn from_api(api: api::Module, sys: &System) -> Self {
|
||||
let consts = MemoMap::new();
|
||||
let mut tfac = TreeFromApiCtx { consts: &consts, path: sys.i().i(&[][..]).await, sys };
|
||||
let mut tfac = TreeFromApiCtx { consts: &consts, path: iv(&[][..]).await, sys };
|
||||
let root = Module::from_api(api, &mut tfac).await;
|
||||
Root(Rc::new(RwLock::new(RootData { root, consts, ctx: sys.ctx().clone() })))
|
||||
}
|
||||
@@ -60,7 +60,7 @@ impl Root {
|
||||
Ok(Self(Rc::new(RwLock::new(RootData { root, consts, ctx: this.ctx.clone() }))))
|
||||
}
|
||||
#[must_use]
|
||||
pub async fn add_parsed(&self, parsed: &ParsedModule, pars_prefix: Sym, rep: &Reporter) -> Self {
|
||||
pub async fn add_parsed(&self, parsed: &ParsedModule, pars_prefix: Sym) -> Self {
|
||||
let mut ref_this = self.0.write().await;
|
||||
let this = &mut *ref_this;
|
||||
let mut deferred_consts = HashMap::new();
|
||||
@@ -72,7 +72,6 @@ impl Root {
|
||||
pars_prefix: pars_prefix.clone(),
|
||||
root: &this.root,
|
||||
ctx: &this.ctx,
|
||||
rep,
|
||||
};
|
||||
let mut module = Module::from_parsed(parsed, pars_prefix.clone(), &mut tfpctx).await;
|
||||
for step in pars_prefix.iter().rev() {
|
||||
@@ -89,9 +88,8 @@ impl Root {
|
||||
*this.ctx.root.write().await = new.downgrade();
|
||||
for (path, (sys_id, pc_id)) in deferred_consts {
|
||||
let sys = this.ctx.system_inst(sys_id).await.expect("System dropped since parsing");
|
||||
let api_expr = sys.reqnot().request(api::FetchParsedConst(sys.id(), pc_id)).await;
|
||||
let mut xp_ctx = ExprParseCtx { ctx: &this.ctx, exprs: sys.ext().exprs() };
|
||||
let expr = Expr::from_api(&api_expr, PathSetBuilder::new(), &mut xp_ctx).await;
|
||||
let api_expr = sys.client().request(api::FetchParsedConst(sys.id(), pc_id)).await.unwrap();
|
||||
let expr = Expr::from_api(&api_expr, PathSetBuilder::new(), this.ctx.clone()).await;
|
||||
new.0.write().await.consts.insert(path, expr);
|
||||
}
|
||||
new
|
||||
@@ -111,7 +109,7 @@ impl Root {
|
||||
}
|
||||
match module {
|
||||
Ok(_) => Err(mk_errv(
|
||||
ctx.i.i("module used as constant").await,
|
||||
is("module used as constant").await,
|
||||
format!("{name} is a module, not a constant"),
|
||||
[pos],
|
||||
)),
|
||||
@@ -119,7 +117,7 @@ impl Root {
|
||||
ChildErrorKind::Private => panic!("public_only is false"),
|
||||
ChildErrorKind::Constant => panic!("Tree refers to constant not in table"),
|
||||
ChildErrorKind::Missing => Err(mk_errv(
|
||||
ctx.i.i("Constant does not exist").await,
|
||||
is("Constant does not exist").await,
|
||||
format!("{name} does not refer to a constant"),
|
||||
[pos],
|
||||
)),
|
||||
@@ -145,12 +143,12 @@ impl Default for WeakRoot {
|
||||
pub struct TreeFromApiCtx<'a> {
|
||||
pub sys: &'a System,
|
||||
pub consts: &'a MemoMap<Sym, Expr>,
|
||||
pub path: Tok<Vec<Tok<String>>>,
|
||||
pub path: IStrv,
|
||||
}
|
||||
impl<'a> TreeFromApiCtx<'a> {
|
||||
#[must_use]
|
||||
pub async fn push<'c>(&'c self, name: Tok<String>) -> TreeFromApiCtx<'c> {
|
||||
let path = self.sys.ctx().i.i(&self.path.iter().cloned().chain([name]).collect_vec()).await;
|
||||
pub async fn push<'c>(&'c self, name: IStr) -> TreeFromApiCtx<'c> {
|
||||
let path = iv(&self.path.iter().cloned().chain([name]).collect_vec()).await;
|
||||
TreeFromApiCtx { path, consts: self.consts, sys: self.sys }
|
||||
}
|
||||
}
|
||||
@@ -163,23 +161,22 @@ pub struct ResolvedImport {
|
||||
|
||||
#[derive(Clone, Default)]
|
||||
pub struct Module {
|
||||
pub imports: HashMap<Tok<String>, Result<ResolvedImport, Vec<ResolvedImport>>>,
|
||||
pub members: HashMap<Tok<String>, Rc<Member>>,
|
||||
pub imports: HashMap<IStr, Result<ResolvedImport, Vec<ResolvedImport>>>,
|
||||
pub members: HashMap<IStr, Rc<Member>>,
|
||||
}
|
||||
impl Module {
|
||||
#[must_use]
|
||||
pub async fn from_api(api: api::Module, ctx: &mut TreeFromApiCtx<'_>) -> Self {
|
||||
let mut members = HashMap::new();
|
||||
for mem in api.members {
|
||||
let mem_name = ctx.sys.i().ex(mem.name).await;
|
||||
let mem_name = es(mem.name).await;
|
||||
let vname = VPath::new(ctx.path.iter().cloned()).name_with_suffix(mem_name.clone());
|
||||
let name = vname.to_sym(ctx.sys.i()).await;
|
||||
let name = vname.to_sym().await;
|
||||
let (lazy, kind) = match mem.kind {
|
||||
api::MemberKind::Lazy(id) =>
|
||||
(Some(LazyMemberHandle { id, sys: ctx.sys.id(), path: name.clone() }), None),
|
||||
api::MemberKind::Const(val) => {
|
||||
let mut expr_ctx = ExprParseCtx { ctx: ctx.sys.ctx(), exprs: ctx.sys.ext().exprs() };
|
||||
let expr = Expr::from_api(&val, PathSetBuilder::new(), &mut expr_ctx).await;
|
||||
let expr = Expr::from_api(&val, PathSetBuilder::new(), ctx.sys.ctx().clone()).await;
|
||||
ctx.consts.insert(name.clone(), expr);
|
||||
(None, Some(MemberKind::Const))
|
||||
},
|
||||
@@ -207,23 +204,23 @@ impl Module {
|
||||
let mut glob_imports_by_name = HashMap::<_, Vec<_>>::new();
|
||||
for import in parsed.get_imports().into_iter().filter(|i| i.name.is_none()) {
|
||||
let pos = import.sr.pos();
|
||||
match absolute_path(&path, &import.path, &ctx.ctx.i).await {
|
||||
Err(e) => ctx.rep.report(e.err_obj(&ctx.ctx.i, pos, &import.path.to_string()).await),
|
||||
match absolute_path(&path, &import.path).await {
|
||||
Err(e) => report(e.err_obj(pos, &import.path.to_string()).await),
|
||||
Ok(abs_path) => {
|
||||
let names_res = match abs_path.strip_prefix(&ctx.pars_prefix[..]) {
|
||||
None => {
|
||||
let mut tree_ctx = (ctx.ctx.clone(), ctx.consts);
|
||||
resolv_glob(&path, ctx.root, &abs_path, pos, &ctx.ctx.i, &mut tree_ctx).await
|
||||
resolv_glob(&path, ctx.root, &abs_path, pos, &mut tree_ctx).await
|
||||
},
|
||||
Some(sub_tgt) => {
|
||||
let sub_path = (path.strip_prefix(&ctx.pars_prefix[..]))
|
||||
.expect("from_parsed called with path outside pars_prefix");
|
||||
resolv_glob(sub_path, ctx.pars_root, sub_tgt, pos, &ctx.ctx.i, &mut ()).await
|
||||
resolv_glob(sub_path, ctx.pars_root, sub_tgt, pos, &mut ()).await
|
||||
},
|
||||
};
|
||||
let abs_path = abs_path.to_sym(&ctx.ctx.i).await;
|
||||
let abs_path = abs_path.to_sym().await;
|
||||
match names_res {
|
||||
Err(e) => ctx.rep.report(e),
|
||||
Err(e) => report(e),
|
||||
Ok(names) =>
|
||||
for name in names {
|
||||
match glob_imports_by_name.entry(name) {
|
||||
@@ -246,30 +243,28 @@ impl Module {
|
||||
prelude_item.last_seg(),
|
||||
Ok(ResolvedImport {
|
||||
target: prelude_item,
|
||||
pos: CodeGenInfo::new_details(sys.ctor().name_tok().await, "In prelude", &ctx.ctx.i)
|
||||
.await
|
||||
.pos(),
|
||||
pos: CodeGenInfo::new_details(sys.ctor().name_tok().await, "In prelude").await.pos(),
|
||||
}),
|
||||
);
|
||||
}
|
||||
}
|
||||
}
|
||||
let conflicting_imports_msg = ctx.ctx.i.i("Conflicting imports").await;
|
||||
let conflicting_imports_msg = is("Conflicting imports").await;
|
||||
for (key, values) in imports_by_name {
|
||||
if values.len() == 1 {
|
||||
let import = values.into_iter().next().unwrap();
|
||||
let sr = import.sr.clone();
|
||||
let abs_path_res = absolute_path(&path, &import.clone().mspath(), &ctx.ctx.i).await;
|
||||
let abs_path_res = absolute_path(&path, &import.clone().mspath()).await;
|
||||
match abs_path_res {
|
||||
Err(e) => ctx.rep.report(e.err_obj(&ctx.ctx.i, sr.pos(), &import.to_string()).await),
|
||||
Err(e) => report(e.err_obj(sr.pos(), &import.to_string()).await),
|
||||
Ok(abs_path) => {
|
||||
let target = abs_path.to_sym(&ctx.ctx.i).await;
|
||||
let target = abs_path.to_sym().await;
|
||||
imports.insert(key, Ok(ResolvedImport { target, pos: sr.pos() }));
|
||||
},
|
||||
}
|
||||
} else {
|
||||
for item in values {
|
||||
ctx.rep.report(mk_errv(
|
||||
report(mk_errv(
|
||||
conflicting_imports_msg.clone(),
|
||||
format!("{key} is imported multiple times from different modules"),
|
||||
[item.sr.pos()],
|
||||
@@ -279,12 +274,11 @@ impl Module {
|
||||
}
|
||||
for (key, values) in glob_imports_by_name {
|
||||
if !imports.contains_key(&key) {
|
||||
let i = &ctx.ctx.i;
|
||||
let values = stream::iter(values)
|
||||
.then(|(n, sr)| {
|
||||
clone!(key; async move {
|
||||
ResolvedImport {
|
||||
target: n.to_vname().suffix([key.clone()]).to_sym(i).await,
|
||||
target: n.to_vname().suffix([key.clone()]).to_sym().await,
|
||||
pos: sr.pos(),
|
||||
}
|
||||
})
|
||||
@@ -294,12 +288,12 @@ impl Module {
|
||||
imports.insert(key, if values.len() == 1 { Ok(values[0].clone()) } else { Err(values) });
|
||||
}
|
||||
}
|
||||
let self_referential_msg = ctx.ctx.i.i("Self-referential import").await;
|
||||
let self_referential_msg = is("Self-referential import").await;
|
||||
for (key, value) in imports.iter() {
|
||||
let Ok(import) = value else { continue };
|
||||
if import.target.strip_prefix(&path[..]).is_some_and(|t| t.starts_with(slice::from_ref(key)))
|
||||
{
|
||||
ctx.rep.report(mk_errv(
|
||||
report(mk_errv(
|
||||
self_referential_msg.clone(),
|
||||
format!("import {} points to itself or a path within itself", &import.target),
|
||||
[import.pos.clone()],
|
||||
@@ -310,7 +304,7 @@ impl Module {
|
||||
for item in &parsed.items {
|
||||
match &item.kind {
|
||||
ItemKind::Member(mem) => {
|
||||
let path = path.to_vname().suffix([mem.name.clone()]).to_sym(&ctx.ctx.i).await;
|
||||
let path = path.to_vname().suffix([mem.name.clone()]).to_sym().await;
|
||||
let kind = OnceCell::from(MemberKind::from_parsed(&mem.kind, path.clone(), ctx).await);
|
||||
members.insert(
|
||||
mem.name.clone(),
|
||||
@@ -387,7 +381,6 @@ pub struct FromParsedCtx<'a> {
|
||||
pars_prefix: Sym,
|
||||
pars_root: &'a ParsedModule,
|
||||
root: &'a Module,
|
||||
rep: &'a Reporter,
|
||||
ctx: &'a Ctx,
|
||||
consts: &'a MemoMap<Sym, Expr>,
|
||||
deferred_consts: &'a mut HashMap<Sym, (api::SysId, api::ParsedConstId)>,
|
||||
@@ -397,7 +390,7 @@ impl Tree for Module {
|
||||
type Ctx<'a> = (Ctx, &'a MemoMap<Sym, Expr>);
|
||||
async fn child(
|
||||
&self,
|
||||
key: Tok<String>,
|
||||
key: IStr,
|
||||
public_only: bool,
|
||||
(ctx, consts): &mut Self::Ctx<'_>,
|
||||
) -> crate::dealias::ChildResult<'_, Self> {
|
||||
@@ -412,7 +405,7 @@ impl Tree for Module {
|
||||
MemberKind::Const => Err(ChildErrorKind::Constant),
|
||||
}
|
||||
}
|
||||
fn children(&self, public_only: bool) -> hashbrown::HashSet<Tok<String>> {
|
||||
fn children(&self, public_only: bool) -> hashbrown::HashSet<IStr> {
|
||||
self.members.iter().filter(|(_, v)| !public_only || v.public).map(|(k, _)| k.clone()).collect()
|
||||
}
|
||||
}
|
||||
@@ -463,8 +456,7 @@ impl LazyMemberHandle {
|
||||
let sys = ctx.system_inst(self.sys).await.expect("Missing system for lazy member");
|
||||
match sys.get_tree(self.id).await {
|
||||
api::MemberKind::Const(c) => {
|
||||
let mut pctx = ExprParseCtx { ctx: &ctx, exprs: sys.ext().exprs() };
|
||||
let expr = Expr::from_api(&c, PathSetBuilder::new(), &mut pctx).await;
|
||||
let expr = Expr::from_api(&c, PathSetBuilder::new(), ctx.clone()).await;
|
||||
let (.., path) = self.destructure();
|
||||
consts.insert(path, expr);
|
||||
MemberKind::Const
|
||||
|
||||
@@ -3,10 +3,19 @@ name = "orchid-std"
|
||||
version = "0.1.0"
|
||||
edition = "2024"
|
||||
|
||||
[[bin]]
|
||||
name = "orchid-std"
|
||||
path = "src/main.rs"
|
||||
|
||||
[lib]
|
||||
crate-type = ["cdylib", "lib"]
|
||||
path = "src/lib.rs"
|
||||
|
||||
[dependencies]
|
||||
async-fn-stream = { version = "0.1.0", path = "../async-fn-stream" }
|
||||
async-once-cell = "0.5.4"
|
||||
futures = { version = "0.3.31", features = ["std"], default-features = false }
|
||||
hashbrown = "0.16.0"
|
||||
hashbrown = "0.16.1"
|
||||
itertools = "0.14.0"
|
||||
never = "0.1.0"
|
||||
once_cell = "1.21.3"
|
||||
@@ -15,12 +24,14 @@ orchid-api-derive = { version = "0.1.0", path = "../orchid-api-derive" }
|
||||
orchid-api-traits = { version = "0.1.0", path = "../orchid-api-traits" }
|
||||
orchid-base = { version = "0.1.0", path = "../orchid-base" }
|
||||
orchid-extension = { version = "0.1.0", path = "../orchid-extension", features = [
|
||||
"tokio",
|
||||
"tokio",
|
||||
] }
|
||||
ordered-float = "5.0.0"
|
||||
rust_decimal = "1.37.2"
|
||||
ordered-float = "5.1.0"
|
||||
pastey = "0.2.1"
|
||||
rust_decimal = "1.39.0"
|
||||
subslice-offset = "0.1.1"
|
||||
substack = "1.1.1"
|
||||
tokio = { version = "1.47.1", features = ["full"] }
|
||||
tokio = { version = "1.49.0", features = ["full"] }
|
||||
|
||||
[dev-dependencies]
|
||||
test_executors = "0.3.5"
|
||||
test_executors = "0.4.1"
|
||||
|
||||
@@ -2,8 +2,21 @@ mod macros;
|
||||
mod std;
|
||||
|
||||
pub use std::number::num_atom::{Float, HomoArray, Int, Num};
|
||||
pub use std::option::OrcOpt;
|
||||
pub use std::reflection::sym_atom::{SymAtom, sym_expr};
|
||||
pub use std::std_system::StdSystem;
|
||||
pub use std::string::str_atom::OrcString;
|
||||
pub use std::tuple::{HomoTpl, Tpl, Tuple, UntypedTuple};
|
||||
|
||||
pub use macros::macro_system::MacroSystem;
|
||||
pub use macros::mactree::{MacTok, MacTree};
|
||||
use orchid_api as api;
|
||||
use orchid_extension::binary::orchid_extension_main_body;
|
||||
use orchid_extension::entrypoint::ExtensionBuilder;
|
||||
|
||||
pub extern "C" fn orchid_extension_main(cx: api::binary::ExtensionContext) {
|
||||
orchid_extension_main_body(
|
||||
cx,
|
||||
ExtensionBuilder::new("orchid-std::main").system(StdSystem).system(MacroSystem),
|
||||
);
|
||||
}
|
||||
|
||||
@@ -1,13 +1,15 @@
|
||||
use std::borrow::Cow;
|
||||
|
||||
use never::Never;
|
||||
use orchid_extension::atom::{Atomic, TypAtom};
|
||||
use orchid_base::format::fmt;
|
||||
use orchid_extension::atom::{Atomic, TAtom};
|
||||
use orchid_extension::atom_owned::{OwnedAtom, OwnedVariant, own};
|
||||
use orchid_extension::conv::{ToExpr, TryFromExpr};
|
||||
use orchid_extension::conv::ToExpr;
|
||||
use orchid_extension::coroutine_exec::exec;
|
||||
use orchid_extension::expr::Expr;
|
||||
use orchid_extension::gen_expr::GExpr;
|
||||
|
||||
use crate::macros::mactree::{MacTok, MacTree, map_mactree};
|
||||
use crate::macros::mactree::{MacTok, MacTree};
|
||||
|
||||
#[derive(Clone)]
|
||||
pub struct InstantiateTplCall {
|
||||
@@ -24,26 +26,33 @@ impl OwnedAtom for InstantiateTplCall {
|
||||
type Refs = Never;
|
||||
// Technically must be supported but shouldn't actually ever be called
|
||||
async fn call_ref(&self, arg: Expr) -> GExpr {
|
||||
eprintln!(
|
||||
"Copying partially applied instantiate_tpl call. This is an internal value.\
|
||||
\nIt should be fully consumed within generated code."
|
||||
);
|
||||
if !self.argv.is_empty() {
|
||||
eprintln!(
|
||||
"Copying partially applied instantiate_tpl call. This is an internal value.\
|
||||
\nIt should be fully consumed within generated code."
|
||||
);
|
||||
}
|
||||
self.clone().call(arg).await
|
||||
}
|
||||
async fn call(mut self, arg: Expr) -> GExpr {
|
||||
match TypAtom::<MacTree>::try_from_expr(arg).await {
|
||||
Err(e) => return Err::<Never, _>(e).to_expr().await,
|
||||
Ok(t) => self.argv.push(own(t).await),
|
||||
};
|
||||
if self.argv.len() < self.argc {
|
||||
return self.to_expr().await;
|
||||
}
|
||||
let mut args = self.argv.into_iter();
|
||||
let ret = map_mactree(&self.tpl, &mut false, &mut |mt| match mt.tok() {
|
||||
MacTok::Slot => Some(args.next().expect("Not enough arguments to fill all slots")),
|
||||
_ => None,
|
||||
});
|
||||
assert!(args.next().is_none(), "Too many arguments for all slots");
|
||||
ret.to_expr().await
|
||||
exec(async move |mut h| {
|
||||
match h.exec::<TAtom<MacTree>>(arg.clone()).await {
|
||||
Err(_) => panic!("Expected a macro param, found {}", fmt(&arg).await),
|
||||
Ok(t) => self.argv.push(own(&t).await),
|
||||
};
|
||||
if self.argv.len() < self.argc {
|
||||
return self.to_gen().await;
|
||||
}
|
||||
let mut args = self.argv.into_iter();
|
||||
let ret = self.tpl.map(&mut false, &mut |mt| match mt.tok() {
|
||||
MacTok::Slot => Some(args.next().expect("Not enough arguments to fill all slots")),
|
||||
_ => None,
|
||||
});
|
||||
assert!(args.next().is_none(), "Too many arguments for all slots");
|
||||
ret.to_gen().await
|
||||
})
|
||||
.await
|
||||
.to_gen()
|
||||
.await
|
||||
}
|
||||
}
|
||||
|
||||
@@ -3,118 +3,115 @@ use std::pin::pin;
|
||||
use futures::{FutureExt, StreamExt, stream};
|
||||
use hashbrown::HashMap;
|
||||
use itertools::Itertools;
|
||||
use orchid_base::error::{OrcRes, Reporter};
|
||||
use orchid_base::error::{OrcRes, report, with_reporter};
|
||||
use orchid_base::interner::is;
|
||||
use orchid_base::name::Sym;
|
||||
use orchid_base::parse::{
|
||||
Comment, ParseCtx, Parsed, Snippet, expect_tok, token_errv, try_pop_no_fluff,
|
||||
};
|
||||
use orchid_base::parse::{Comment, Parsed, Snippet, expect_tok, token_errv, try_pop_no_fluff};
|
||||
use orchid_base::sym;
|
||||
use orchid_base::tree::Paren;
|
||||
use orchid_extension::atom::TAtom;
|
||||
use orchid_extension::conv::TryFromExpr;
|
||||
use orchid_extension::gen_expr::{atom, call, sym_ref};
|
||||
use orchid_extension::parser::{ConstCtx, PSnippet, PTok, PTokTree, ParsCtx, ParsedLine, Parser};
|
||||
|
||||
use crate::macros::mactree::{MacTok, MacTree, glossary_v, map_mactree_v};
|
||||
use crate::macros::mactree::{MacTok, MacTree, MacTreeSeq};
|
||||
use crate::macros::ph_lexer::PhAtom;
|
||||
|
||||
#[derive(Default)]
|
||||
pub struct LetLine;
|
||||
impl Parser for LetLine {
|
||||
const LINE_HEAD: &'static str = "let";
|
||||
async fn parse<'a>(
|
||||
ctx: ParsCtx<'a>,
|
||||
_: ParsCtx<'a>,
|
||||
exported: bool,
|
||||
comments: Vec<Comment>,
|
||||
line: PSnippet<'a>,
|
||||
) -> OrcRes<Vec<ParsedLine>> {
|
||||
let sr = line.sr();
|
||||
let Parsed { output: name_tok, tail } = try_pop_no_fluff(&ctx, line).await?;
|
||||
let Parsed { output: name_tok, tail } = try_pop_no_fluff(line).await?;
|
||||
let Some(name) = name_tok.as_name() else {
|
||||
let err = token_errv(&ctx, name_tok, "Constant must have a name", |t| {
|
||||
let err = token_errv(name_tok, "Constant must have a name", |t| {
|
||||
format!("Expected a name but found {t}")
|
||||
});
|
||||
return Err(err.await);
|
||||
};
|
||||
let Parsed { tail, .. } = expect_tok(&ctx, tail, ctx.i().i("=").await).await?;
|
||||
let aliased = parse_tokv(tail, &ctx).await;
|
||||
let Parsed { tail, .. } = expect_tok(tail, is("=").await).await?;
|
||||
let aliased = parse_tokv(tail).await;
|
||||
Ok(vec![ParsedLine::cnst(&line.sr(), &comments, exported, name, async move |ctx| {
|
||||
let rep = Reporter::new();
|
||||
let dealiased = dealias_mac_v(aliased, &ctx, &rep).await;
|
||||
let macro_input = MacTok::S(Paren::Round, dealiased).at(sr.pos());
|
||||
if let Some(e) = rep.errv() {
|
||||
return Err(e);
|
||||
}
|
||||
Ok(call([
|
||||
sym_ref(sym!(macros::lower; ctx.i()).await),
|
||||
call([sym_ref(sym!(macros::resolve; ctx.i()).await), atom(macro_input)]),
|
||||
]))
|
||||
let macro_input =
|
||||
MacTok::S(Paren::Round, with_reporter(dealias_mac_v(&aliased, &ctx)).await?).at(sr.pos());
|
||||
Ok(call(sym_ref(sym!(macros::resolve)), [atom(macro_input)]))
|
||||
})])
|
||||
}
|
||||
}
|
||||
|
||||
pub async fn dealias_mac_v(aliased: Vec<MacTree>, ctx: &ConstCtx, rep: &Reporter) -> Vec<MacTree> {
|
||||
let keys = glossary_v(&aliased).collect_vec();
|
||||
pub async fn dealias_mac_v(aliased: &MacTreeSeq, ctx: &ConstCtx) -> MacTreeSeq {
|
||||
let keys = aliased.glossary().iter().cloned().collect_vec();
|
||||
let mut names: HashMap<_, _> = HashMap::new();
|
||||
let mut stream = pin!(ctx.names(&keys).zip(stream::iter(&keys)));
|
||||
while let Some((canonical, local)) = stream.next().await {
|
||||
match canonical {
|
||||
Err(e) => rep.report(e),
|
||||
Err(e) => report(e),
|
||||
Ok(name) => {
|
||||
names.insert(local.clone(), name);
|
||||
},
|
||||
}
|
||||
}
|
||||
map_mactree_v(&aliased, &mut false, &mut |tree| match &*tree.tok {
|
||||
aliased.map(&mut false, &mut |tree| match &*tree.tok {
|
||||
MacTok::Name(n) => names.get(n).map(|new_n| MacTok::Name(new_n.clone()).at(tree.pos())),
|
||||
_ => None,
|
||||
})
|
||||
}
|
||||
|
||||
pub async fn parse_tokv(line: PSnippet<'_>, ctx: &impl ParseCtx) -> Vec<MacTree> {
|
||||
pub async fn parse_tokv(line: PSnippet<'_>) -> MacTreeSeq {
|
||||
if let Some((idx, arg)) = line.iter().enumerate().find_map(|(i, x)| Some((i, x.as_lambda()?))) {
|
||||
let (head, lambda) = line.split_at(idx as u32);
|
||||
let (_, body) = lambda.pop_front().unwrap();
|
||||
let body = parse_tokv(body, ctx).boxed_local().await;
|
||||
let mut all = parse_tokv_no_lambdas(&head, ctx).await;
|
||||
match parse_tok(arg, ctx).await {
|
||||
let body = parse_tokv(body).boxed_local().await;
|
||||
let mut all = parse_tokv_no_lambdas(&head).await;
|
||||
match parse_tok(arg).await {
|
||||
Some(arg) => all.push(MacTok::Lambda(arg, body).at(lambda.sr().pos())),
|
||||
None => ctx.rep().report(
|
||||
token_errv(ctx, arg, "Lambda argument fluff", |arg| {
|
||||
None => report(
|
||||
token_errv(arg, "Lambda argument fluff", |arg| {
|
||||
format!("Lambda arguments must be a valid token, found meaningless fragment {arg}")
|
||||
})
|
||||
.await,
|
||||
),
|
||||
};
|
||||
all
|
||||
MacTreeSeq::new(all)
|
||||
} else {
|
||||
parse_tokv_no_lambdas(&line, ctx).await
|
||||
MacTreeSeq::new(parse_tokv_no_lambdas(&line).await)
|
||||
}
|
||||
}
|
||||
|
||||
async fn parse_tokv_no_lambdas(line: &[PTokTree], ctx: &impl ParseCtx) -> Vec<MacTree> {
|
||||
stream::iter(line).filter_map(|tt| parse_tok(tt, ctx)).collect().await
|
||||
async fn parse_tokv_no_lambdas(line: &[PTokTree]) -> Vec<MacTree> {
|
||||
stream::iter(line).filter_map(parse_tok).collect::<Vec<_>>().await
|
||||
}
|
||||
|
||||
pub async fn parse_tok(tree: &PTokTree, ctx: &impl ParseCtx) -> Option<MacTree> {
|
||||
pub async fn parse_tok(tree: &PTokTree) -> Option<MacTree> {
|
||||
let tok = match &tree.tok {
|
||||
PTok::Bottom(errv) => MacTok::Bottom(errv.clone()),
|
||||
PTok::BR | PTok::Comment(_) => return None,
|
||||
PTok::Name(n) => MacTok::Name(Sym::new([n.clone()], ctx.i()).await.unwrap()),
|
||||
PTok::Name(n) => MacTok::Name(Sym::new([n.clone()]).await.unwrap()),
|
||||
PTok::NS(..) => match tree.as_multiname() {
|
||||
Ok(mn) => MacTok::Name(mn.to_sym(ctx.i()).await),
|
||||
Ok(mn) => MacTok::Name(mn.to_sym().await),
|
||||
Err(nested) => {
|
||||
ctx.rep().report(
|
||||
token_errv(ctx, tree, ":: can only be followed by a name in an expression", |tok| {
|
||||
report(
|
||||
token_errv(tree, ":: can only be followed by a name in an expression", |tok| {
|
||||
format!("Expected name, found {tok}")
|
||||
})
|
||||
.await,
|
||||
);
|
||||
return parse_tok(nested, ctx).boxed_local().await;
|
||||
return parse_tok(nested).boxed_local().await;
|
||||
},
|
||||
},
|
||||
PTok::Handle(expr) => MacTok::Value(expr.clone()),
|
||||
PTok::Handle(expr) => match TAtom::<PhAtom>::try_from_expr(expr.clone()).await {
|
||||
Err(_) => MacTok::Value(expr.clone()),
|
||||
Ok(ta) => MacTok::Ph(ta.value.to_full().await),
|
||||
},
|
||||
PTok::NewExpr(never) => match *never {},
|
||||
PTok::LambdaHead(_) => panic!("Lambda-head handled in the sequence parser"),
|
||||
PTok::S(p, body) =>
|
||||
MacTok::S(*p, parse_tokv(Snippet::new(tree, body), ctx).boxed_local().await),
|
||||
PTok::S(p, body) => MacTok::S(*p, parse_tokv(Snippet::new(tree, body)).boxed_local().await),
|
||||
};
|
||||
Some(tok.at(tree.sr().pos()))
|
||||
}
|
||||
|
||||
@@ -1,83 +1,68 @@
|
||||
use hashbrown::HashMap;
|
||||
use itertools::Itertools;
|
||||
use orchid_base::error::Reporter;
|
||||
use orchid_base::sym;
|
||||
use orchid_extension::atom::TypAtom;
|
||||
use orchid_extension::atom::TAtom;
|
||||
use orchid_extension::atom_owned::own;
|
||||
use orchid_extension::conv::ToExpr;
|
||||
use orchid_extension::coroutine_exec::exec;
|
||||
use orchid_extension::gen_expr::{atom, call, sym_ref};
|
||||
use orchid_extension::reflection::{ReflMemKind, refl};
|
||||
use orchid_extension::tree::{GenMember, comments, fun, prefix};
|
||||
use substack::Substack;
|
||||
use orchid_extension::gen_expr::{call, sym_ref};
|
||||
use orchid_extension::tree::{GenMember, fun, prefix};
|
||||
|
||||
use crate::Int;
|
||||
use crate::macros::instantiate_tpl::InstantiateTplCall;
|
||||
use crate::macros::macro_line::{Macro, Matcher};
|
||||
use crate::macros::mactree::{LowerCtx, MacTree};
|
||||
use crate::macros::recur_state::RecurState;
|
||||
use crate::macros::resolve::{ResolveCtx, resolve};
|
||||
use crate::macros::mactree::MacTree;
|
||||
use crate::macros::resolve::resolve;
|
||||
use crate::macros::utils::{build_macro, mactree, mactreev};
|
||||
use crate::{HomoTpl, UntypedTuple};
|
||||
|
||||
pub fn gen_macro_lib() -> Vec<GenMember> {
|
||||
pub async fn gen_macro_lib() -> Vec<GenMember> {
|
||||
prefix("macros", [
|
||||
comments(
|
||||
["This is an internal function, you can't obtain a value of its argument type.", "hidden"],
|
||||
fun(true, "instantiate_tpl", |tpl: TypAtom<MacTree>, right: Int| async move {
|
||||
InstantiateTplCall {
|
||||
tpl: own(tpl).await,
|
||||
argc: right.0.try_into().unwrap(),
|
||||
argv: Vec::new(),
|
||||
}
|
||||
}),
|
||||
),
|
||||
fun(true, "resolve", |tpl: TypAtom<MacTree>| async move {
|
||||
call([
|
||||
sym_ref(sym!(macros::resolve_recur; tpl.untyped.ctx().i()).await),
|
||||
atom(RecurState::Bottom),
|
||||
tpl.untyped.ex().to_expr().await,
|
||||
])
|
||||
}),
|
||||
fun(true, "lower", |tpl: TypAtom<MacTree>| async move {
|
||||
let ctx = LowerCtx { sys: tpl.untyped.ctx().clone(), rep: &Reporter::new() };
|
||||
let res = own(tpl).await.lower(ctx, Substack::Bottom).await;
|
||||
if let Some(e) = Reporter::new().errv() { Err(e) } else { Ok(res) }
|
||||
}),
|
||||
fun(true, "resolve_recur", |state: TypAtom<RecurState>, tpl: TypAtom<MacTree>| async move {
|
||||
exec("macros::resolve_recur", async move |mut h| {
|
||||
let ctx = tpl.ctx().clone();
|
||||
let root = refl(&ctx);
|
||||
let tpl = own(tpl.clone()).await;
|
||||
let mut macros = HashMap::new();
|
||||
for n in tpl.glossary() {
|
||||
if let Ok(ReflMemKind::Const) = root.get_by_path(n).await.map(|m| m.kind()) {
|
||||
let Ok(mac) = h.exec::<TypAtom<Macro>>(sym_ref(n.clone())).await else { continue };
|
||||
let mac = own(mac).await;
|
||||
macros.entry(mac.0.own_kws[0].clone()).or_insert(mac);
|
||||
}
|
||||
}
|
||||
let mut named = HashMap::new();
|
||||
let mut priod = Vec::new();
|
||||
for (_, mac) in macros.iter() {
|
||||
for rule in mac.0.rules.iter() {
|
||||
if rule.glossary.is_subset(tpl.glossary()) {
|
||||
match &rule.pattern {
|
||||
Matcher::Named(m) =>
|
||||
named.entry(m.head()).or_insert(Vec::new()).push((m, mac, rule)),
|
||||
Matcher::Priod(p) => priod.push((mac.0.prio, (p, mac, rule))),
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
let priod = priod.into_iter().sorted_unstable_by_key(|(p, _)| *p).map(|(_, r)| r).collect();
|
||||
let mut rctx = ResolveCtx { h, recur: own(state).await, ctx: ctx.clone(), named, priod };
|
||||
let resolve_res = resolve(&mut rctx, &tpl).await;
|
||||
std::mem::drop(rctx);
|
||||
match resolve_res {
|
||||
Some(out_tree) => out_tree.to_expr().await,
|
||||
None => tpl.to_expr().await,
|
||||
}
|
||||
})
|
||||
.await
|
||||
}),
|
||||
fun(true, "resolve", async |tpl: TAtom<MacTree>| resolve(own(&tpl).await).await),
|
||||
// TODO test whether any of this worked
|
||||
prefix("common", [
|
||||
build_macro(None, ["..", "_"]).finish(),
|
||||
build_macro(Some(1), ["+"])
|
||||
.rule(mactreev!("...$" lhs 0 macros::common::+ "...$" rhs 1), [async |[lhs, rhs]| {
|
||||
call(sym_ref(sym!(std::number::add)), [resolve(lhs).await, resolve(rhs).await])
|
||||
}])
|
||||
.finish(),
|
||||
build_macro(Some(2), ["*"])
|
||||
.rule(mactreev!("...$" lhs 0 macros::common::* "...$" rhs 1), [async |[lhs, rhs]| {
|
||||
call(sym_ref(sym!(std::number::mul)), [resolve(lhs).await, resolve(rhs).await])
|
||||
}])
|
||||
.finish(),
|
||||
build_macro(None, ["comma_list", ","])
|
||||
.rule(
|
||||
mactreev!(macros::common::comma_list ( "...$" head 0 macros::common::, "...$" tail 1)),
|
||||
[async |[head, tail]| {
|
||||
exec(async |mut h| {
|
||||
let recur = resolve(mactree!(macros::common::comma_list "push" tail ;)).await;
|
||||
let mut tail = h.exec::<HomoTpl<TAtom<MacTree>>>(recur).await?;
|
||||
tail.0.insert(0, h.exec(head).await?);
|
||||
Ok(tail)
|
||||
})
|
||||
.await
|
||||
}],
|
||||
)
|
||||
.rule(mactreev!(macros::common::comma_list ( "...$" final_tail 0 )), [async |[tail]| {
|
||||
HomoTpl(vec![tail.to_gen().await])
|
||||
}])
|
||||
.rule(mactreev!(macros::common::comma_list()), [async |[]| UntypedTuple(Vec::new())])
|
||||
.finish(),
|
||||
build_macro(None, ["semi_list", ";"])
|
||||
.rule(
|
||||
mactreev!(macros::common::semi_list ( "...$" head 0 macros::common::; "...$" tail 1)),
|
||||
[async |[head, tail]| {
|
||||
exec(async |mut h| {
|
||||
let recur = resolve(mactree!(macros::common::semi_list "push" tail ;)).await;
|
||||
let mut tail = h.exec::<HomoTpl<TAtom<MacTree>>>(recur).await?;
|
||||
tail.0.insert(0, h.exec(head).await?);
|
||||
Ok(tail)
|
||||
})
|
||||
.await
|
||||
}],
|
||||
)
|
||||
.rule(mactreev!(macros::common::semi_list ( "...$" final_tail 0 )), [async |[tail]| {
|
||||
HomoTpl(vec![tail.to_gen().await])
|
||||
}])
|
||||
.rule(mactreev!(macros::common::semi_list()), [async |[]| UntypedTuple(Vec::new())])
|
||||
.finish(),
|
||||
]),
|
||||
])
|
||||
}
|
||||
|
||||
@@ -1,32 +1,25 @@
|
||||
use std::borrow::Cow;
|
||||
use std::cell::RefCell;
|
||||
use std::rc::Rc;
|
||||
|
||||
use async_fn_stream::stream;
|
||||
use async_once_cell::OnceCell;
|
||||
use futures::{StreamExt, stream};
|
||||
use hashbrown::{HashMap, HashSet};
|
||||
use futures::StreamExt;
|
||||
use itertools::Itertools;
|
||||
use never::Never;
|
||||
use orchid_base::error::{OrcRes, Reporter, mk_errv};
|
||||
use orchid_base::interner::Tok;
|
||||
use orchid_base::location::Pos;
|
||||
use orchid_base::name::Sym;
|
||||
use orchid_base::error::{OrcRes, mk_errv, report, with_reporter};
|
||||
use orchid_base::interner::is;
|
||||
use orchid_base::parse::{
|
||||
Comment, ParseCtx, Parsed, Snippet, expect_end, expect_tok, line_items, token_errv,
|
||||
try_pop_no_fluff,
|
||||
Comment, Parsed, Snippet, expect_end, expect_tok, line_items, token_errv, try_pop_no_fluff,
|
||||
};
|
||||
use orchid_base::tree::{Paren, Token};
|
||||
use orchid_base::{clone, sym};
|
||||
use orchid_extension::atom::{Atomic, TypAtom};
|
||||
use orchid_extension::atom_owned::{OwnedAtom, OwnedVariant};
|
||||
use orchid_extension::atom::TAtom;
|
||||
use orchid_extension::conv::{ToExpr, TryFromExpr};
|
||||
use orchid_extension::gen_expr::{atom, call, sym_ref};
|
||||
use orchid_extension::gen_expr::{call, sym_ref};
|
||||
use orchid_extension::parser::{PSnippet, ParsCtx, ParsedLine, Parser};
|
||||
|
||||
use crate::macros::let_line::{dealias_mac_v, parse_tokv};
|
||||
use crate::macros::mactree::{glossary_v, map_mactree_v};
|
||||
use crate::macros::recur_state::{RecurState, RulePath};
|
||||
use crate::macros::rule::matcher::{NamedMatcher, PriodMatcher};
|
||||
use crate::macros::macro_value::{Macro, MacroData, Rule};
|
||||
use crate::macros::mactree::MacTreeSeq;
|
||||
use crate::macros::rule::matcher::Matcher;
|
||||
use crate::{Int, MacTok};
|
||||
|
||||
#[derive(Default)]
|
||||
@@ -41,61 +34,64 @@ impl Parser for MacroLine {
|
||||
) -> OrcRes<Vec<ParsedLine>> {
|
||||
if exported {
|
||||
return Err(mk_errv(
|
||||
ctx.i().i("macros are always exported").await,
|
||||
is("macros are always exported").await,
|
||||
"The export keyword is forbidden here to avoid confusion\n\
|
||||
because macros are exported by default",
|
||||
[line.sr()],
|
||||
));
|
||||
}
|
||||
let module = ctx.module();
|
||||
let Parsed { output, tail } = try_pop_no_fluff(&ctx, line).await?;
|
||||
let Parsed { output: prio_or_body, tail } = try_pop_no_fluff(line).await?;
|
||||
let bad_first_item_err = || {
|
||||
token_errv(&ctx, output, "Expected priority or block", |s| {
|
||||
token_errv(prio_or_body, "Expected priority or block", |s| {
|
||||
format!("Expected a priority number or a () block, found {s}")
|
||||
})
|
||||
};
|
||||
let (prio, body) = match &output.tok {
|
||||
Token::S(Paren::Round, body) => (None, body),
|
||||
Token::Handle(expr) => match TypAtom::<Int>::try_from_expr(expr.clone()).await {
|
||||
let (prio, body) = match &prio_or_body.tok {
|
||||
Token::S(Paren::Round, body) => {
|
||||
expect_end(tail).await?;
|
||||
(None, body)
|
||||
},
|
||||
Token::Handle(expr) => match TAtom::<Int>::try_from_expr(expr.clone()).await {
|
||||
Err(e) => {
|
||||
return Err(e + bad_first_item_err().await);
|
||||
},
|
||||
Ok(prio) => {
|
||||
let Token::S(Paren::Round, block) = &output.tok else {
|
||||
let Parsed { output: body, tail } = try_pop_no_fluff(tail).await?;
|
||||
let Token::S(Paren::Round, block) = &body.tok else {
|
||||
return Err(
|
||||
token_errv(&ctx, output, "Expected () block", |s| {
|
||||
token_errv(prio_or_body, "Expected () block", |s| {
|
||||
format!("Expected a () block, found {s}")
|
||||
})
|
||||
.await,
|
||||
);
|
||||
};
|
||||
expect_end(tail).await?;
|
||||
(Some(prio), block)
|
||||
},
|
||||
},
|
||||
_ => return Err(bad_first_item_err().await),
|
||||
};
|
||||
expect_end(&ctx, tail).await?;
|
||||
let lines = line_items(&ctx, Snippet::new(output, body)).await;
|
||||
let lines = line_items(Snippet::new(prio_or_body, body)).await;
|
||||
let Some((kw_line, rule_lines)) = lines.split_first() else { return Ok(Vec::new()) };
|
||||
let mut keywords = HashMap::new();
|
||||
let Parsed { tail: kw_tail, .. } =
|
||||
expect_tok(&ctx, kw_line.tail, ctx.i().i("keywords").await).await?;
|
||||
let mut keywords = Vec::new();
|
||||
let Parsed { tail: kw_tail, .. } = expect_tok(kw_line.tail, is("keywords").await).await?;
|
||||
for kw_tok in kw_tail.iter().filter(|kw| !kw.is_fluff()) {
|
||||
match kw_tok.as_name() {
|
||||
Some(kw) => {
|
||||
keywords.insert(kw, kw_tok.sr());
|
||||
keywords.push((kw, kw_tok.sr()));
|
||||
},
|
||||
None => ctx.rep().report(
|
||||
token_errv(&ctx, kw_tok, "invalid macro keywords list", |tok| {
|
||||
None => report(
|
||||
token_errv(kw_tok, "invalid macro keywords list", |tok| {
|
||||
format!("The keywords list must be a sequence of names; received {tok}")
|
||||
})
|
||||
.await,
|
||||
),
|
||||
}
|
||||
}
|
||||
let Some(macro_name) = keywords.keys().next().cloned() else {
|
||||
let Some((macro_name, _)) = keywords.first().cloned() else {
|
||||
return Err(mk_errv(
|
||||
ctx.i().i("macro with no keywords").await,
|
||||
is("macro with no keywords").await,
|
||||
"Macros must define at least one macro of their own.",
|
||||
[kw_line.tail.sr()],
|
||||
));
|
||||
@@ -103,126 +99,75 @@ impl Parser for MacroLine {
|
||||
let mut rules = Vec::new();
|
||||
let mut lines = Vec::new();
|
||||
for (idx, line) in rule_lines.iter().enumerate().map(|(n, v)| (n as u32, v)) {
|
||||
let path = RulePath { module: module.clone(), main_kw: macro_name.clone(), rule: idx };
|
||||
let sr = line.tail.sr();
|
||||
let name = ctx.i().i(&path.name()).await;
|
||||
let Parsed { tail, .. } = expect_tok(&ctx, line.tail, ctx.i().i("rule").await).await?;
|
||||
let arrow_token = ctx.i().i("=>").await;
|
||||
let name = is(&format!("rule::{}::{}", macro_name, idx)).await;
|
||||
let Parsed { tail, .. } = expect_tok(line.tail, is("rule").await).await?;
|
||||
let arrow_token = is("=>").await;
|
||||
let Some((pattern, body)) = tail.split_once(|tok| tok.is_kw(arrow_token.clone())) else {
|
||||
ctx.rep().report(mk_errv(
|
||||
ctx.i().i("Missing => in rule").await,
|
||||
report(mk_errv(
|
||||
is("Missing => in rule").await,
|
||||
"Rule lines are of the form `rule ...pattern => ...body`",
|
||||
[line.tail.sr()],
|
||||
));
|
||||
continue;
|
||||
};
|
||||
let pattern = parse_tokv(pattern, &ctx).await;
|
||||
let pattern = parse_tokv(pattern).await;
|
||||
let mut placeholders = Vec::new();
|
||||
map_mactree_v(&pattern, &mut false, &mut |tok| {
|
||||
pattern.map(&mut false, &mut |tok| {
|
||||
if let MacTok::Ph(ph) = tok.tok() {
|
||||
placeholders.push((ph.clone(), tok.pos()))
|
||||
}
|
||||
None
|
||||
});
|
||||
let mut body_mactree = parse_tokv(body, &ctx).await;
|
||||
let mut body_mactree = parse_tokv(body).await;
|
||||
for (ph, ph_pos) in placeholders.iter().rev() {
|
||||
let name = ctx.module().suffix([ph.name.clone()], ctx.i()).await;
|
||||
body_mactree = vec![
|
||||
MacTok::Lambda(MacTok::Name(name).at(ph_pos.clone()), body_mactree).at(ph_pos.clone()),
|
||||
]
|
||||
let name = ctx.module().suffix([ph.name.clone()]).await;
|
||||
body_mactree =
|
||||
MacTreeSeq::new([
|
||||
MacTok::Lambda(MacTok::Name(name).at(ph_pos.clone()), body_mactree).at(ph_pos.clone())
|
||||
])
|
||||
}
|
||||
let body_sr = body.sr();
|
||||
rules.push((name.clone(), placeholders, rules.len() as u32, sr.pos(), pattern));
|
||||
rules.push((name.clone(), placeholders, pattern));
|
||||
lines.push(ParsedLine::cnst(&sr, &line.output, true, name, async move |ctx| {
|
||||
let rep = Reporter::new();
|
||||
let body = dealias_mac_v(body_mactree, &ctx, &rep).await;
|
||||
let macro_input = MacTok::S(Paren::Round, body).at(body_sr.pos());
|
||||
if let Some(e) = rep.errv() {
|
||||
return Err(e);
|
||||
}
|
||||
Ok(call([
|
||||
sym_ref(sym!(macros::resolve_recur; ctx.i()).await),
|
||||
atom(RecurState::base(path)),
|
||||
macro_input.to_expr().await,
|
||||
]))
|
||||
let macro_input =
|
||||
MacTok::S(Paren::Round, with_reporter(dealias_mac_v(&body_mactree, &ctx)).await?)
|
||||
.at(body_sr.pos());
|
||||
Ok(call(sym_ref(sym!(macros::resolve)), [macro_input.to_gen().await]))
|
||||
}))
|
||||
}
|
||||
let mac_cell = Rc::new(OnceCell::new());
|
||||
let keywords = Rc::new(keywords);
|
||||
let rules = Rc::new(RefCell::new(Some(rules)));
|
||||
let rules = Rc::new(rules);
|
||||
for (kw, sr) in &*keywords {
|
||||
clone!(mac_cell, keywords, rules, module, prio);
|
||||
lines.push(ParsedLine::cnst(&sr.clone(), &comments, true, kw.clone(), async move |cctx| {
|
||||
let mac = mac_cell
|
||||
.get_or_init(async {
|
||||
let rep = Reporter::new();
|
||||
let rules = rules.borrow_mut().take().expect("once cell initializer runs");
|
||||
let rules = stream::iter(rules)
|
||||
.then(|(body_name, placeholders, index, pos, pattern_macv)| {
|
||||
let cctx = &cctx;
|
||||
let rep = &rep;
|
||||
let prio = &prio;
|
||||
async move {
|
||||
let pattern_abs = dealias_mac_v(pattern_macv, cctx, rep).await;
|
||||
let glossary = glossary_v(&pattern_abs).collect();
|
||||
let pattern_res = match prio {
|
||||
None => NamedMatcher::new(&pattern_abs, cctx.i()).await.map(Matcher::Named),
|
||||
Some(_) => PriodMatcher::new(&pattern_abs, cctx.i()).await.map(Matcher::Priod),
|
||||
};
|
||||
let placeholders = placeholders.into_iter().map(|(ph, _)| ph.name).collect_vec();
|
||||
match pattern_res {
|
||||
Ok(pattern) =>
|
||||
Some(Rule { index, pos, body_name, pattern, glossary, placeholders }),
|
||||
Err(e) => {
|
||||
rep.report(e);
|
||||
None
|
||||
},
|
||||
}
|
||||
clone!(mac_cell, rules, module, macro_name, prio);
|
||||
let kw_key = is(&format!("__macro__{kw}")).await;
|
||||
lines.push(ParsedLine::cnst(&sr.clone(), &comments, true, kw_key, async move |cctx| {
|
||||
let mac_future = async {
|
||||
let rules = with_reporter(
|
||||
stream(async |mut h| {
|
||||
for (body, ph_names, pattern_rel) in rules.iter() {
|
||||
let pattern = dealias_mac_v(pattern_rel, &cctx).await;
|
||||
let ph_names = ph_names.iter().map(|(ph, _)| ph.name.clone()).collect_vec();
|
||||
match Matcher::new(pattern.clone()).await {
|
||||
Ok(matcher) =>
|
||||
h.emit(Rule { body: body.clone(), matcher, pattern, ph_names }).await,
|
||||
Err(e) => report(e),
|
||||
}
|
||||
})
|
||||
.flat_map(stream::iter)
|
||||
.collect::<Vec<_>>()
|
||||
.await;
|
||||
let own_kws = keywords.keys().cloned().collect_vec();
|
||||
Macro(Rc::new(MacroData { module, prio: prio.map(|i| i.0 as u64), rules, own_kws }))
|
||||
})
|
||||
.await;
|
||||
atom(mac.clone())
|
||||
}
|
||||
})
|
||||
.collect::<Vec<_>>(),
|
||||
)
|
||||
.await?;
|
||||
Ok(Macro(Rc::new(MacroData {
|
||||
canonical_name: module.suffix([macro_name]).await,
|
||||
module,
|
||||
prio: prio.map(|i| i.0 as u64),
|
||||
rules,
|
||||
})))
|
||||
};
|
||||
mac_cell.get_or_init(mac_future).await.clone().to_gen().await
|
||||
}))
|
||||
}
|
||||
Ok(lines)
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Debug)]
pub struct MacroData {
  pub module: Sym,
  pub prio: Option<u64>,
  pub rules: Vec<Rule>,
  pub own_kws: Vec<Tok<String>>,
}

#[derive(Clone, Debug)]
pub struct Macro(pub Rc<MacroData>);
#[derive(Debug)]
pub struct Rule {
  pub index: u32,
  pub pos: Pos,
  pub pattern: Matcher,
  pub glossary: HashSet<Sym>,
  pub placeholders: Vec<Tok<String>>,
  pub body_name: Tok<String>,
}
#[derive(Debug)]
pub enum Matcher {
  Named(NamedMatcher),
  Priod(PriodMatcher),
}
impl Atomic for Macro {
  type Data = ();
  type Variant = OwnedVariant;
}
impl OwnedAtom for Macro {
  type Refs = Never;
  async fn val(&self) -> Cow<'_, Self::Data> { Cow::Owned(()) }
}
|
||||
|
||||
@@ -1,32 +1,35 @@
|
||||
use never::Never;
|
||||
use orchid_base::interner::Interner;
|
||||
use orchid_base::name::Sym;
|
||||
use orchid_base::reqnot::Receipt;
|
||||
use orchid_base::reqnot::{Receipt, ReqHandle};
|
||||
use orchid_base::sym;
|
||||
use orchid_extension::atom::{AtomDynfo, AtomicFeatures};
|
||||
use orchid_extension::entrypoint::ExtReq;
|
||||
use orchid_extension::lexer::LexerObj;
|
||||
use orchid_extension::other_system::SystemHandle;
|
||||
use orchid_extension::parser::ParserObj;
|
||||
use orchid_extension::system::{System, SystemCard};
|
||||
use orchid_extension::system_ctor::SystemCtor;
|
||||
use orchid_extension::tree::GenMember;
|
||||
use orchid_extension::tree::{GenMember, merge_trivial};
|
||||
|
||||
use crate::macros::instantiate_tpl::InstantiateTplCall;
|
||||
use crate::macros::let_line::LetLine;
|
||||
use crate::macros::macro_lib::gen_macro_lib;
|
||||
use crate::macros::macro_line::{Macro, MacroLine};
|
||||
use crate::macros::macro_line::MacroLine;
|
||||
use crate::macros::macro_value::Macro;
|
||||
use crate::macros::mactree_lexer::MacTreeLexer;
|
||||
use crate::macros::recur_state::RecurState;
|
||||
use crate::macros::match_macros::{MatcherAtom, gen_match_macro_lib};
|
||||
use crate::macros::ph_lexer::{PhAtom, PhLexer};
|
||||
use crate::macros::std_macros::gen_std_macro_lib;
|
||||
use crate::macros::utils::MacroBodyArgCollector;
|
||||
use crate::{MacTree, StdSystem};
|
||||
|
||||
#[derive(Default)]
|
||||
#[derive(Debug, Default)]
|
||||
pub struct MacroSystem;
|
||||
impl SystemCtor for MacroSystem {
|
||||
type Deps = StdSystem;
|
||||
type Instance = Self;
|
||||
const NAME: &'static str = "orchid::macros";
|
||||
const VERSION: f64 = 0.00_01;
|
||||
fn inst(_: SystemHandle<StdSystem>) -> Self::Instance { Self }
|
||||
fn inst(&self, _: SystemHandle<StdSystem>) -> Self::Instance { Self }
|
||||
}
|
||||
impl SystemCard for MacroSystem {
|
||||
type Ctor = Self;
|
||||
@@ -35,15 +38,32 @@ impl SystemCard for MacroSystem {
|
||||
[
|
||||
Some(InstantiateTplCall::dynfo()),
|
||||
Some(MacTree::dynfo()),
|
||||
Some(RecurState::dynfo()),
|
||||
Some(Macro::dynfo()),
|
||||
Some(PhAtom::dynfo()),
|
||||
Some(MacroBodyArgCollector::dynfo()),
|
||||
Some(MatcherAtom::dynfo()),
|
||||
]
|
||||
}
|
||||
}
|
||||
impl System for MacroSystem {
|
||||
async fn request(_: ExtReq<'_>, req: Self::Req) -> Receipt<'_> { match req {} }
|
||||
async fn prelude(_: &Interner) -> Vec<Sym> { vec![] }
|
||||
fn lexers() -> Vec<LexerObj> { vec![&MacTreeLexer] }
|
||||
async fn request<'a>(_: Box<dyn ReqHandle<'a> + 'a>, req: Never) -> Receipt<'a> { match req {} }
|
||||
async fn prelude() -> Vec<Sym> {
|
||||
vec![
|
||||
sym!(macros::common::+),
|
||||
sym!(macros::common::*),
|
||||
sym!(macros::common::,),
|
||||
sym!(macros::common::;),
|
||||
sym!(macros::common::..),
|
||||
sym!(macros::common::_),
|
||||
sym!(std::tuple::t),
|
||||
sym!(pattern::match),
|
||||
sym!(pattern::ref),
|
||||
sym!(pattern::=>),
|
||||
]
|
||||
}
|
||||
fn lexers() -> Vec<LexerObj> { vec![&MacTreeLexer, &PhLexer] }
|
||||
fn parsers() -> Vec<ParserObj> { vec![&LetLine, &MacroLine] }
|
||||
fn env() -> Vec<GenMember> { gen_macro_lib() }
|
||||
async fn env() -> Vec<GenMember> {
|
||||
merge_trivial([gen_macro_lib().await, gen_std_macro_lib().await, gen_match_macro_lib().await])
|
||||
}
|
||||
}
|
||||
|
||||
38
orchid-std/src/macros/macro_value.rs
Normal file
@@ -0,0 +1,38 @@
use std::borrow::Cow;
use std::rc::Rc;

use never::Never;
use orchid_base::interner::IStr;
use orchid_base::name::Sym;
use orchid_extension::atom::Atomic;
use orchid_extension::atom_owned::{OwnedAtom, OwnedVariant};

use crate::macros::mactree::MacTreeSeq;
use crate::macros::rule::matcher::Matcher;

#[derive(Debug)]
pub struct MacroData {
  pub canonical_name: Sym,
  pub module: Sym,
  pub prio: Option<u64>,
  pub rules: Vec<Rule>,
}

#[derive(Clone, Debug)]
pub struct Macro(pub Rc<MacroData>);

#[derive(Debug)]
pub struct Rule {
  pub pattern: MacTreeSeq,
  pub matcher: Matcher,
  pub ph_names: Vec<IStr>,
  pub body: IStr,
}
impl Atomic for Macro {
  type Data = ();
  type Variant = OwnedVariant;
}
impl OwnedAtom for Macro {
  type Refs = Never;
  async fn val(&self) -> Cow<'_, Self::Data> { Cow::Owned(()) }
}
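// Editor's note (illustrative comment, not part of the diff): as this new file reads,
// `Macro` is a clone-by-Rc handle around `MacroData`, and each `Rule` keeps the
// source-level `pattern` alongside its compiled `matcher`, the placeholder names,
// and `body`, the interned name of the generated constant holding the rule's right-hand side.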
|
||||
@@ -5,26 +5,99 @@ use std::rc::Rc;
|
||||
use futures::FutureExt;
|
||||
use futures::future::join_all;
|
||||
use hashbrown::HashSet;
|
||||
use itertools::Itertools;
|
||||
use orchid_base::error::{OrcErrv, Reporter, mk_errv};
|
||||
use orchid_base::format::{FmtCtx, FmtUnit, Format, Variants, fmt};
|
||||
use orchid_base::interner::Tok;
|
||||
use orchid_api_derive::Coding;
|
||||
use orchid_base::error::OrcErrv;
|
||||
use orchid_base::format::{FmtCtx, FmtUnit, Format, Variants};
|
||||
use orchid_base::interner::IStr;
|
||||
use orchid_base::location::Pos;
|
||||
use orchid_base::name::Sym;
|
||||
use orchid_base::tl_cache;
|
||||
use orchid_base::tree::{Paren, indent};
|
||||
use orchid_extension::atom::Atomic;
|
||||
use orchid_extension::atom_owned::{OwnedAtom, OwnedVariant};
|
||||
use orchid_extension::conv::ToExpr;
|
||||
use orchid_extension::expr::Expr;
|
||||
use orchid_extension::gen_expr::{GExpr, arg, bot, call, lambda, sym_ref};
|
||||
use orchid_extension::system::SysCtx;
|
||||
use substack::Substack;
|
||||
|
||||
#[derive(Clone)]
|
||||
pub struct LowerCtx<'a> {
|
||||
pub sys: SysCtx,
|
||||
pub rep: &'a Reporter,
|
||||
fn union_rc_sets(seq: impl IntoIterator<Item = Rc<HashSet<Sym>>>) -> Rc<HashSet<Sym>> {
|
||||
let mut acc = Rc::<HashSet<Sym>>::default();
|
||||
for right in seq {
|
||||
if acc.is_empty() {
|
||||
acc = right;
|
||||
continue;
|
||||
}
|
||||
if right.is_empty() {
|
||||
continue;
|
||||
}
|
||||
acc = match (Rc::try_unwrap(acc), Rc::try_unwrap(right)) {
|
||||
(Ok(mut left), Ok(right)) => {
|
||||
left.extend(right);
|
||||
Rc::new(left)
|
||||
},
|
||||
(Ok(mut owned), Err(borrowed)) | (Err(borrowed), Ok(mut owned)) => {
|
||||
owned.extend(borrowed.iter().cloned());
|
||||
Rc::new(owned)
|
||||
},
|
||||
(Err(left), Err(right)) => Rc::new(left.union(&right).cloned().collect()),
|
||||
}
|
||||
}
|
||||
acc
|
||||
}
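// Editor's note (illustrative sketch, not part of the diff): `union_rc_sets` merges
// shared glossaries while reusing an allocation whenever one side is uniquely owned,
// so the common "merge a fresh set into a shared one" case avoids a full clone.
// Hypothetical usage, assuming `a` and `b` are `MacTree`s with an accessible
// `glossary: Rc<HashSet<Sym>>` field:
//   let merged = union_rc_sets([a.glossary.clone(), b.glossary.clone()]);
//   debug_assert!(a.glossary.iter().all(|sym| merged.contains(sym)));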
|
||||
|
||||
#[derive(Debug, Clone)]
|
||||
pub struct MacTreeSeq {
|
||||
pub items: Rc<Vec<MacTree>>,
|
||||
pub top_glossary: Rc<HashSet<Sym>>,
|
||||
pub glossary: Rc<HashSet<Sym>>,
|
||||
}
|
||||
impl MacTreeSeq {
|
||||
pub fn new(i: impl IntoIterator<Item = MacTree>) -> Self {
|
||||
let mut items = Vec::new();
|
||||
let mut top_glossary = HashSet::new();
|
||||
let mut glossary = HashSet::new();
|
||||
for item in i {
|
||||
glossary.extend(item.glossary().iter().cloned());
|
||||
if let MacTok::Name(n) = item.tok() {
|
||||
top_glossary.insert(n.clone());
|
||||
}
|
||||
items.push(item);
|
||||
}
|
||||
Self { items: Rc::new(items), top_glossary: Rc::new(top_glossary), glossary: Rc::new(glossary) }
|
||||
}
|
||||
pub fn map<F: FnMut(MacTree) -> Option<MacTree>>(&self, changed: &mut bool, map: &mut F) -> Self {
|
||||
Self::new(self.items.iter().map(|tree| ro(changed, |changed| tree.map(changed, map))))
|
||||
}
|
||||
pub fn glossary(&self) -> &HashSet<Sym> { &self.glossary }
|
||||
pub fn concat(self, other: Self) -> Self {
|
||||
if self.items.is_empty() {
|
||||
return other;
|
||||
} else if other.items.is_empty() {
|
||||
return self;
|
||||
}
|
||||
let items = match (Rc::try_unwrap(self.items), Rc::try_unwrap(other.items)) {
|
||||
(Ok(mut left), Ok(mut right)) => {
|
||||
left.append(&mut right);
|
||||
left
|
||||
},
|
||||
(Ok(mut left), Err(right)) => {
|
||||
left.extend_from_slice(&right[..]);
|
||||
left
|
||||
},
|
||||
(Err(left), Ok(mut right)) => {
|
||||
right.splice(0..0, left.iter().cloned());
|
||||
right
|
||||
},
|
||||
(Err(left), Err(right)) => left.iter().chain(&right[..]).cloned().collect(),
|
||||
};
|
||||
Self {
|
||||
items: Rc::new(items),
|
||||
top_glossary: union_rc_sets([self.top_glossary, other.top_glossary]),
|
||||
glossary: union_rc_sets([self.glossary, other.glossary]),
|
||||
}
|
||||
}
|
||||
}
|
||||
impl Format for MacTreeSeq {
|
||||
async fn print<'a>(&'a self, c: &'a (impl FmtCtx + ?Sized + 'a)) -> FmtUnit {
|
||||
mtreev_fmt(&self.items[..], c).await
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Debug, Clone)]
|
||||
@@ -37,48 +110,21 @@ impl MacTree {
|
||||
pub fn tok(&self) -> &MacTok { &self.tok }
|
||||
pub fn pos(&self) -> Pos { self.pos.clone() }
|
||||
pub fn glossary(&self) -> &HashSet<Sym> { &self.glossary }
|
||||
pub async fn lower(&self, ctx: LowerCtx<'_>, args: Substack<'_, Sym>) -> GExpr {
|
||||
let expr = match self.tok() {
|
||||
MacTok::Bottom(e) => bot(e.clone()),
|
||||
MacTok::Lambda(arg, body) => {
|
||||
let MacTok::Name(name) = &*arg.tok else {
|
||||
let err = mk_errv(
|
||||
ctx.sys.i().i("Syntax error after macros").await,
|
||||
"This token ends up as a binding, consider replacing it with a name",
|
||||
[arg.pos()],
|
||||
);
|
||||
ctx.rep.report(err.clone());
|
||||
return bot(err);
|
||||
};
|
||||
lambda(args.len() as u64, lower_v(body, ctx, args.push(name.clone())).await)
|
||||
pub fn map<F: FnMut(Self) -> Option<Self>>(&self, changed: &mut bool, map: &mut F) -> Self {
|
||||
let tok = match map(self.clone()) {
|
||||
Some(new_tok) => {
|
||||
*changed = true;
|
||||
return new_tok;
|
||||
},
|
||||
MacTok::Name(name) => match args.iter().enumerate().find(|(_, n)| *n == name) {
|
||||
None => sym_ref(name.clone()),
|
||||
Some((i, _)) => arg((args.len() - i) as u64),
|
||||
None => match &*self.tok {
|
||||
MacTok::Lambda(arg, body) =>
|
||||
MacTok::Lambda(ro(changed, |changed| arg.map(changed, map)), body.map(changed, map)),
|
||||
MacTok::Name(_) | MacTok::Value(_) => return self.clone(),
|
||||
MacTok::Slot | MacTok::Ph(_) | MacTok::Bottom(_) => return self.clone(),
|
||||
MacTok::S(p, body) => MacTok::S(*p, body.map(changed, map)),
|
||||
},
|
||||
MacTok::Ph(ph) => {
|
||||
let err = mk_errv(
|
||||
ctx.sys.i().i("Placeholder in value").await,
|
||||
format!("Placeholder {ph} is only supported in macro patterns"),
|
||||
[self.pos()],
|
||||
);
|
||||
ctx.rep.report(err.clone());
|
||||
return bot(err);
|
||||
},
|
||||
MacTok::S(Paren::Round, body) => call(lower_v(body, ctx, args).await),
|
||||
MacTok::S(..) => {
|
||||
let err = mk_errv(
|
||||
ctx.sys.i().i("[] or {} after macros").await,
|
||||
format!("{} didn't match any macro", fmt(self, ctx.sys.i()).await),
|
||||
[self.pos()],
|
||||
);
|
||||
ctx.rep.report(err.clone());
|
||||
return bot(err);
|
||||
},
|
||||
MacTok::Slot => panic!("Uninstantiated template should never be exposed"),
|
||||
MacTok::Value(v) => v.clone().to_expr().await,
|
||||
};
|
||||
expr.at(self.pos())
|
||||
if *changed { tok.at(self.pos()) } else { self.clone() }
|
||||
}
|
||||
}
|
||||
impl Atomic for MacTree {
|
||||
@@ -90,7 +136,8 @@ impl OwnedAtom for MacTree {
|
||||
|
||||
async fn val(&self) -> Cow<'_, Self::Data> { Cow::Owned(()) }
|
||||
async fn print_atom<'a>(&'a self, c: &'a (impl FmtCtx + ?Sized + 'a)) -> FmtUnit {
|
||||
self.tok.print(c).await
|
||||
tl_cache!(Rc<Variants>: Rc::new(Variants::default().bounded("'{0}")))
|
||||
.units([self.tok.print(c).await])
|
||||
}
|
||||
}
|
||||
impl Format for MacTree {
|
||||
@@ -99,57 +146,49 @@ impl Format for MacTree {
|
||||
}
|
||||
}
|
||||
|
||||
pub async fn lower_v(v: &[MacTree], ctx: LowerCtx<'_>, args: Substack<'_, Sym>) -> Vec<GExpr> {
|
||||
join_all(v.iter().map(|t| t.lower(ctx.clone(), args.clone())).collect::<Vec<_>>()).await
|
||||
}
|
||||
|
||||
#[derive(Debug, Clone)]
|
||||
pub enum MacTok {
|
||||
S(Paren, Vec<MacTree>),
|
||||
S(Paren, MacTreeSeq),
|
||||
Name(Sym),
|
||||
/// Only permitted in arguments to `instantiate_tpl`
|
||||
Slot,
|
||||
Value(Expr),
|
||||
Lambda(MacTree, Vec<MacTree>),
|
||||
Lambda(MacTree, MacTreeSeq),
|
||||
/// Only permitted in "pattern" values produced by macro blocks, which are
|
||||
/// never accessed as variables by usercode
|
||||
Ph(Ph),
|
||||
Bottom(OrcErrv),
|
||||
}
|
||||
impl MacTok {
|
||||
pub fn build_glossary(&self) -> HashSet<Sym> {
|
||||
pub fn build_glossary(&self) -> Rc<HashSet<Sym>> {
|
||||
match self {
|
||||
MacTok::Bottom(_) | MacTok::Ph(_) | MacTok::Slot | MacTok::Value(_) => HashSet::new(),
|
||||
MacTok::Name(sym) => HashSet::from([sym.clone()]),
|
||||
MacTok::S(_, body) => body.iter().flat_map(|mt| &*mt.glossary).cloned().collect(),
|
||||
MacTok::Bottom(_) | MacTok::Ph(_) | MacTok::Slot | MacTok::Value(_) => Rc::default(),
|
||||
MacTok::Name(sym) => Rc::new(HashSet::from([sym.clone()])),
|
||||
MacTok::S(_, body) => union_rc_sets(body.items.iter().map(|mt| mt.glossary.clone())),
|
||||
MacTok::Lambda(arg, body) =>
|
||||
body.iter().chain([arg]).flat_map(|mt| &*mt.glossary).cloned().collect(),
|
||||
union_rc_sets(body.items.iter().chain([arg]).map(|mt| mt.glossary.clone())),
|
||||
}
|
||||
}
|
||||
pub fn at(self, pos: impl Into<Pos>) -> MacTree {
|
||||
MacTree { pos: pos.into(), glossary: Rc::new(self.build_glossary()), tok: Rc::new(self) }
|
||||
MacTree { pos: pos.into(), glossary: self.build_glossary(), tok: Rc::new(self) }
|
||||
}
|
||||
}
|
||||
impl Format for MacTok {
|
||||
async fn print<'a>(&'a self, c: &'a (impl FmtCtx + ?Sized + 'a)) -> FmtUnit {
|
||||
match self {
|
||||
Self::Value(v) => v.print(c).await,
|
||||
Self::Lambda(arg, b) => FmtUnit::new(
|
||||
tl_cache!(Rc<Variants>: Rc::new(Variants::default()
|
||||
.unbounded("\\{0b}.{1l}")
|
||||
.bounded("(\\{0b}.{1b})"))),
|
||||
[arg.print(c).boxed_local().await, mtreev_fmt(b, c).await],
|
||||
),
|
||||
Self::Lambda(arg, b) => tl_cache!(Rc<Variants>: Rc::new(Variants::default()
|
||||
.unbounded("\\{0} {1l}")
|
||||
.bounded("(\\{0} {1b})")))
|
||||
.units([arg.print(c).boxed_local().await, b.print(c).await]),
|
||||
Self::Name(n) => format!("{n}").into(),
|
||||
Self::Ph(ph) => format!("{ph}").into(),
|
||||
Self::S(p, body) => FmtUnit::new(
|
||||
match *p {
|
||||
Paren::Round => Rc::new(Variants::default().bounded("({0b})")),
|
||||
Paren::Curly => Rc::new(Variants::default().bounded("{{0b}}")),
|
||||
Paren::Square => Rc::new(Variants::default().bounded("[{0b}]")),
|
||||
},
|
||||
[mtreev_fmt(body, c).await],
|
||||
),
|
||||
Self::S(p, body) => match *p {
|
||||
Paren::Round => tl_cache!(Rc<Variants>: Rc::new(Variants::default().bounded("({0b})"))),
|
||||
Paren::Curly => tl_cache!(Rc<Variants>: Rc::new(Variants::default().bounded("{{0b}}"))),
|
||||
Paren::Square => tl_cache!(Rc<Variants>: Rc::new(Variants::default().bounded("[{0b}]"))),
|
||||
}
|
||||
.units([body.print(c).await]),
|
||||
Self::Slot => "$SLOT".into(),
|
||||
Self::Bottom(err) if err.len() == 1 => format!("Bottom({}) ", err.one().unwrap()).into(),
|
||||
Self::Bottom(err) => format!("Botttom(\n{}) ", indent(&err.to_string())).into(),
|
||||
@@ -161,12 +200,12 @@ pub async fn mtreev_fmt<'b>(
|
||||
v: impl IntoIterator<Item = &'b MacTree>,
|
||||
c: &(impl FmtCtx + ?Sized),
|
||||
) -> FmtUnit {
|
||||
FmtUnit::sequence(" ", None, join_all(v.into_iter().map(|t| t.print(c))).await)
|
||||
FmtUnit::sequence("", " ", "", None, join_all(v.into_iter().map(|t| t.print(c))).await)
|
||||
}
|
||||
|
||||
#[derive(Clone, Debug, Hash, PartialEq, Eq)]
|
||||
pub struct Ph {
|
||||
pub name: Tok<String>,
|
||||
pub name: IStr,
|
||||
pub kind: PhKind,
|
||||
}
|
||||
impl Display for Ph {
|
||||
@@ -181,42 +220,12 @@ impl Display for Ph {
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Copy, Clone, Debug, Hash, PartialEq, Eq)]
|
||||
#[derive(Copy, Clone, Debug, Hash, PartialEq, Eq, Coding)]
|
||||
pub enum PhKind {
|
||||
Scalar,
|
||||
Vector { at_least_one: bool, priority: u8 },
|
||||
}
|
||||
|
||||
pub fn map_mactree<F: FnMut(MacTree) -> Option<MacTree>>(
|
||||
src: &MacTree,
|
||||
changed: &mut bool,
|
||||
map: &mut F,
|
||||
) -> MacTree {
|
||||
let tok = match map(src.clone()) {
|
||||
Some(new_tok) => {
|
||||
*changed = true;
|
||||
return new_tok;
|
||||
},
|
||||
None => match &*src.tok {
|
||||
MacTok::Lambda(arg, body) => MacTok::Lambda(
|
||||
ro(changed, |changed| map_mactree(arg, changed, map)),
|
||||
map_mactree_v(body, changed, map),
|
||||
),
|
||||
MacTok::Name(_) | MacTok::Value(_) => return src.clone(),
|
||||
MacTok::Slot | MacTok::Ph(_) | MacTok::Bottom(_) => return src.clone(),
|
||||
MacTok::S(p, body) => MacTok::S(*p, map_mactree_v(body, changed, map)),
|
||||
},
|
||||
};
|
||||
if *changed { tok.at(src.pos()) } else { src.clone() }
|
||||
}
|
||||
pub fn map_mactree_v<F: FnMut(MacTree) -> Option<MacTree>>(
|
||||
src: &[MacTree],
|
||||
changed: &mut bool,
|
||||
map: &mut F,
|
||||
) -> Vec<MacTree> {
|
||||
src.iter().map(|tree| ro(changed, |changed| map_mactree(tree, changed, map))).collect_vec()
|
||||
}
|
||||
|
||||
/// reverse "or". Inside, the flag is always false, but raising it will raise
|
||||
/// the outside flag too.
|
||||
fn ro<T>(flag: &mut bool, cb: impl FnOnce(&mut bool) -> T) -> T {
|
||||
@@ -225,7 +234,3 @@ fn ro<T>(flag: &mut bool, cb: impl FnOnce(&mut bool) -> T) -> T {
|
||||
*flag |= new_flag;
|
||||
val
|
||||
}
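// Editor's note (illustrative sketch, not part of the diff): `ro` lets a caller observe
// whether a nested `map` pass rewrote anything, without an already-raised outer flag
// masking the inner result. Hypothetical usage, with `rewrite: fn(MacTree) -> Option<MacTree>`:
//   let mut changed = false; // may already be true from an earlier pass
//   let mapped = ro(&mut changed, |inner| tree.map(inner, &mut |t| rewrite(t)));
//   // `changed` is now true iff it was already true, or `rewrite` returned Some for some node.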
|
||||
|
||||
pub fn glossary_v(src: &[MacTree]) -> impl Iterator<Item = Sym> {
|
||||
src.iter().flat_map(|mt| mt.glossary()).cloned()
|
||||
}
|
||||
|
||||
@@ -1,46 +1,44 @@
|
||||
use std::ops::RangeInclusive;
|
||||
|
||||
use futures::FutureExt;
|
||||
use itertools::chain;
|
||||
use orchid_base::error::{OrcRes, mk_errv};
|
||||
use orchid_base::parse::ParseCtx;
|
||||
use orchid_base::sym;
|
||||
use orchid_base::interner::is;
|
||||
use orchid_base::tokens::PARENS;
|
||||
use orchid_base::tree::Paren;
|
||||
use orchid_extension::lexer::{LexContext, Lexer, err_not_applicable};
|
||||
use orchid_extension::parser::p_tree2gen;
|
||||
use orchid_extension::tree::{GenTok, GenTokTree, ref_tok, x_tok};
|
||||
use orchid_extension::tree::{GenTok, GenTokTree, x_tok};
|
||||
|
||||
use crate::macros::instantiate_tpl::InstantiateTplCall;
|
||||
use crate::macros::let_line::parse_tok;
|
||||
use crate::macros::mactree::{MacTok, MacTree};
|
||||
use crate::macros::mactree::{MacTok, MacTree, MacTreeSeq};
|
||||
|
||||
#[derive(Default)]
|
||||
#[derive(Debug, Default)]
|
||||
pub struct MacTreeLexer;
|
||||
impl Lexer for MacTreeLexer {
|
||||
const CHAR_FILTER: &'static [RangeInclusive<char>] = &['\''..='\''];
|
||||
async fn lex<'a>(tail: &'a str, ctx: &'a LexContext<'a>) -> OrcRes<(&'a str, GenTokTree)> {
|
||||
async fn lex<'a>(tail: &'a str, lctx: &'a LexContext<'a>) -> OrcRes<(&'a str, GenTokTree)> {
|
||||
let Some(tail2) = tail.strip_prefix('\'') else {
|
||||
return Err(err_not_applicable(ctx.i()).await);
|
||||
return Err(err_not_applicable().await);
|
||||
};
|
||||
let tail3 = tail2.trim_start();
|
||||
let mut args = Vec::new();
|
||||
return match mac_tree(tail3, &mut args, ctx).await {
|
||||
return match mac_tree(tail3, &mut args, lctx).await {
|
||||
Ok((tail4, mactree)) => {
|
||||
let range = ctx.pos_tt(tail, tail4);
|
||||
let range = lctx.pos_tt(tail, tail4);
|
||||
let tok = match &args[..] {
|
||||
[] => x_tok(mactree).await,
|
||||
_ => {
|
||||
let call = ([
|
||||
ref_tok(sym!(macros::instantiate_tpl; ctx.i()).await).await.at(range.clone()),
|
||||
x_tok(mactree).await.at(range.clone()),
|
||||
]
|
||||
.into_iter())
|
||||
.chain(args.into_iter());
|
||||
let instantiate_tpl_call =
|
||||
InstantiateTplCall { argc: args.len(), argv: vec![], tpl: mactree };
|
||||
let call = chain!([x_tok(instantiate_tpl_call).await.at(range.clone())], args);
|
||||
GenTok::S(Paren::Round, call.collect())
|
||||
},
|
||||
};
|
||||
Ok((tail4, tok.at(range)))
|
||||
},
|
||||
Err(e) => Ok((tail2, GenTok::Bottom(e).at(ctx.pos_lt(1, tail2)))),
|
||||
Err(e) => Ok((tail2, GenTok::Bottom(e).at(lctx.pos_lt(1, tail2)))),
|
||||
};
|
||||
async fn mac_tree<'a>(
|
||||
tail: &'a str,
|
||||
@@ -53,13 +51,12 @@ impl Lexer for MacTreeLexer {
|
||||
return loop {
|
||||
let tail2 = body_tail.trim_start();
|
||||
if let Some(tail3) = tail2.strip_prefix(*rp) {
|
||||
break Ok((tail3, MacTok::S(*paren, items).at(ctx.pos_tt(tail, tail3).pos())));
|
||||
let tok = MacTok::S(*paren, MacTreeSeq::new(items));
|
||||
break Ok((tail3, tok.at(ctx.pos_tt(tail, tail3).pos())));
|
||||
} else if tail2.is_empty() {
|
||||
return Err(mk_errv(
|
||||
ctx.i().i("Unclosed block").await,
|
||||
format!("Expected closing {rp}"),
|
||||
[ctx.pos_lt(1, tail)],
|
||||
));
|
||||
return Err(mk_errv(is("Unclosed block").await, format!("Expected closing {rp}"), [
|
||||
ctx.pos_lt(1, tail),
|
||||
]));
|
||||
}
|
||||
let (new_tail, new_item) = mac_tree(tail2, args, ctx).boxed_local().await?;
|
||||
body_tail = new_tail;
|
||||
@@ -85,10 +82,10 @@ impl Lexer for MacTreeLexer {
|
||||
body.push(body_tok);
|
||||
tail3 = tail5;
|
||||
}
|
||||
Ok((tail3, MacTok::Lambda(param, body).at(ctx.pos_tt(tail, tail3).pos())))
|
||||
Ok((tail3, MacTok::Lambda(param, MacTreeSeq::new(body)).at(ctx.pos_tt(tail, tail3).pos())))
|
||||
} else {
|
||||
let (tail2, sub) = ctx.recurse(tail).await?;
|
||||
let parsed = parse_tok(&sub, ctx).await.expect("Unexpected invalid token");
|
||||
let parsed = parse_tok(&sub).await.expect("Unexpected invalid token");
|
||||
Ok((tail2, parsed))
|
||||
}
|
||||
}
|
||||
|
||||
188
orchid-std/src/macros/match_macros.rs
Normal file
@@ -0,0 +1,188 @@
|
||||
use std::borrow::Cow;
|
||||
|
||||
use async_fn_stream::stream;
|
||||
use futures::future::join_all;
|
||||
use futures::{Stream, StreamExt, stream};
|
||||
use never::Never;
|
||||
use orchid_api::ExprTicket;
|
||||
use orchid_api_derive::Coding;
|
||||
use orchid_base::error::{OrcRes, mk_errv};
|
||||
use orchid_base::format::fmt;
|
||||
use orchid_base::interner::is;
|
||||
use orchid_base::name::Sym;
|
||||
use orchid_base::sym;
|
||||
use orchid_extension::atom::{Atomic, TAtom};
|
||||
use orchid_extension::atom_owned::{OwnedAtom, OwnedVariant, own};
|
||||
use orchid_extension::conv::ToExpr;
|
||||
use orchid_extension::coroutine_exec::{ExecHandle, exec};
|
||||
use orchid_extension::expr::{Expr, ExprHandle};
|
||||
use orchid_extension::gen_expr::{GExpr, arg, bot, call, lambda, sym_ref};
|
||||
use orchid_extension::tree::{GenMember, fun, prefix};
|
||||
|
||||
use crate::macros::resolve::resolve;
|
||||
use crate::macros::utils::{build_macro, mactree, mactreev};
|
||||
use crate::std::reflection::sym_atom::SymAtom;
|
||||
use crate::std::tuple::Tuple;
|
||||
use crate::{HomoTpl, MacTok, MacTree, OrcOpt, Tpl, UntypedTuple, api};
|
||||
|
||||
#[derive(Clone, Coding)]
|
||||
pub struct MatcherData {
|
||||
keys: Vec<api::TStrv>,
|
||||
matcher: ExprTicket,
|
||||
}
|
||||
impl MatcherData {
|
||||
async fn matcher(&self) -> Expr { Expr::from_handle(ExprHandle::from_ticket(self.matcher).await) }
|
||||
pub async fn run_matcher(
|
||||
&self,
|
||||
h: &mut ExecHandle<'_>,
|
||||
val: impl ToExpr,
|
||||
) -> OrcRes<OrcOpt<HomoTpl<Expr>>> {
|
||||
h.exec::<OrcOpt<HomoTpl<Expr>>>(call(self.matcher().await.to_gen().await, [val.to_gen().await]))
|
||||
.await
|
||||
}
|
||||
pub fn keys(&self) -> impl Stream<Item = Sym> {
|
||||
stream(async |mut h| {
|
||||
for tk in &self.keys {
|
||||
h.emit(Sym::from_api(*tk).await).await
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
#[derive(Clone)]
|
||||
pub struct MatcherAtom {
|
||||
/// The names that subresults may be bound to
|
||||
pub(super) keys: Vec<Sym>,
|
||||
/// Takes the value-to-be-matched, returns an `option (tuple T1..TN)` of the
|
||||
/// subresults to be bound to the names returned by [Self::keys]
|
||||
pub(super) matcher: Expr,
|
||||
}
|
||||
impl Atomic for MatcherAtom {
|
||||
type Data = MatcherData;
|
||||
type Variant = OwnedVariant;
|
||||
}
|
||||
impl OwnedAtom for MatcherAtom {
|
||||
type Refs = Never;
|
||||
async fn val(&self) -> std::borrow::Cow<'_, Self::Data> {
|
||||
Cow::Owned(MatcherData {
|
||||
keys: self.keys.iter().map(|t| t.to_api()).collect(),
|
||||
matcher: self.matcher.handle().ticket(),
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
pub async fn gen_match_macro_lib() -> Vec<GenMember> {
|
||||
prefix("pattern", [
|
||||
fun(
|
||||
true,
|
||||
"match_one",
|
||||
async |mat: TAtom<MatcherAtom>, value: Expr, then: Expr, default: Expr| {
|
||||
exec(async move |mut h| match mat.run_matcher(&mut h, value).await? {
|
||||
OrcOpt(Some(values)) =>
|
||||
Ok(call(then.to_gen().await, join_all(values.0.into_iter().map(|x| x.to_gen())).await)),
|
||||
OrcOpt(None) => Ok(default.to_gen().await),
|
||||
})
|
||||
.await
|
||||
},
|
||||
),
|
||||
fun(true, "matcher", async |names: HomoTpl<TAtom<SymAtom>>, matcher: Expr| MatcherAtom {
|
||||
keys: join_all(names.0.iter().map(async |atm| Sym::from_api(atm.0).await)).await,
|
||||
matcher,
|
||||
}),
|
||||
build_macro(None, ["match", "match_rule", "_row", "=>"])
|
||||
.rule(mactreev!("pattern::match" "...$" value 0 { "..$" rules 0 }), [
|
||||
async |[value, rules]| {
|
||||
exec(async move |mut h| {
|
||||
let rule_lines = h
|
||||
.exec::<TAtom<Tuple>>(call(sym_ref(sym!(macros::resolve)), [
|
||||
mactree!(macros::common::semi_list "push" rules.clone();).to_gen().await,
|
||||
]))
|
||||
.await?;
|
||||
let mut rule_atoms = Vec::<(TAtom<MatcherAtom>, Expr)>::new();
|
||||
for line_exprh in rule_lines.iter() {
|
||||
let line_mac = h
|
||||
.exec::<TAtom<MacTree>>(Expr::from_handle(
|
||||
ExprHandle::from_ticket(*line_exprh).await,
|
||||
))
|
||||
.await?;
|
||||
let Tpl((matcher, body)) = h
|
||||
.exec(call(sym_ref(sym!(macros::resolve)), [
|
||||
mactree!(pattern::_row "push" own(&line_mac).await ;).to_gen().await,
|
||||
]))
|
||||
.await?;
|
||||
rule_atoms.push((matcher, body));
|
||||
}
|
||||
let base_case = lambda(0, [bot(mk_errv(
|
||||
is("No branches match").await,
|
||||
"None of the patterns matches this value",
|
||||
[rules.pos()],
|
||||
))]);
|
||||
let match_expr = stream::iter(rule_atoms.into_iter().rev())
|
||||
.fold(base_case, async |tail, (mat, body)| {
|
||||
lambda(0, [call(sym_ref(sym!(pattern::match_one)), [
|
||||
mat.to_gen().await,
|
||||
arg(0),
|
||||
body.to_gen().await,
|
||||
call(tail, [arg(0)]),
|
||||
])])
|
||||
})
|
||||
.await;
|
||||
Ok(call(match_expr, [resolve(value).await]))
|
||||
})
|
||||
.await
|
||||
},
|
||||
])
|
||||
.rule(mactreev!(pattern::match_rule (( "...$" pattern 0 ))), [async |[pattern]| {
|
||||
resolve(mactree!(pattern::match_rule "push" pattern; )).await
|
||||
}])
|
||||
.rule(mactreev!(pattern::match_rule ( macros::common::_ )), [async |[]| {
|
||||
Ok(MatcherAtom {
|
||||
keys: Vec::new(),
|
||||
matcher: lambda(0, [OrcOpt(Some(Tpl(()))).to_gen().await]).create().await,
|
||||
})
|
||||
}])
|
||||
.rule(mactreev!(pattern::_row ( "...$" pattern 0 pattern::=> "...$" value 1 )), [
|
||||
async |[pattern, mut value]| {
|
||||
exec(async move |mut h| -> OrcRes<Tpl<(TAtom<MatcherAtom>, GExpr)>> {
|
||||
let Ok(pat) = h
|
||||
.exec::<TAtom<MatcherAtom>>(call(sym_ref(sym!(macros::resolve)), [
|
||||
mactree!(pattern::match_rule "push" pattern.clone();).to_gen().await,
|
||||
]))
|
||||
.await
|
||||
else {
|
||||
return Err(mk_errv(
|
||||
is("Invalid pattern").await,
|
||||
format!("Could not parse {} as a match pattern", fmt(&pattern).await),
|
||||
[pattern.pos()],
|
||||
));
|
||||
};
|
||||
value = (pat.keys())
|
||||
.fold(value, async |value, name| mactree!("l_" name; ( "push" value ; )))
|
||||
.await;
|
||||
Ok(Tpl((pat, resolve(value).await)))
|
||||
})
|
||||
.await
|
||||
},
|
||||
])
|
||||
.finish(),
|
||||
fun(true, "ref_body", async |val| OrcOpt(Some(UntypedTuple(vec![val])))),
|
||||
build_macro(None, ["ref"])
|
||||
.rule(mactreev!(pattern::match_rule(pattern::ref "$" name)), [async |[name]| {
|
||||
let MacTok::Name(name) = name.tok() else {
|
||||
return Err(mk_errv(
|
||||
is("pattern 'ref' requires a name to bind to").await,
|
||||
format!(
|
||||
"'ref' was interpreted as a binding matcher, \
|
||||
but it was followed by {} instead of a name",
|
||||
fmt(&name).await
|
||||
),
|
||||
[name.pos()],
|
||||
));
|
||||
};
|
||||
Ok(MatcherAtom {
|
||||
keys: vec![name.clone()],
|
||||
matcher: sym_ref(sym!(pattern::ref_body)).to_expr().await,
|
||||
})
|
||||
}])
|
||||
.finish(),
|
||||
])
|
||||
}
|
||||
@@ -3,10 +3,14 @@ mod let_line;
|
||||
mod macro_lib;
|
||||
mod macro_line;
|
||||
pub mod macro_system;
|
||||
mod macro_value;
|
||||
pub mod mactree;
|
||||
mod mactree_lexer;
|
||||
pub mod recur_state;
|
||||
pub mod match_macros;
|
||||
mod ph_lexer;
|
||||
mod resolve;
|
||||
mod rule;
|
||||
pub mod std_macros;
|
||||
mod utils;
|
||||
|
||||
use mactree::{MacTok, MacTree};
|
||||
|
||||