The bus firewall framework aims to provide a kernel API to set the configuration
of the hardware blocks in charge of bus access control.
The framework architecture is inspired by the pinctrl framework:
- a default configuration can be applied before binding the driver.
If the configuration cannot be applied, the driver is not bound,
to avoid performing accesses to prohibited regions.
- configurations can be applied dynamically by drivers.
- the device node provides the bus firewall configurations.
An example of a bus firewall controller is the STM32 ETZPC hardware block,
which has 3 possible configurations:
- trust: hardware blocks are only accessible by software running in the
trust zone (i.e. OP-TEE firmware).
- non-secure: hardware blocks are accessible by non-secure software (i.e.
the Linux kernel).
- coprocessor: hardware blocks are only accessible by the coprocessor.
Up to 94 hardware blocks of the SoC can be managed by ETZPC.
At least two other hardware blocks could benefit from this framework:
- ARM TZC-400: http://infocenter.arm.com/help/topic/com.arm.doc.100325_0001_02_en/arm_core…
which is able to manage up to 8 regions in the address space.
- i.MX Resource Domain Controller (RDC): supports four domains and up to
eight regions.
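As an illustration only, the device-tree usage could look roughly like the
sketch below. The node names, addresses, the "firewalls" property and the
macros are assumptions made for this sketch; the authoritative definitions
are the YAML bindings and the dt-bindings header added by this series:

	/* hypothetical provider node for the STM32 ETZPC */
	etzpc: firewall@5c007000 {
		compatible = "st,stm32-etzpc";
		reg = <0x5c007000 0x400>;
		#firewall-cells = <1>;
	};

	/* hypothetical consumer: the configuration referenced here would
	 * be applied before binding the driver
	 */
	usart2: serial@4000e000 {
		compatible = "st,stm32h7-uart";
		reg = <0x4000e000 0x400>;
		firewalls = <&etzpc STM32MP1_ETZPC_USART2>;
	};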
Version 2 has been rebased on top of v5.5:
- Change the framework name to "firewall" because the targeted hardware
blocks act like firewalls on the buses.
- Mark Brown had reviewed the previous version, but that was on kernel 5.1
and the framework name has changed since, so I have decided to drop his
Reviewed-by tag.
- Use YAML files to describe the bindings.
Benjamin
Benjamin Gaignard (7):
dt-bindings: bus: Add firewall bindings
bus: Introduce firewall controller framework
base: Add calls to firewall controller
dt-bindings: bus: Add STM32 ETZPC firewall controller
bus: firewall: Add driver for STM32 ETZPC controller
ARM: dts: stm32: Add firewall node for stm32mp157 SoC
ARM: dts: stm32: enable firewall controller node on stm32mp157c-ed1
.../bindings/bus/firewall/firewall-consumer.yaml | 25 ++
.../bindings/bus/firewall/firewall-provider.yaml | 18 ++
.../bindings/bus/firewall/st,stm32-etzpc.yaml | 41 ++++
arch/arm/boot/dts/stm32mp157c-ev1.dts | 2 +
arch/arm/boot/dts/stm32mp157c.dtsi | 7 +
drivers/base/dd.c | 9 +
drivers/bus/Kconfig | 2 +
drivers/bus/Makefile | 2 +
drivers/bus/firewall/Kconfig | 14 ++
drivers/bus/firewall/Makefile | 2 +
drivers/bus/firewall/firewall.c | 264 +++++++++++++++++++++
drivers/bus/firewall/stm32-etzpc.c | 140 +++++++++++
include/dt-bindings/bus/firewall/stm32-etzpc.h | 90 +++++++
include/linux/firewall.h | 70 ++++++
14 files changed, 686 insertions(+)
create mode 100644 Documentation/devicetree/bindings/bus/firewall/firewall-consumer.yaml
create mode 100644 Documentation/devicetree/bindings/bus/firewall/firewall-provider.yaml
create mode 100644 Documentation/devicetree/bindings/bus/firewall/st,stm32-etzpc.yaml
create mode 100644 drivers/bus/firewall/Kconfig
create mode 100644 drivers/bus/firewall/Makefile
create mode 100644 drivers/bus/firewall/firewall.c
create mode 100644 drivers/bus/firewall/stm32-etzpc.c
create mode 100644 include/dt-bindings/bus/firewall/stm32-etzpc.h
create mode 100644 include/linux/firewall.h
--
2.15.0
Hi all,
Please send your agenda items for the upcoming call on Thursday.
Thanks & regards,
Nathalie
-----Original Appointment-----
From: Nathalie Chan King Choy
Sent: Monday, January 27, 2020 3:37 PM
To: Nathalie Chan King Choy; system-dt(a)lists.openampproject.org
Cc: nathalie-ckc(a)kestrel-omnitech.com; Kepa, Krzysztof (GE Global Research); Don Harbin; Bruce Ashfield; Wesley Skeffington; ilias.apalodimas(a)linaro.org; Milea, Danut Gabriel (Danut); Joakim Bech; Pierre Guironnet de Massas; Markham, Joel (GE Global Research, US); Tony McDowell; robherring2(a)gmail.com; Ed T. Mooring; Loic PALLARDY; Grant Likely
Subject: System Device Tree call
When: Thursday, February 13, 2020 10:00 AM-11:00 AM (UTC-08:00) Pacific Time (US & Canada).
Where: Zoom
Hi all,
Please join the call on Zoom: https://zoom.us/my/openampproject
(If you need the meeting ID, it's 9031895760)
The notes from the previous call (Jan 22) can be found on the OpenAMP wiki at this link:
https://github.com/OpenAMP/open-amp/wiki/System-DT-Meeting-Notes-2020#2020J…
Action items from the previous call:
* Stefano: Document a little bit more how the model works
** Remove reserved memory
** Add top-level use-case FAQ (e.g. how to do peripheral assignment to SW)
** Consider putting a qualifier word before "domain" to make it more specific
* Everyone: Try to poke holes in the model. Good to have hard questions to think through & answer
* Rob: Prototype proposal of changing root
* Nathalie: co-ordinate next call over email (2 weeks from now doesn't work b/c Rob can't make it)
For info about the list, link to the archives, to unsubscribe yourself, or
for someone to subscribe themselves, visit:
https://lists.openampproject.org/mailman/listinfo/system-dt
For information about the System Device Trees effort, including a link to
the intro presentation from Linaro Connect SAN19:
https://github.com/OpenAMP/open-amp/wiki/System-Device-Trees
Best regards,
Nathalie C. Chan King Choy
Project Manager focused on Open Source and Community
On Tue, 11 Feb 2020 at 02:03, Stefano Stabellini via System-dt
<system-dt(a)lists.openampproject.org> wrote:
>
> [...]
>
> domains {
> openamp_r5 {
> compatible = "openamp,domain-v1";
> memory = <0x0 0x0 0x0 0x8000000 0x8 0x0 0x0 0x10000>;
> };
> openamp_a72 {
> compatible = "openamp,domain-v1";
> memory = <0x0 0x8000000 0x0 0x80000000 0x8 0x0 0x0 0x10000>;
> };
> };
>
Do those domains also define a "reset domain"? I mean, you may want
to selectively reset a domain (within a particular clock domain).
Or should reset domains be defined first, and then domains be defined on
subsets of those?
>
> [...]
--
François-Frédéric Ozog | Director Linaro Edge & Fog Computing Group
T: +33.67221.6485
francois.ozog(a)linaro.org | Skype: ffozog
Hi all,
During the last system device tree call, we agreed it would be helpful
to have a document explaining how to use the proposed new device tree
bindings to describe heterogeneous systems and solve typical problems,
such as memory reservations for multiple domains, multiple interrupt
controllers, etc.
Tomas and I wrote the following document (also attached: feel free to
use pandoc to convert it into html if you prefer to read it that way).
It includes an introduction to system device tree, a short description
of the bindings, and how to use them to solve common problems. I hope
that it will help create a common understanding of the problems we are
trying to solve and the potential solutions. I also attached the full
system device tree example as reference.
Cheers,
Stefano
System Device Tree Concepts
===========================
System Device Trees extend traditional Device Trees to handle
heterogeneous SoCs with multiple CPUs and Execution Domains. An
Execution Domain can be seen as an address space that is running a
software image, whether an operating system, a hypervisor or firmware,
and that has a set of cpus, memory and devices attached to it. I.e. each
individual CPU/core that is not part of an SMP cluster is a separate
Execution Domain, as are the different Exception Levels on an ARMv8-A
architecture. Trusted and non-trusted environments can also be viewed as
separate Execution Domains.
A design goal of System Device Trees is that no current client of Device
Trees should have to change at all, unless it wants to take advantage of
the extra information. This means that Linux in particular does not need
to change since it will see a Device Tree that it can handle with the
current implementation, potentially with some extra information it can
ignore.
System Device Trees must handle two types of heterogeneous additions:
1. Being able to specify different cpu clusters and the actual memory
and devices hard-wired to them
- This is done through the new Hardware Descriptions, such as
"cpu,cluster" and "indirect-bus"
- This information is provided by the SoC vendor and is typically
fixed for a given SoC/board
2. Being able to assign hardware resources that can be configured by
software to be used by one or more Execution Domains
- This is done through the Execution Domain configuration
- This information is provided by a System Architect and will be
different for different use cases, even for the same board
- E.g. How much memory and which devices go to Linux vs. an
RTOS can be different from one boot to another
- This information should be separated from the hard-wired
information for two reasons
- A different persona will add and edit the information
- Configuration should be separated from specification since it
has a different rate of change
The System Device Trees and Execution Domain information are used in two
major use cases:
1. Exclusively on the host by using a tool like Lopper that will "prune"
the System Device Tree
- Each domain will get its own "traditional" Device Tree that only
sees one address space and has one "cpus" node, etc.
- Lopper has pluggable backends so it can also generate information
for clients that use a different format
- E.g. It can generate a bunch of "#defines" that can be
included and compiled in to an RTOS
2. System Device Trees can be used by a "master" target environment that
manages multiple Execution Domains:
- a firmware that can set up hardware protection and use it to
restart individual domains
- E.g. Protect the Linux memory so the R5 OS can't reach it
- any other operating system or hypervisor that has sub-domains
- E.g. Xen can use the Execution Domains to get info about the Xen
guests (also called domains)
- E.g. Linux could use the default domain for its own
configuration and the domains to manage other CPUs
- Since System Device Trees are backwards compatible with Device
Trees, the only changes needed in Linux would be any new code
taking advantage of the Domain information
- a default master has access to all resources (CPUs, memories,
devices); it has to make sure it stops using the resource
itself when it "gives it away" to a sub-domain
There is a concept of a default Execution Domain in System Device Trees,
which corresponds to /cpus. The default domain is compatible with the
current traditional Device Tree. It is useful for a couple of reasons:
1. As a way to specify the default place to assign added hardware (see
use case #1)
- A default domain does not have to list all the HW resources
allocated to it. It gets everything not allocated elsewhere by
Lopper.
- This minimizes the amount of information needed in the Domain
configuration.
- This is also useful for dynamic hardware such as add-on boards and
FPGA images that are adding new devices.
2. The default domain can be used to specify what a master environment
sees (see use case #2)
- E.g. the default domain is what is configuring Linux or Xen, while
the other domains specify domains to be managed by the master
System Device Tree Hardware Description
=======================================
To turn system device tree into a reality we are introducing a few new
concepts. They enable us to describe a system with multiple cpus
clusters and potentially different address mappings for each of them
(i.e. a device could be seen at different addresses from different cpus
clusters).
The new concepts are:
- Multiple top level "cpus,cluster" nodes to describe heterogeneous CPU
clusters.
- "indirect-bus": a new type of bus that does not automatically map to
the parent address space (i.e. not automatically visible).
- An "address-map" property to express the different address mappings of
the different cpus clusters and to map indirect-buses.
The following is a brief example to show how they can be used together:
/* default cluster */
cpus {
cpu@0 {
};
cpu@1 {
};
};
/* additional R5 cluster */
cpus_r5: cpus-cluster@0 {
compatible = "cpus,cluster";
/* specifies address mappings */
address-map = <0xf9000000 &amba_rpu 0xf9000000 0x10000>;
cpu@0 {
};
cpu@1 {
};
};
amba_rpu: indirect-bus@f9000000 {
compatible = "indirect-bus";
};
In this example we can see:
- two cpus clusters, one of them is the default top-level cpus node
- an indirect-bus "amba_rpu" which is not visible to the top-level cpus
node
- the cpus_r5 cluster can see amba_rpu because it is explicitly mapped
using the address-map property
Devices only physically accessible from one of the two clusters should
be placed under an indirect-bus as appropriate. For instance, in the
following example we can see how interrupt controllers are expressed:
/* default cluster */
cpus {
};
/* additional R5 cluster */
cpus_r5: cpus-cluster@0 {
compatible = "cpus,cluster";
/* specifies address mappings */
address-map = <0xf9000000 &amba_rpu 0xf9000000 0x10000>;
};
/* bus only accessible by cpus */
amba_apu: bus@f9000000 {
compatible = "simple-bus";
gic_a72: interrupt-controller@f9000000 {
};
};
/* bus only accessible by cpus_r5 */
amba_rpu: indirect-bus@f9000000 {
compatible = "indirect-bus";
gic_r5: interrupt-controller@f9000000 {
};
};
gic_a72 is accessible by /cpus, but not by cpus_r5, because amba_apu is
not present in the address-map of cpus_r5.
gic_r5 is visible to cpus_r5, because it is present in the address map
of cpus_r5. gic_r5 is not visible to /cpus because indirect-bus doesn't
automatically map to the parent address space, and /cpus doesn't have an
address-map property in the example.
Relying on the fact that each interrupt controller is correctly visible
to the right cpus cluster, it is possible to express interrupt routing
from a device to multiple clusters. For instance:
amba: bus@f1000000 {
compatible = "simple-bus";
ranges;
#interrupt-cells = <3>;
interrupt-map-pass-thru = <0xffffffff 0xffffffff 0xffffffff>;
interrupt-map-mask = <0x0 0x0 0x0>;
interrupt-map = <0x0 0x0 0x0 &gic_a72 0x0 0x0 0x0>,
<0x0 0x0 0x0 &gic_r5 0x0 0x0 0x0>;
can@ff060000 {
compatible = "xlnx,canfd-2.0";
reg = <0x0 0xff060000 0x0 0x6000>;
interrupts = <0x0 0x14 0x1>;
};
};
In this example, all devices under amba, including can@ff060000, have
their interrupts routed to both gic_r5 and gic_a72.
Memory only physically accessible by one of the clusters can be placed
under an indirect-bus like any other device type. However, normal
memory is usually physically accessible by all clusters. It is just a
software configuration that splits memory into ranges and assigns a
range for each execution domain. Software configurations are explained
below.
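As a minimal sketch (not part of the reference example; the address and
size below are made up for illustration), cluster-private memory could be
described like this:

	/* memory only reachable by cpus_r5, hence placed under the indirect bus */
	amba_rpu: indirect-bus@f9000000 {
		compatible = "indirect-bus";

		memory@ffe00000 {
			device_type = "memory";
			reg = <0xffe00000 0x40000>;
		};
	};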
Execution Domains
=================
An execution domain is a collection of software, firmware, and board
configurations that enable an operating system or an application to run
on a cpus cluster. With multiple cpus clusters in a system it is natural to
have multiple execution domains, at least one per cpus cluster. There
can be more than one execution domain for each cluster, with
virtualization or non-lockstep execution (for cpus clusters that support
it). Execution domains are configured and added at a later stage by a
software architect.
Execution domains are expressed by new nodes with an "openamp,domain"
compatible string. Being a configuration rather than a description, their
natural place is under /chosen or under a similar new top-level node. In
this example, I used /domains:
domains {
openamp_r5 {
compatible = "openamp,domain-v1";
cpus = <&cpus_r5 0x2 0x80000000>;
memory = <0x0 0x0 0x0 0x8000000>;
access = <&can@ff060000>;
};
};
An openamp,domain node contains information about:
- cpus: the physical cpus on which the software is running
- memory: memory assigned to the domain
- access: any devices configured to be only accessible by a domain
The access list is an array of links to devices that are configured to
be only accessible by an execution domain, using bus firewalls or
similar technologies.
The memory range assigned to an execution domain is expressed by the
memory property. It needs to be a subset of the physical memory in the
system. The memory property can also be used to express memory sharing
between domains:
domains {
openamp_r5 {
compatible = "openamp,domain-v1";
memory = <0x0 0x0 0x0 0x8000000 0x8 0x0 0x0 0x10000>;
};
openamp_a72 {
compatible = "openamp,domain-v1";
memory = <0x0 0x8000000 0x0 0x80000000 0x8 0x0 0x0 0x10000>;
};
};
In this example, a 16-page range starting at 0x800000000 is shared
between two domains.
In a system device tree without a default cpus cluster (no top-level
cpus node), Lopper figures out memory assignment for each domain by
looking at the memory property under each "openamp,domain" node. In a
device tree with a top-level cpus cluster, and potentially a legacy OS
running on it, we might want to "hide" the memory reservation for other
clusters from /cpus. We can do that with /reserved-memory:
reserved-memory {
#address-cells = <0x2>;
#size-cells = <0x2>;
ranges;
memory_r5@0 {
compatible = "openamp,domain-memory-v1";
reg = <0x0 0x0 0x0 0x8000000>;
};
};
The purpose of memory_r5@0 is to let the default execution domain know
that it shouldn't use the 0x0-0x8000000 memory range because it is
reserved for use by other domains.
/reserved-memory and /chosen are top-level nodes dedicated to
configurations, rather than hardware description. Each execution domain
might need similar configurations; hence, chosen and reserved-memory are
also specified under each openamp,domain node for domain-specific
configurations. The top-level /reserved-memory and /chosen nodes remain in
place for the default execution domain. As an example:
/chosen -> configuration for a legacy OS running on /cpus
/reserved-memory -> reserved memory for a legacy OS running on /cpus
/domains/openamp_r5/chosen -> configuration for the domain "openamp_r5"
/domains/openamp_r5/reserved-memory -> reserved memory for "openamp_r5"
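Putting the above together, a domain-level configuration could look roughly
like the following sketch. The cpus and memory values are taken from the
earlier example; the chosen and reserved-memory contents are illustrative
placeholders only:

	domains {
		openamp_r5 {
			compatible = "openamp,domain-v1";
			cpus = <&cpus_r5 0x2 0x80000000>;
			memory = <0x0 0x0 0x0 0x8000000>;

			/* domain-specific counterparts of the top-level
			 * /chosen and /reserved-memory nodes
			 */
			chosen {
				bootargs = "console=uart0";	/* placeholder */
			};

			reserved-memory {
				#address-cells = <0x2>;
				#size-cells = <0x2>;
				ranges;

				shm@8000000 {
					reg = <0x0 0x8000000 0x0 0x100000>;	/* placeholder */
				};
			};
		};
	};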
Hi all!
[ tl;dr - we're trying to organise a sprint - see the end ]
Due to the Linux on Arm meetup this week, quite a few of our DT
community happened to be in Cambridge together last night. We took the
opportunity to get together for food and beer, and we naturally ended
up mulling over a whole range of things, not least the ongoing System
DT design work.
As is the nature of this kind of gathering, we had a whole range of
opinions and ideas, much more than I can faithfully recall to share
here in great detail. Grant has been keen to try and keep system-dt
compatible with Linux DT usage if that's possible, and Stefano has
been working with that in mind. Rob is concerned about how things
might scale with lots of domains, for example. Olof is keen to talk
about a higher-level design to make it easier to express complex
layouts. And so on.
There is one thing that *did* stand out: we agreed that it's probably
time to try and get all of the interested people in a room together
for a few days with a whiteboard, to really thrash out what's needed
and how to achieve it. With the right people, we can go through all
our needs, pick up the ideas we have and hopefully get some prototypes
ready to evaluate.
So, who's interested? I believe all the people in CC here are likely
keen to be involved. Anybody else?
Although we have Linaro Connect in Budapest at the end of March,
that's really *not* a good place to try and meet up, for a couple of
reasons: not everybody will be there, and we'll have too many
distractions to be able to focus on this. Let's *not* do that.
AFAICS we're geographically roughly split between US/Canada and
Europe, so I'm thinking a week in either the US or Europe, some time
in the next 2-3 months:
* Would a week before or after Connect work, in Europe (16th-20th
March, or 30th March to 3rd April)? I can look for options in the
UK easily, with maybe either Arm or Linaro hosting.
* Alternatively, a week's meeting in the US, deliberately avoiding
the Connect week so we don't wipe people out with travel: Maybe 2-6
or 9-13 March? Or push back into April if that's too short notice,
of course. Would Xilinx be able to host something for us, maybe? Or
another possibility might be the Linaro office in Nashua?
I'm open to other suggestions for time and venue. Let's try to make
something work here?
Cheers,
Steve
--
Steve McIntyre steve.mcintyre(a)linaro.org
<http://www.linaro.org/> Linaro.org | Open source software for ARM SoCs
The week after Connect would work too (the week before Connect is Netconf in Canada).
On Mon, 10 Feb 2020 at 15:29, Steve McIntyre via System-dt <
system-dt(a)lists.openampproject.org> wrote:
> On Mon, Feb 10, 2020 at 11:17:35AM +0000, Grant Likely wrote:
> >On 06/02/2020 15:59, Steve McIntyre wrote:
> >> Hi all!
> >>
> >> [ tl;dr - we're trying to organise a sprint - see the end [...]
> >> > AFAICS we're geographically roughly split between US/Canada and
> >> Europe, so I'm thinking a week in either the US or Europe, some time
> >> in the next 2-3 months:
> >>
> >> * Would a week before or after Connect work, in Europe (16th-20th
> >> March, or 30th March to 3rd April)? I can look for options in the
> >> UK easily, with maybe either Arm or Linaro hosting.
> >
> >The week before isn't great, but the week after would work. I would want
> >to travel home over the weekend.
> >
> >> * Alternatively, a week's meeting in the US, deliberately avoiding
> >> the Connect week so we don't wipe people out with travel: Maybe 2-6
> >> or 9-13 March? Or push back into April if that's too short notice,
> >> of course. Would Xilinx be able to host something for us, maybe? Or
> >> another possibility might be the Linaro office in Nashua?
> >
> >The 2nd-3rd could work for me. I may already be traveling to California
> >for this week to attend OCP Summit (4th-5th March) and the LF Member
> >Meeting 10th-12th.
>
> Thanks Grant!
>
> Any more dates please? Feel free to just mail me privately rather than
> to the lists - I'm filling in the data provided into a spreadsheet to
> help with planning.
>
> Cheers,
> --
> Steve McIntyre steve.mcintyre(a)linaro.org
> <http://www.linaro.org/> Linaro.org | Open source software for ARM SoCs
>
> --
> System-dt mailing list
> System-dt(a)lists.openampproject.org
> https://lists.openampproject.org/mailman/listinfo/system-dt
>
--
François-Frédéric Ozog | *Director Linaro Edge & Fog Computing Group*
T: +33.67221.6485
francois.ozog(a)linaro.org | Skype: ffozog
Hi Steve,
A sprint sounds like a good idea. Xilinx would be very happy to host it
here in San Jose; either March 2-6 (preferred) or March 9-13 would work.
Otherwise, it would be challenging for Tomas and me to join a meeting in
Europe outside of Linaro Connect. I know you suggested to rule out
Linaro Connect, but I think we could make it work as a focused colocated
event in Budapest the same week. Could that be an option?
Cheers,
Stefano
On Fri, 7 Feb 2020, Steve McIntyre wrote:
> Hey Loic,
>
> Cool, that's very helpful thanks! :-)
>
> I'm building a spreadsheet of possible dates now as people share their
> availability. </hint>
>
> Cheers,
>
> Steve
>
> On Fri, Feb 07, 2020 at 03:48:45PM +0000, Loic PALLARDY wrote:
> >Hi Steve,
> >
> >Sure, ST is interested in participating in a sprint on the System Device Tree definition.
> >I can propose the ST Le Mans office to host this event in Europe (direct train access from CDG airport, only a 5-minute walk from the train station).
> >
> >Regards,
> >Loic
> >
> >> -----Original Message-----
> >> From: dte-interest(a)linaro.org <dte-interest(a)linaro.org> On Behalf Of
> >> Nathalie Chan King Choy
> >> Sent: jeudi 6 février 2020 18:48
> >> To: Francois Ozog <francois.ozog(a)linaro.org>; Steve McIntyre
> >> <steve.mcintyre(a)linaro.org>
> >> Cc: Stefano Stabellini <stefanos(a)xilinx.com>; dte-all(a)linaro.org; Bruce
> >> Ashfield <brucea(a)xilinx.com>; devicetree-spec(a)vger.kernel.org; Rob
> >> Herring <Rob.Herring(a)arm.com>; Mark Brown <mark.brown(a)arm.com>;
> >> Benjamin Gaignard <benjamin.gaignard(a)linaro.org>; Olof Johansson
> >> <olof(a)lixom.net>; Arnd Bergmann <arnd(a)linaro.org>
> >> Subject: RE: [System-dt] System DT - thinking a sprint would help
> >>
> >> Hi Steve,
> >>
> >> Additional folks who spoke during the last System DT call & not shown on the
> >> CC list were:
> >> Loic from ST
> >> Etsam & Dan from MGC
> >> Tomas from Xilinx
> >>
> >> @Loic, Etsam, Dan, Tomas: Are you guys interested?
> >>
> >> Thanks & regards,
> >> Nathalie
> >>
> >> > -----Original Message-----
> >> > From: System-dt <system-dt-bounces(a)lists.openampproject.org> On
> >> Behalf
> >> > Of Francois Ozog via System-dt
> >> > Sent: Thursday, February 6, 2020 9:05 AM
> >> > To: Steve McIntyre <steve.mcintyre(a)linaro.org>
> >> > Cc: Stefano Stabellini <stefanos(a)xilinx.com>; dte-all(a)linaro.org; Bruce
> >> > Ashfield <brucea(a)xilinx.com>; devicetree-spec(a)vger.kernel.org; Rob
> >> > Herring <Rob.Herring(a)arm.com>; Mark Brown <mark.brown(a)arm.com>;
> >> > Benjamin Gaignard <benjamin.gaignard(a)linaro.org>; Olof Johansson
> >> > <olof(a)lixom.net>; system-dt(a)lists.openampproject.org; Arnd Bergmann
> >> > <arnd(a)linaro.org>
> >> > Subject: Re: [System-dt] System DT - thinking a sprint would help
> >> >
> >> > EXTERNAL EMAIL
> >> >
> >> > count me in.
> >> >
> >> >
> >> > On Thu, 6 Feb 2020 at 16:59, Steve McIntyre <steve.mcintyre(a)linaro.org>
> >> > wrote:
> >> > >
> >> > > [...]