The bus firewall framework aims to provide a kernel API to set the configuration
of the hardware blocks in charge of bus access control.
The framework architecture is inspired by the pinctrl framework:
- a default configuration can be applied before binding the driver.
If a configuration cannot be applied, the driver is not bound,
to avoid doing accesses on prohibited regions.
- configurations can be applied dynamically by drivers.
- the device node provides the bus firewall configurations.
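For illustration, a consumer device node could reference its firewall
configuration along these lines (the property name and cells below are
placeholders, not the actual binding; see the firewall-consumer.yaml patch
for the real format):

        usart1: serial@40000000 {
                compatible = "st,stm32-usart";
                reg = <0x40000000 0x400>;
                /* placeholder property: ask the firewall controller to
                 * apply this configuration before the driver is bound */
                firewalls = <&etzpc 3>;
        };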
An example of a bus firewall controller is the STM32 ETZPC hardware block,
which has 3 possible configurations:
- trust: hardware blocks are only accessible by software running in the trusted
zone (i.e. OP-TEE firmware).
- non-secure: hardware blocks are accessible by non-secure software (i.e. the
Linux kernel).
- coprocessor: hardware blocks are only accessible by the coprocessor.
Up to 94 hardware blocks of the SoC can be managed by ETZPC.
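A provider node for ETZPC could then look roughly like this (a sketch only;
the real binding is defined in the st,stm32-etzpc.yaml patch, and the base
address here is illustrative):

        #include <dt-bindings/bus/firewall/stm32-etzpc.h>

        etzpc: firewall@5c007000 {
                compatible = "st,stm32-etzpc";
                reg = <0x5c007000 0x400>;	/* address illustrative */
        };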
At least two other hardware blocks could benefit from this framework:
- ARM TZC-400: http://infocenter.arm.com/help/topic/com.arm.doc.100325_0001_02_en/arm_core…
which is able to manage up to 8 regions in the address space.
- i.MX Resource Domain Controller (RDC): supports four domains and up to eight regions.
Version 2 has been rebased on top of v5.5:
- Changed the framework name to "firewall" because the targeted hardware blocks
act like firewalls on the buses.
- Mark Brown had reviewed the previous version, but that was on kernel 5.1 and
the framework name has changed since, so I have decided to drop his tag.
- Use YAML files to describe the bindings.
Benjamin
Benjamin Gaignard (7):
dt-bindings: bus: Add firewall bindings
bus: Introduce firewall controller framework
base: Add calls to firewall controller
dt-bindings: bus: Add STM32 ETZPC firewall controller
bus: firewall: Add driver for STM32 ETZPC controller
ARM: dts: stm32: Add firewall node for stm32mp157 SoC
ARM: dts: stm32: enable firewall controller node on stm32mp157c-ed1
.../bindings/bus/firewall/firewall-consumer.yaml | 25 ++
.../bindings/bus/firewall/firewall-provider.yaml | 18 ++
.../bindings/bus/firewall/st,stm32-etzpc.yaml | 41 ++++
arch/arm/boot/dts/stm32mp157c-ev1.dts | 2 +
arch/arm/boot/dts/stm32mp157c.dtsi | 7 +
drivers/base/dd.c | 9 +
drivers/bus/Kconfig | 2 +
drivers/bus/Makefile | 2 +
drivers/bus/firewall/Kconfig | 14 ++
drivers/bus/firewall/Makefile | 2 +
drivers/bus/firewall/firewall.c | 264 +++++++++++++++++++++
drivers/bus/firewall/stm32-etzpc.c | 140 +++++++++++
include/dt-bindings/bus/firewall/stm32-etzpc.h | 90 +++++++
include/linux/firewall.h | 70 ++++++
14 files changed, 686 insertions(+)
create mode 100644 Documentation/devicetree/bindings/bus/firewall/firewall-consumer.yaml
create mode 100644 Documentation/devicetree/bindings/bus/firewall/firewall-provider.yaml
create mode 100644 Documentation/devicetree/bindings/bus/firewall/st,stm32-etzpc.yaml
create mode 100644 drivers/bus/firewall/Kconfig
create mode 100644 drivers/bus/firewall/Makefile
create mode 100644 drivers/bus/firewall/firewall.c
create mode 100644 drivers/bus/firewall/stm32-etzpc.c
create mode 100644 include/dt-bindings/bus/firewall/stm32-etzpc.h
create mode 100644 include/linux/firewall.h
--
2.15.0
Hi all,
Please join the call on Zoom: https://zoom.us/my/openampproject
(If you need the meeting ID, it's 9031895760)
The notes from the previous call (Jan 22) can be found on the OpenAMP wiki at this link:
https://github.com/OpenAMP/open-amp/wiki/System-DT-Meeting-Notes-2020#2020J…
Action items from the previous call:
* Stefano: Document a little bit more how the model works
** Remove reserved memory
** Add top-level use-case FAQ (e.g. how to do peripheral assignment to SW)
** Consider putting a qualifier word before "domain" to make it more specific
* Everyone: Try to poke holes in the model. Good to have hard questions to think through & answer
* Rob: Prototype proposal of changing root
* Nathalie: co-ordinate next call over email (2 weeks from now doesn't work b/c Rob can't make it)
For info about the list, link to the archives, to unsubscribe yourself, or
for someone to subscribe themselves, visit:
https://lists.openampproject.org/mailman/listinfo/system-dt
For information about the System Device Trees effort, including a link to
the intro presentation from Linaro Connect SAN19:
https://github.com/OpenAMP/open-amp/wiki/System-Device-Trees
Best regards,
Nathalie C. Chan King Choy
Project Manager focused on Open Source and Community
Hi all,
The notes from the Jan 22, 2020 call are posted on the OpenAMP wiki:
https://github.com/OpenAMP/open-amp/wiki/System-DT-Meeting-Notes-2020#2020J…
Action items:
* Stefano: Document a little bit more how the model works
o Remove reserved memory
o Add top-level use-case FAQ (e.g. how to do peripheral assignment to SW)
o Consider putting a qualifier word before "domain" to make it more specific
* Everyone: Try to poke holes in the model. Good to have hard questions to think through & answer
* Rob: Prototype proposal of changing root
* Nathalie: co-ordinate next call over email (2 weeks from now doesn't work b/c Rob can't make it)
Have a great weekend,
Nathalie
On Wed, Jan 22, 2020 at 8:35 AM Driscoll, Dan <dan_driscoll(a)mentor.com> wrote:
>
> Not sure if this will help and I know this is quite verbose, but, from what I see, we are converging on things that make sense to us at Mentor given our focus in this area.
>
> We have been using a device tree based approach to system partitioning for the last 3-4 years and here are some points we have learned:
>
> * To separate what we call the "system definition" (ie resource partitioning) from the hardware description, we have what we called a "System Definition Tree" or SDT file (found it kind of funny that SDT was also chosen for System Device Tree)
> * The SDT is a separate file that uses device tree syntax, but does NOT describe hardware, but rather sub-systems / partitioning using the hardware definition found in the DTS
> * The SDT file #includes the hardware description (ie top-level DTS file) and references nodes from this DT, so this keeps the 2 clearly separated (system definition versus hardware definition)
Do you have any public examples of this? Might be helpful.
Regarding the separation, how do you really separate the config and
h/w desc? The h/w desc already has some amount of configuration in it,
and the tooling has to be aware of what h/w can be configured. Take,
for example, assigning cpus to domains (an openamp domain or
execution context in this usage). You can make this link in either
direction:
domain to cpu:

        domain0: domain-cfg {
                assigned-cpus = <&cpu0>;
        };

cpu to domain:

        &cpu0 {
                assigned-domain = <&domain0>;
        };
There's no difference in complexity to generate either one and both
ways are separate from the h/w description at the source level. The
primary difference is the separation in the final built DT. Does that
matter? If so, then you'd pick the first method. However, we already
have things in h/w description that you may want to configure. For
example, flow control support for a UART which already has a defined
way to configure it (a property in the uart node). So both ways are
probably going to have to be supported.
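For example, with the existing serial bindings that UART configuration is
just a boolean property in the uart node:

        &uart0 {
                /* enable RTS/CTS flow control (standard serial binding) */
                uart-has-rtscts;
        };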
Rob
Sorry - left off the mailing list.
-----Original Message-----
From: Driscoll, Dan
Sent: Wednesday, January 22, 2020 8:32 AM
To: 'Tomas Evensen' <tomase(a)xilinx.com>; Bjorn Andersson <bjorn.andersson(a)linaro.org>; Rob Herring <robh(a)kernel.org>
Cc: Stefano Stabellini <stefanos(a)xilinx.com>; Rob Herring <Rob.Herring(a)arm.com>; Raghuraman, Arvind <Arvind_Raghuraman(a)mentor.com>; Anjum, Etsam <Etsam_Anjum(a)mentor.com>; Humayun, Waqar <Waqar_Humayun(a)mentor.com>
Subject: RE: [System-dt] software domains and top level nodes
Not sure if this will help and I know this is quite verbose, but, from what I see, we are converging on things that make sense to us at Mentor given our focus in this area.
We have been using a device tree based approach to system partitioning for the last 3-4 years and here are some points we have learned:
* To separate what we call the "system definition" (ie resource partitioning) from the hardware description, we have what we called a "System Definition Tree" or SDT file (found it kind of funny that SDT was also chosen for System Device Tree)
* The SDT is a separate file that uses device tree syntax, but does NOT describe hardware, but rather sub-systems / partitioning using the hardware definition found in the DTS
* The SDT file #includes the hardware description (ie top-level DTS file) and references nodes from this DT, so this keeps the 2 clearly separated (system definition versus hardware definition)
* Related to the previous point, we don't use the chosen node to encompass the system definition and have our own bindings for "machine" nodes in the SDT (equivalent to "domain" nodes in current discussion)
* I think putting all of this info in the chosen node doesn't really solve any real problems and just makes things confusing - having new bindings for system definition / domains / etc, that live outside the chosen node are, to me, much cleaner as the chosen node already has uses for different software that use DT
* Each machine (domain) node has similar attributes as discussed in the thread below - memory, cpus, devices, chosen, etc - in addition to some other attributes we use to help determine how our tooling processes these machine nodes.
* For instance, machines have a "role" attribute that indicates if a machine is a "virtual" machine (hypervisor uses), a "remote" machine (OpenAMP remote), a "master" machine (OpenAMP master), etc.
* Obviously there are lots of permutations and combinations that can occur and need to be accommodated such as running a hypervisor on a Cortex-A SMP cluster and ALSO running an OpenAMP master / remote configuration between the Cortex-A cluster (could be a guest OS or could be the hypervisor) and a Cortex-R cluster
* We also have other attributes we use to help package all of the deployable content (ie remote images, guest OS images, etc), but this is outside the scope of these discussions
So, there are 2 problems that need to be solved (as, I think, this group has been considering):
1. Adding necessary hardware description to current device trees so they FULLY describe the hardware (ie heterogeneous SoCs with different subsystems / clusters, device access, interrupt routing, sharing of memory / devices, etc)
2. How to define the partitioning of #1 so a tool can create multiple usable device trees for each software context in a system
I am not enough of an expert in #1 to help extensively here, but for #2, we have been doing this for 3-4 years now and have released commercial products that use device trees for this same purpose so hopefully we can help guide things here.
Our biggest problems right now are that #1 doesn't exist (ie we have been extending existing device trees for SoCs to fully describe them and we are doing this in a way that isn't "clean") and there are a few other areas in our machine definition / bindings that are flimsy as well.
I guess I would like to see us get #1 fully defined before talking too much about #2 as I think the hardware description should stand on its own (ie doesn't depend on any new bindings defined for #2).
Dan
-----Original Message-----
From: System-dt [mailto:system-dt-bounces@lists.openampproject.org] On Behalf Of Tomas Evensen via System-dt
Sent: Tuesday, January 21, 2020 7:43 PM
To: Bjorn Andersson <bjorn.andersson(a)linaro.org>; Rob Herring <robh(a)kernel.org>
Cc: Stefano Stabellini <stefanos(a)xilinx.com>; system-dt(a)lists.openampproject.org; Rob Herring <Rob.Herring(a)arm.com>
Subject: Re: [System-dt] software domains and top level nodes
One of the things we have tried to achieve with System Device Trees is to make sure we separate the HW description from the domain configuration, which is typically done by a different person.
That is, you don't want to have to edit or rewrite the parts that describe the HW in order to describe what memory, devices, and cpus go where.
Take an example where 2 cpus can either be configured to
a) work together and see the same memory/devices (SMP for example), or
b) be separated into two different domains running different OSes with different memory/devices.
So you have either one or two domains for those two cpus.
In this case I don't know that you want the "configurer" to have to go in and rewrite the file to use a different number of domains depending on the situation.
FWIW,
Tomas
On 1/21/20, 3:57 PM, "System-dt on behalf of Bjorn Andersson via System-dt" <system-dt-bounces(a)lists.openampproject.org on behalf of system-dt(a)lists.openampproject.org> wrote:
EXTERNAL EMAIL
On Tue 21 Jan 13:18 PST 2020, Rob Herring via System-dt wrote:
[..]
> To flip all this around, what if domains become the top-level structure:
>
> domain@0 {
>         chosen {};
>         cpus {};
>         memory@0 {};
>         reserved-memory {};
> };
>
> domain@1 {
>         chosen {};
>         cpus {};
>         memory@800000 {};
>         reserved-memory {};
> };
>
I like this suggestion, as this both creates a natural grouping and
could allow for describing domain-specific hardware as subtrees in each
domain.
Regards,
Bjorn
> The content of all the currently top-level nodes don't need to change.
> The OS's would be modified to treat a domain node as the root node
> which shouldn't be very invasive. Then everything else just works as
> is.
>
> This could still have other nodes at the (real) root or links from one
> domain to another. I haven't thought thru that part, but I think this
> structure can only help because it removes the notion that the root
> has a specific cpu view.
>
> Rob
--
System-dt mailing list
System-dt(a)lists.openampproject.org
https://lists.openampproject.org/mailman/listinfo/system-dt
On Fri, Jan 17, 2020 at 5:30 PM Stefano Stabellini via System-dt
<system-dt(a)lists.openampproject.org> wrote:
>
> Hi all,
>
> I would like to follow-up on system device tree and specifically on one
> of the action items from the last call.
>
> Rob raised the interesting question of what is the interaction between
> the new system device tree concepts and the top level nodes (memory,
> reserved-memory, cpus, chosen).
>
> I am going to write here my observations.
Some questions inline, but they're really rhetorical questions for my
response at the end.
>
> As a short summary, the system device tree concepts are:
>
> - Multiple top level "cpus,cluster" nodes to describe heterogeneous CPU
> clusters.
> - A new "indirect-bus" which is a type of bus that does not
> automatically map to the parent address space.
> - An address-map property to express the different address mappings of
> the different cpus clusters and can be used to map indirect-bus nodes.
>
> These new nodes and properties allow us to describe multiple
> heterogeneous cpus clusters with potentially different address mappings,
> which can be expressed using indirect-bus and address-map.
>
> We also have new concepts for software domains configurations:
>
> - Multiple "openamp,domain" nodes (currently proposed under /chosen) to
> specify software configurations and MPU configurations.
> - A new "access" property under each "openamp,domain" node with links to
> nodes accessible from the cpus cluster.
>
> Openamp,domain nodes allow us to define the cpus cluster and set of
> hardware resources that together form a software domain. The access
> property defines the list of resources available to one particular
> cluster and maps well into MPU configurations (sometimes called
> "firewall configurations" during the calls.)
>
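A minimal sketch of such a domain node, with illustrative names and cell
values (the attached example has the real syntax):

        chosen {
                openamp_r5: openamp,domain@0 {
                        compatible = "openamp,domain-v1";
                        /* cpus cluster this domain runs on */
                        cpus = <&cpus_r5 0x2>;
                        /* hardware resources this domain may access */
                        access = <&tcm &ethernet0>;
                };
        };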
> See the attached full example.
>
>
> I am going to go through the major top level nodes and expand on how
> the new concepts affect them.
>
>
> /cpus
> =====
>
> /cpus is the top level node that contains the description of the cpus in
> the system. With system device tree, it is not the only cpus cluster,
> additional cpus clusters can be described by other top level nodes
> compatible with "cpus,cluster". However, /cpus remains the default
> cluster. An OS reading device tree should assume that it is running on
> /cpus. From a compatibility perspective, if an OS doesn't understand or
> recognize the other "cpus,cluster" nodes, it can ignore them, and just
> process /cpus.
>
> Buses compatible with "indirect-bus" do not map automatically to the
> parent address space, which means that /cpus won't be able to access
> them, unless an address-map property is specified under /cpus to express
> the mapping. This is the only new limitation introduced for /cpus.
> Again, from a compatibility perspective an OS that doesn't understand
> the address-map property would just ignore both it and the bus, so
> again, it is an opt-in new functionality.
>
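Sketched in device tree syntax (the cell layout of address-map is
illustrative here; the proposal defines the exact format):

        amba_rpu: indirect-bus@f9000000 {
                compatible = "indirect-bus";
                #address-cells = <1>;
                #size-cells = <1>;
        };

        cpus {
                /* opt-in mapping that makes the indirect bus
                 * addressable from /cpus */
                address-map = <0xf9000000 &amba_rpu 0xf9000000 0x10000>;
        };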
>
> So far in my examples "openamp,domain" nodes refer to "cpus,cluster"
> nodes only, not to /cpus. There is a question on whether we want to
> allow "openamp,domain" nodes to define a software domain running on
> /cpus. We could go either way, but for simplicity I think we can avoid
> it.
>
> "openamp,domain" nodes express accessibility restrictions while /cpus is
> meant to be able to access everything by default. If we want to specify
> hard accessibility settings for all clusters, it is possible to write a
> pure system device tree without /cpus, where all cpus clusters are
> described by "cpus,cluster" nodes and there is no expectation that an OS
> will be able to use it without going through some transformations by
> lopper (or other tools.)
>
>
> /chosen
> =======
>
> The /chosen node is used for software configurations, such as bootargs
> (Linux command line). When multiple "openamp,domains" nodes are present
> the configurations directly under /chosen continue to refer to the
> software running on /cpus, while domain specific configurations need to
> go under each domain node.
>
> As an example:
>
> - /chosen/bootargs refers to the software running on /cpus
> - /chosen/openamp_r5/bootargs refers to the openamp_r5 domain
>
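In device tree syntax this would be roughly (bootargs values illustrative):

        chosen {
                bootargs = "console=ttyS0";		/* software on /cpus */

                openamp_r5: openamp,domain@0 {
                        bootargs = "console=ttyAMA0";	/* openamp_r5 domain */
                };
        };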
>
> /memory
> =======
>
> The /memory node describes the main memory in the system. Like for any
> device node, all cpus clusters can address it.
Not really true. You could have memory regions not accessible by some cpus.
> indirect-bus and
> address-map can be used to express addressing differences.
>
> It might be required to carve out special memory reservations for each
> domain. These configurations are expressed under /reserved-memory as we
> do today for any other reserved regions.
What about a symmetric case where say you have 4 domains and want to
divide main memory into 4 regions?
> /reserved-memory
> ================
>
> /reserved-memory is used to describe particular reserved memory regions
> for special use by software. With system device tree /reserved-memory
> becomes useful to describe domain specific memory reservations too.
> Memory ranges for special use by "openamp,domain" nodes are expressed
> under /reserved-memory following the usual set of rules. Each
> "openamp,domain" node links to any relevant reserved-memory regions using
> the access property. The rest is to be used by /cpus.
>
> For instance:
>
> - /reserved-memory/memory_r5 is linked and used by /chosen/openamp_r5
> - other regions under /reserved-memory, not linked by any
> "openamp,domain" nodes, go to the default /cpus
So the code that parses /reserved-memory has to look up something
elsewhere to determine if each child node applies? That's fairly
invasive to the existing handling of /reserved-memory.
Also, a reserved region could have different addresses for different
CPUs. Basically, /reserved-memory doesn't have an address, but
inherits the root addressing. That makes it a bit of an oddball. We
need to handle both shared and non-shared reserved regions.
Shared-memory for IPC is commonly described here for example.
> We should use a specific compatible string to identify reserved memory
> regions meant for openamp,domain nodes, so that a legacy OS will safely
> ignore them. I added
>
> compatible = "openamp,domain-memory-v1";
That doesn't really scale. If we don't care about legacy OS support,
then every node will have this?
I don't really like the asymmetric structure of all this. While having
a default view for existing OS seems worthwhile, as soon as there's a
more symmetric use case it becomes much more invasive and OS parsing
for all the above has to be adapted. We need to design for 100
domains.
To flip all this around, what if domains become the top-level structure:
domain@0 {
        chosen {};
        cpus {};
        memory@0 {};
        reserved-memory {};
};

domain@1 {
        chosen {};
        cpus {};
        memory@800000 {};
        reserved-memory {};
};
The content of all the currently top-level nodes don't need to change.
The OS's would be modified to treat a domain node as the root node
which shouldn't be very invasive. Then everything else just works as
is.
This could still have other nodes at the (real) root or links from one
domain to another. I haven't thought thru that part, but I think this
structure can only help because it removes the notion that the root
has a specific cpu view.
Rob
On Tue, Jan 21, 2020 at 17:56, Bjorn Andersson via System-dt <
system-dt(a)lists.openampproject.org> wrote:
> On Tue 21 Jan 13:18 PST 2020, Rob Herring via System-dt wrote:
> [..]
> > To flip all this around, what if domains become the top-level structure:
> >
> > domain@0 {
> >         chosen {};
> >         cpus {};
> >         memory@0 {};
> >         reserved-memory {};
> > };
> >
> > domain@1 {
> >         chosen {};
> >         cpus {};
> >         memory@800000 {};
> >         reserved-memory {};
> > };
> >
>
> I like this suggestion, as this both creates a natural grouping and
> could allow for describing domain-specific hardware as subtrees in each
> domain.
>
There seems to be a need for a hierarchy of domains:
- clock domains that allow cpu clusters to be reset without impacting
other clusters.
- memory domains with physically isolated address spaces within a cluster,
or memory with very different access costs when using Gen-Z memory.
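For example (purely illustrative), a nested structure could capture that:

        domain@0 {
                /* cluster-level domain: independent clock/reset */
                domain@0 {
                        /* OS instance A */
                };
                domain@1 {
                        /* OS instance B */
                };
        };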
> Regards,
> Bjorn
>
> > The content of all the currently top-level nodes don't need to change.
> > The OS's would be modified to treat a domain node as the root node
> > which shouldn't be very invasive. Then everything else just works as
> > is.
> >
> > This could still have other nodes at the (real) root or links from one
> > domain to another. I haven't thought thru that part, but I think this
> > structure can only help because it removes the notion that the root
> > has a specific cpu view.
> >
> > Rob
> --
> System-dt mailing list
> System-dt(a)lists.openampproject.org
> https://lists.openampproject.org/mailman/listinfo/system-dt
>
--
François-Frédéric Ozog | *Director Linaro Edge & Fog Computing Group*
T: +33.67221.6485
francois.ozog(a)linaro.org | Skype: ffozog
Hi all,
This Wednesday, January 22nd is the next System Device Tree call. Please send your suggestions for agenda items.
Thanks & regards,
Nathalie
-----Original Appointment-----
From: Nathalie Chan King Choy
Sent: Friday, December 13, 2019 3:39 PM
To: Nathalie Chan King Choy; system-dt(a)lists.openampproject.org
Cc: Raghuraman, Arvind; nathalie-ckc(a)kestrel-omnitech.com; Bruce Ashfield; Ed T. Mooring; Tony McDowell; Varis, Pekka; Milea, Danut Gabriel (Danut); joakim.bech(a)linaro.org; Vincent Chardon; Markham, Joel (GE Global Research, US); robherring2(a)gmail.com; Kepa, Krzysztof (GE Global Research); don.harbin(a)linaro.org; mathieu.poirier(a)linaro.org; Mark Hambleton; ilias.apalodimas(a)linaro.org; Ravikumar Chakaravarthy; Michael May; Sakamoto, Hirokazu; Grant Likely; Petr Lukas; francois.ozog(a)linaro.org; Clément Leger; Loic PALLARDY; Tomas Evensen
Subject: System Device Tree call - January
When: Wednesday, January 22, 2020 9:00 AM-10:00 AM (UTC-08:00) Pacific Time (US & Canada).
Where: Zoom
Hi all,
Please join the call on Zoom: https://zoom.us/my/openampproject
(If you need the meeting ID, it's 9031895760)
The notes from the previous call (Dec 11) can be found on the OpenAMP wiki at this link:
https://github.com/OpenAMP/open-amp/wiki/System-DT-Meeting-Notes-2019#2019D…
Action items from the previous call:
* Rob: Reply to Stefano's latest response on list
* All: Give a look at Stefano's example. Reply on the list about what doesn't work for your use case
* Stefano: Bring access field back into the proposal for discussion sooner than later
* Stefano: Include chosen node proposal in example to list so we have whole view of what we are trying to achieve. Target early 2020. Looking for intent & what it's doing. Can get into syntax later.
* Stefano: Note open question: What about top level memory node & reserved memory b/c that's connected?
* Stefano: Start a thread on CPUs, chosen, memory, reserved memory nodes on list & we can discuss each of them & cases we might have missed
* Bruce: Send out info on how to prune System DT into DTs. Target early 2020.
* Nathalie: Confirm if Rob can make the Jan 22nd 9am PST timeslot
For info about the list, link to the archives, to unsubscribe yourself, or
for someone to subscribe themselves, visit:
https://lists.openampproject.org/mailman/listinfo/system-dt
For information about the System Device Trees effort, including a link to
the intro presentation from Linaro Connect SAN19:
https://github.com/OpenAMP/open-amp/wiki/System-Device-Trees
Best regards,
Nathalie C. Chan King Choy
Project Manager focused on Open Source and Community