Embedded Base Boot Requirements (EBBR) Specification

Copyright © 2017-2024 Arm Limited and Contributors.

Copyright © 2021 Western Digital Corporation or its affiliates.

This work is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License. To view a copy of this license, visit https://creativecommons.org/licenses/by-sa/4.0/ or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.

Table 1 Revision History




20 Sep 2017


  • Confidentiality Change, EBBR version 0.51

12 Jul 2018


  • Relicense to CC-BY-SA 4.0

  • Added Devicetree requirements

  • Added Multiprocessor boot requirements

  • Transitioned to reStructuredText and GitHub

  • Added firmware on shared media requirements

  • RTC is optional

  • Add constraints on sharing devices between firmware and OS

  • Add large note on implementation of runtime modification of non-volatile variables

18 Oct 2018


  • Add AArch32 details

  • Refactor Runtime Services text after face to face meeting at Linaro Connect YVR18

12 Mar 2019


  • Update language around SetVariable() and what is available during runtime services

  • Editorial changes preparing for v1.0

31 Mar 2019


  • Remove unnecessary UEFI requirements appendix

  • Allow for ACPI vendor id in firmware path

5 Aug 2020


  • Update to UEFI 2.8 Errata A

  • Specify UUID for passing DTB

  • Typo and editorial fixes

  • Document the release process

23 Apr 2021


  • Reduce the number of UEFI required elements needed for compliance.

  • Add requirement for UpdateCapsule() runtime service.

  • Updated firmware shared storage requirements

  • Refined RTC requirements

  • Fixed ResetSystem() to correctly describe failure condition

6 Aug 2021


  • Update UEFI version to 2.9

  • Update Devicetree spec version to v0.3

  • Add RISC-V platform text

  • Temporarily drop ESRT requirement

  • Fix typos

7 Dec 2022


  • Restore ESRT requirement when capsule update is implemented

  • Update UEFI version to 2.10

  • Add an EFI Conformance Profile for EBBR v2.1.x

  • Drop requirement on now-ignored RISC-V boot-hartid and add RISCV_EFI_BOOT_PROTOCOL requirement

  • Update ACPI version to 6.4

  • Update PSCI version to issue D.b (v1.1)

  • Update BBR version to issue G (v2.0)

  • Add DTB requirements

  • Fix typos and spelling

  • Refresh links

5 Jun 2024


  • Require capsule update “on disk” and variables

  • Require the TCG2 protocol if system has a TPM

  • Define a file format for storing EFI variables

  • Provision conformance profile 2.2 guid

  • Recommend the firmware update protocol, PSCI >= 1.0, SMCCC >= 1.1

  • Make monotonic counter optional

  • Clarify that ConnectController must be implemented

  • Bump ACPI, PSCI and Devicetree references versions, refresh reference for RISC-V hypervisor extension, mention dt-schema

  • Links refresh and additions, typos and syntax fixes, cosmetic changes, formatting conventions, notes movements, chapters changes, glossary adjustments

1. About This Document

1.1. Introduction

This Embedded Base Boot Requirements (EBBR) specification defines an interface between platform firmware and an operating system that is suitable for embedded platforms. EBBR compliant platforms present a consistent interface that will boot an EBBR compliant operating system without any custom tailoring required. For example, an Arm A-class embedded platform will benefit from a standard interface that supports features such as secure boot and firmware update.

This specification defines the base firmware requirements for EBBR compliant platforms. The requirements in this specification are expected to be minimal yet complete, while leaving plenty of room for innovations and design details. This specification is intended to be OS-neutral.

It leverages the prevalent industry standard firmware specification of [UEFI].

Comments or change requests can be sent to boot-architecture@lists.linaro.org.

1.2. Guiding Principles

EBBR as a specification defines requirements on platforms and operating systems, but requirements alone don’t provide insight into why the specification is written the way it is, or what problems it is intended to solve. Using the assumption that better understanding of the thought process behind EBBR will result in better implementations, this section is a discussion of the goals and guiding principles that shaped EBBR.

This section should be considered commentary, and not a formal part of the specification.

EBBR was written as a response to the lack of boot sequence standardization in the embedded system ecosystem. As embedded systems are becoming more sophisticated and connected, it is becoming increasingly important for embedded systems to run standard OS distributions and software stacks, or to have consistent behaviour across a large deployment of heterogeneous platforms. However, the lack of consistency between platforms often requires per-platform customization to get an OS image to boot on multiple platforms.

A large part of this ecosystem is based on U-Boot and Linux. Vendors have heavy investments in both projects and are not interested in large scale changes to their firmware architecture. The challenge for EBBR is to define a set of boot standards that reduce the amount of custom engineering required and make it possible for OS distributions to support embedded platforms, while still preserving the firmware stack that product vendors are comfortable with. Or in simpler terms, EBBR is designed to reduce the embedded boot differences by implementing a widely accepted standard (UEFI) in existing firmware projects (U-Boot).

However, EBBR is a specification, not an implementation. The goal of EBBR is not to mandate U-Boot and Linux. Rather, it is to mandate interfaces that can be implemented by any firmware or OS project, while at the same time working with both Tianocore/EDK2 and U-Boot to ensure that the EBBR requirements are implemented by both projects. [1]

The following guiding principles are used while developing the EBBR specification.

  • Be agnostic about ACPI and Devicetree.

    EBBR explicitly does not require a specific system description language. Both Devicetree and ACPI are supported. The Linux kernel supports both equally well, and so EBBR doesn’t require one over the other. However, EBBR does require the system description to be supplied by the platform, not the OS. The platform must also conform to the relevant ACPI or DT specifications and adhere to platform compatibility rules. [2]

  • Focus on the UEFI interface, not a specific codebase

    EBBR does not require a specific firmware implementation. Any firmware project can implement these interfaces. Neither U-Boot nor Tianocore/EDK2 are required.

  • Design to be implementable and useful today

    The drafting process for EBBR worked closely with U-Boot and Tianocore developers to ensure that current upstream code will meet the requirements.

  • Design to be OS independent

    This document uses Linux as an example but other OS’s support EBBR compliant systems as well (e.g. FreeBSD, OpenBSD).

  • Support multiple architectures

    Any architecture can implement the EBBR requirements. Architecture specific requirements will be clearly marked as to which architecture(s) they apply to.

  • Design for common embedded hardware

    EBBR support will be implemented on existing developer hardware. Generally anything that has a near-upstream U-Boot implementation should be able to implement the EBBR requirements. EBBR was drafted with readily available hardware in mind, like the Raspberry Pi and BeagleBone families of boards, and it is applicable for low cost boards (<$10).

  • Plan to evolve over time

    The current release of EBBR is firmly targeted at existing platforms so that gaining EBBR compliance may require a firmware update, but will not require hardware changes for the majority of platforms.

    Future EBBR releases will tighten requirements to add features and improve compatibility, which may affect hardware design choices. However, EBBR will not retroactively revoke support from previously compliant platforms. Instead, new requirements will be clearly documented as being over and above what was required by a previous release. Existing platforms will be able to retain compliance with a previous requirement level. In turn, OS projects and end users can choose what level of EBBR compliance is required for their use case.

1.3. Scope

This document defines a subset of the boot and runtime services, protocols and configuration tables defined in the UEFI specification [UEFI] that is provided to an Operating System or hypervisor.

This specification defines the boot and runtime services for a physical system, including services that are required for virtualization. It does not define a standardized abstract virtual machine view for a Guest Operating System.

This specification is referenced by the Arm Base Boot Requirements Specification [ArmBBR] § 4.2. The UEFI requirements found in this document are similar but not identical to the requirements found in BBR. EBBR provides greater flexibility to support embedded designs which cannot easily meet the stricter BBR requirements.

By definition, all BBR compliant systems are also EBBR compliant, but the converse is not true.

This specification is referenced by RISC-V platform specification [RVPLTSPEC].

1.4. Conventions Used in this Document

The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in RFC 2119.

Features that will not be supported by a future version of this specification are indicated with a warning such as the following:


This feature is deprecated. A future version of this specification will disallow its use.

1.4.1. Typographic conventions

This document uses the following typographic conventions:


An italic typeface is used for identifiers such as UEFI tables, variables, protocols, memory types and functions names.


A monospace typeface is used for file paths and Devicetree nodes.

1.5. Cross References

This document cross-references sources that are listed in the References section by using the section sign §.


UEFI § 6.1 Block Translation Table (BTT) Background - Reference to the UEFI specification [UEFI] section 6.1

1.6. Terms and abbreviations

This document uses the following terms and abbreviations. Generic terms are listed at the beginning of this chapter. Architecture specific terms are listed in a section for each architecture.

EFI Loaded Image

An executable image to be run under the UEFI environment, and which uses boot time services.

Logical Unit (LU)

A logical unit (LU) is an externally addressable, independent entity within a device. In the context of storage, a single device may use logical units to provide multiple independent storage areas.


SoC

System on a Chip. An integrated circuit comprising many components of a computer.


SPI

Serial Peripheral Interface. A synchronous serial bus used for communication between integrated circuits.


UEFI

Unified Extensible Firmware Interface.

UEFI Boot Services

Functionality that is provided to UEFI Loaded Images during the UEFI boot process.

UEFI Runtime Services

Functionality that is provided to an Operating System after the ExitBootServices() call.

1.6.1. AARCH32


AArch32

Arm 32-bit architectures. AArch32 is a roll-up term referring to all 32-bit versions of the Arm architecture starting at ARMv4.

1.6.2. AARCH64


A64

The 64-bit Arm instruction set used in AArch64 state. All A64 instructions are 32 bits wide.

AArch64 state

The Arm 64-bit Execution state that uses 64-bit general purpose registers, and a 64-bit program counter (PC), Stack Pointer (SP), and exception link registers (ELR).


The AArch64 Execution state provides a single instruction set, A64.


EL0

The lowest Exception level on AArch64. The Exception level that is used to execute user applications, in Non-secure state.


EL1

Privileged Exception level on AArch64. The Exception level that is used to execute Operating Systems, in Non-secure state.


EL2

Hypervisor Exception level on AArch64. The Exception level that is used to execute hypervisor code. EL2 is always in Non-secure state.


EL3

Secure Monitor Exception level on AArch64. The Exception level that is used to execute Secure Monitor code, which handles the transitions between Non-secure and Secure states. EL3 is always in Secure state.

1.6.3. RISC-V


Hart

Hardware thread in RISC-V. This is the hardware execution context that contains all the state mandated by the ISA.


HSM

Hart State Management (HSM) is an SBI extension that enables the supervisor mode software to implement ordered booting.

HS Mode

Hypervisor-extended-supervisor mode which virtualizes the supervisor mode.

M Mode

Machine mode is the most secure and privileged mode in RISC-V.


RISC-V

An open standard Instruction Set Architecture (ISA) based on Reduced Instruction Set Computer (RISC) principles.


RV32

32-bit execution mode in RISC-V.


RV64

64-bit execution mode in RISC-V.

RISC-V Supervisor Binary Interface (SBI)

Supervisor Binary Interface. This is an interface between the SEE and supervisor mode in RISC-V.


SEE

Supervisor Execution Environment in RISC-V. This can be M mode or HS mode.

S Mode

Supervisor mode is the next privileged mode after M mode, in which virtual memory is enabled.

U Mode

User mode is the least privileged mode, where user-space applications are expected to run.

VS Mode

Virtualized supervisor mode, where the guest OS is expected to run when the hypervisor is enabled.


2. UEFI

This chapter discusses specific UEFI implementation details for EBBR compliant platforms.

2.1. UEFI Version

This document uses version 2.10 of the UEFI specification [UEFI].

2.2. UEFI Compliance

An EBBR compliant platform shall conform to a subset of the [UEFI] spec as listed in this section. Normally, UEFI compliance would require full compliance with all items listed in UEFI § 2.6 Requirements. However, the EBBR target market has a reduced set of requirements, and so some UEFI features are omitted as unnecessary.

2.2.1. Required Elements

This section replaces the list of required elements in UEFI § 2.6.1 Required Elements. All of the following UEFI elements are required for EBBR compliance.

Table 2.1 UEFI Required Elements




EFI_SYSTEM_TABLE

The system table is required to provide access to UEFI Boot Services, UEFI Runtime Services, consoles, and other firmware, vendor and platform information.


EFI_BOOT_SERVICES

All functions defined as boot services must exist. Methods for unsupported or unimplemented behaviour must return an appropriate error code.


EFI_RUNTIME_SERVICES

All functions defined as runtime services must exist. Methods for unsupported or unimplemented behaviour must return an appropriate error code. If any runtime service is unimplemented, it must be indicated via the EFI_RT_PROPERTIES_TABLE.


EFI_LOADED_IMAGE_PROTOCOL

Must be installed for each loaded image.


EFI_LOADED_IMAGE_DEVICE_PATH_PROTOCOL

Must be installed for each loaded image.


EFI_DEVICE_PATH_PROTOCOL

An EFI_DEVICE_PATH_PROTOCOL must be installed onto all device handles provided by the firmware.


EFI_DEVICE_PATH_UTILITIES_PROTOCOL

Interface for creating and manipulating UEFI device paths.

Table 2.2 Notable omissions from UEFI § 2.6.1 Required Elements




EFI_DECOMPRESS_PROTOCOL

Native EFI decompression is rarely used and therefore not required.

2.2.2. Required Platform Specific Elements

This section replaces the list of required elements in UEFI § 2.6.2 Platform-Specific Elements. All of the following UEFI elements are required for EBBR compliance.

Table 2.3 UEFI Platform-Specific Required Elements



Console devices

The platform must have at least one console device.


EFI_SIMPLE_TEXT_INPUT_PROTOCOL

Needed for console input.


EFI_SIMPLE_TEXT_INPUT_EX_PROTOCOL

Needed for console input.


EFI_SIMPLE_TEXT_OUTPUT_PROTOCOL

Needed for console output.


Needed for console output.


EFI_DEVICE_PATH_TO_TEXT_PROTOCOL

Required by EFI shell and for compliance testing.


EFI_DEVICE_PATH_FROM_TEXT_PROTOCOL

Required by EFI shell and for compliance testing.


EFI_UNICODE_COLLATION_PROTOCOL

Required by EFI shell and for compliance testing.


EFI_BLOCK_IO_PROTOCOL

Required for block device access.


EFI_SIMPLE_FILE_SYSTEM_PROTOCOL

Required if booting from block device is supported.


EFI_RNG_PROTOCOL

Required if the platform has a hardware entropy source.


EFI_SIMPLE_NETWORK_PROTOCOL

Required if the platform has a network device.


Required if the platform supports network booting. (UEFI § 24.7 HTTP Boot)


RISCV_EFI_BOOT_PROTOCOL

Required on RISC-V platforms. (UEFI § Handoff State and [RVUEFI])

The following table is a list of notable deviations from UEFI § 2.6.2 Platform-Specific Elements. Many of these deviations are because the EBBR use cases do not require interface specific UEFI protocols, and so they have been made optional.

Table 2.4 Notable Deviations from UEFI § 2.6.2 Platform-Specific Elements


Description of deviation


EFI_HII_PACKAGE_LIST_PROTOCOL

The LoadImage() boot service is not required to install an EFI_HII_PACKAGE_LIST_PROTOCOL for an image containing a custom PE/COFF resource with the type ‘HII’. HII resource images are not needed to run the UEFI shell or the SCT.


The ConnectController() boot service must be implemented but it is not required to support the EFI_PLATFORM_DRIVER_OVERRIDE_PROTOCOL, EFI_DRIVER_FAMILY_OVERRIDE_PROTOCOL, and EFI_BUS_SPECIFIC_DRIVER_OVERRIDE_PROTOCOL. These override protocols are only useful if drivers are loaded as EFI binaries by the firmware.


UEFI requires this for console devices, but it is rarely necessary in practice. Therefore this protocol is not required.


UEFI requires this for console devices, but it is rarely necessary in practice. Therefore this protocol is not required.

Graphical console

Platforms with a graphical device are not required to expose it as a graphical console.


Rarely used interface that isn’t required for EBBR use cases.


Booting via the Preboot Execution Environment (PXE) is insecure. Loading via PXE is typically executed before launching the first UEFI application.

Network protocols

A full implementation of the UEFI general purpose networking ABIs is not required; this includes EFI_NETWORK_INTERFACE_IDENTIFIER_PROTOCOL, EFI_MANAGED_NETWORK_PROTOCOL, EFI_*_SERVICE_BINDING_PROTOCOL, and the IPv4 and IPv6 protocols.

Byte stream device support (UART)

UEFI protocols not required.

PCI bus support

UEFI protocols not required.

USB bus support

UEFI protocols not required.

NVMe pass through support

UEFI protocols not required.

SCSI pass through support

UEFI protocols not required.


Not required.

Option ROM support

In many EBBR use cases there is no requirement to generically support any PCIe add in card at the firmware level. When PCIe devices are used, drivers for the device are often built into the firmware itself rather than loaded as option ROMs. For this reason EBBR implementations are not required to support option ROM loading.

2.2.3. Required Global Variables

EBBR compliant platforms are required to support the following Global Variables as found in UEFI § 3.3 Globally Defined Variables.

Table 2.5 Required UEFI Variables

Variable Name



Boot####

A boot load option. #### is a numerical hex value.


BootCurrent

The boot option that was selected for the current boot.


BootNext

The boot option that will be used for the next boot only.


BootOrder

An ordered list of boot options. Firmware will try BootNext and each Boot#### entry in the order given by BootOrder to find the first bootable image.


OsIndications

Method for OS to request features from firmware.


OsIndicationsSupported

Variable for firmware to indicate which features can be enabled.

Required Variables for capsule update “on disk”

When the firmware implements in-band firmware update with UpdateCapsule() it must support the following Variables to report the status of capsule “on disk” processing after restart as found in UEFI § 8.5.6 UEFI variable reporting on the Success or any Errors encountered in processing of capsules after restart. [1]

Table 2.6 UEFI Variables required for capsule update “on disk”

Variable Name



CapsuleNNNN

Variable for firmware to report capsule processing status after restart. NNNN is a numerical hex value.


CapsuleMax

Variable for platform to publish the maximum CapsuleNNNN supported.


CapsuleLast

Variable for platform to publish the last CapsuleNNNN created.
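As a worked example of the CapsuleNNNN naming rule, a minimal sketch in plain C (the helper name capsule_result_name is illustrative, not part of any specification) composing the result variable name for a given index:

```c
#include <stdio.h>

/* Compose the name of a capsule result variable. NNNN is a 4-digit,
 * zero-padded hex value, so index 10 yields "Capsule000A". */
static void capsule_result_name(unsigned index, char *buf, size_t len)
{
    snprintf(buf, len, "Capsule%04X", index);
}
```

Firmware publishes the highest supported index in CapsuleMax and the most recently created name in CapsuleLast, so an OS tool can enumerate the range without guessing.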

2.2.4. Block device partitioning

The system firmware must implement support for MBR, GPT and El Torito partitioning on block devices. System firmware may also implement other partitioning methods as needed by the platform, but OS support for other methods is outside the scope of this specification.
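For illustration, a hedged sketch in plain C of how a tool might recognize the two main required partitioning schemes from raw sectors; the signature values are from the UEFI specification, the helper names are illustrative:

```c
#include <stdint.h>
#include <string.h>

/* A legacy or protective MBR carries the 0x55 0xAA boot signature
 * at bytes 510-511 of logical block 0. */
static int has_mbr_signature(const uint8_t lba0[512])
{
    return lba0[510] == 0x55 && lba0[511] == 0xAA;
}

/* A GPT header begins with the 8-byte ASCII signature "EFI PART"
 * at the start of logical block 1. */
static int has_gpt_signature(const uint8_t lba1[512])
{
    return memcmp(lba1, "EFI PART", 8) == 0;
}
```

A GPT disk normally carries both: a protective MBR in LBA 0 and the GPT header in LBA 1.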

2.3. UEFI System Environment and Configuration

The resident UEFI boot-time environment shall use the highest non-secure privilege level available. The exact meaning of this is architecture dependent, as detailed below.

Resident UEFI firmware might target a specific privilege level. In contrast, UEFI Loaded Images, such as third-party drivers and boot applications, must not contain any built-in assumptions that they are to be loaded at a given privilege level during boot time since they can, for example, legitimately be loaded into either EL1 or EL2 on AArch64 and HS/VS/S mode on RISC-V.

2.3.1. AArch64 Exception Levels

On AArch64 UEFI shall execute as 64-bit code at either EL1 or EL2, as defined in UEFI § 2.3.6 AArch64 Platforms, depending on whether or not virtualization is available at OS load time.

UEFI Boot at EL2

Most systems are expected to boot UEFI at EL2, to allow for the installation of a hypervisor or a virtualization aware Operating System.

UEFI Boot at EL1

Booting of UEFI at EL1 is most likely employed within a hypervisor hosted Guest Operating System environment, to allow the subsequent booting of a UEFI-compliant Operating System. In this instance, the UEFI boot-time environment can be provided, as a virtualized service, by the hypervisor and not as part of the host firmware.

2.3.2. RISC-V Privilege Levels

RISC-V doesn’t define dedicated privilege levels for hypervisor enabled platforms. The supervisor mode becomes HS mode where a hypervisor or a hosting-capable operating system runs while the guest OS runs in virtual S mode (VS mode). Resident UEFI firmware can be executed in M mode or S/HS mode during POST. However, the UEFI images must be loaded in HS or VS mode if virtualization is available at OS load time.

UEFI Boot at S mode

Most systems are expected to boot UEFI at S mode when the hypervisor extension is not enabled [RVPRIVSPEC].

UEFI Boot at HS mode

Any platform supporting the hypervisor extension will most likely boot UEFI at HS mode, to allow for the installation of a hypervisor or a virtualization aware Operating System.

UEFI Boot at VS mode

Booting of UEFI at VS mode is employed within a hypervisor hosted Guest Operating System environment, to allow the subsequent booting of a UEFI-compliant Operating System. In this instance, the UEFI boot-time environment can be provided, as a virtualized service, by the hypervisor and not as part of the host firmware.

2.4. UEFI Configuration Tables

A UEFI system that complies with this specification may provide additional tables via the EFI Configuration Table.

Compliant systems are required to provide one, but not both, of the following tables:

  • an Advanced Configuration and Power Interface [ACPI] table, or

  • a Devicetree [DTSPEC] system description

EBBR systems must not provide both ACPI and Devicetree tables at the same time. Systems that support both interfaces must provide a configuration mechanism to select either ACPI or Devicetree, and must ensure only the selected interface is provided to the OS loader.
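An OS loader discovers which description was provided by scanning the EFI Configuration Table for the corresponding vendor GUID. A minimal sketch in plain C, using stand-in types (in real firmware code these come from the UEFI headers) and the well-known DTB table GUID b1b621d5-f19c-41a5-830b-d9152c69aae0:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Stand-ins for EFI_GUID and an EFI Configuration Table entry. */
typedef struct { uint32_t a; uint16_t b, c; uint8_t d[8]; } efi_guid;
typedef struct { efi_guid vendor_guid; void *vendor_table; } efi_cfg_entry;

/* GUID under which a DTB is published in the EFI Configuration Table. */
static const efi_guid dtb_guid = { 0xb1b621d5, 0xf19c, 0x41a5,
    { 0x83, 0x0b, 0xd9, 0x15, 0x2c, 0x69, 0xaa, 0xe0 } };

/* Scan the configuration table, as an OS loader would, for the DTB.
 * Returns NULL when no Devicetree was provided (e.g. an ACPI system). */
static void *find_dtb(const efi_cfg_entry *tbl, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (memcmp(&tbl[i].vendor_guid, &dtb_guid, sizeof dtb_guid) == 0)
            return tbl[i].vendor_table;
    return NULL;
}
```

An ACPI system would instead publish its RSDP under the ACPI table GUID, and per the rule above at most one of the two descriptions may be present.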

2.4.1. EFI Conformance Profile Table

The following GUIDs in the EFI Conformance Profile Table, as defined in UEFI § 4.6.5 EFI_CONFORMANCE_PROFILE_TABLE, are used to indicate compliance to specific versions of the EBBR specification.

If the platform advertises an EBBR profile in the EFI Conformance Profile Table, then it must be compliant with the corresponding version(s) of this specification [2].

  • Version 2.1.x:

{ 0xcce33c35, 0x74ac, 0x4087, \
{ 0xbc, 0xe7, 0x8b, 0x29, 0xb0, 0x2e, 0xeb, 0x27 }}
  • Version 2.2.x:

{ 0x9073eed4, 0xe50d, 0x11ee, \
{ 0xb8, 0xb0, 0x8b, 0x68, 0xda, 0x62, 0xfc, 0x80 }}
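The GUIDs above are written in the UEFI mixed-endian binary form. A minimal sketch in plain C (stand-in efi_guid type; the real type comes from the UEFI headers) of rendering one into canonical text:

```c
#include <stdint.h>
#include <stdio.h>

/* Stand-in for the UEFI EFI_GUID layout. */
typedef struct { uint32_t a; uint16_t b, c; uint8_t d[8]; } efi_guid;

/* Render an EFI_GUID in canonical text form. The first three fields
 * are native-endian integers; the final eight bytes print in order. */
static void guid_to_str(const efi_guid *g, char out[37])
{
    snprintf(out, 37, "%08x-%04x-%04x-%02x%02x-%02x%02x%02x%02x%02x%02x",
             g->a, g->b, g->c, g->d[0], g->d[1], g->d[2], g->d[3],
             g->d[4], g->d[5], g->d[6], g->d[7]);
}
```

Applied to the version 2.1.x profile above, this produces cce33c35-74ac-4087-bce7-8b29b02eeb27.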

2.4.2. Devicetree

If firmware provides a Devicetree system description then it must be provided in Flattened Devicetree Blob (DTB) format version 17 or higher as described in [DTSPEC] § 5. The DTB Nodes and Properties must be compliant with the requirements listed in [DTSPEC] § 3 & 4 and with the requirements listed in the following table, which take precedence. [3]

Table 2.7 DTB Nodes and Properties requirements




/chosen

This Node is required. ([DTSPEC] § 3.6)


stdout-path

This Property is required. It is necessary for console output. ([DTSPEC] § 3.6)

The DTB must be contained in memory of type EfiACPIReclaimMemory. [4]
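The "version 17 or higher" requirement can be checked directly against the fdt_header at the start of the blob; a hedged sketch in plain C (field offsets follow the fdt_header layout in [DTSPEC], helper names are illustrative):

```c
#include <stdint.h>

/* Read a 32-bit big-endian value, as used throughout a flattened DTB. */
static uint32_t be32(const uint8_t *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8) | p[3];
}

/* Minimal acceptance check for the requirement above: the blob must
 * start with the FDT magic (0xd00dfeed) and report format version >= 17.
 * The version field sits at byte offset 20 of the fdt_header. */
static int dtb_acceptable(const uint8_t *blob)
{
    return be32(blob) == 0xd00dfeedu && be32(blob + 20) >= 17;
}
```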

2.5. UEFI Protocols

Requirements for protocols defined in the UEFI specification are described in sections Required Elements and Required Platform Specific Elements.

The following sections give additional requirements, for protocols not defined in the UEFI specification.

2.5.1. Trusted Platform Module (TPM)

Not all embedded systems include a TPM but if a TPM is present, then firmware shall implement the EFI_TCG2_PROTOCOL as defined in [TCG2].

2.6. UEFI Boot Services

2.6.1. Memory Map

The UEFI environment must provide a system memory map, which must include all appropriate devices and memories that are required for booting and system configuration.

All RAM defined by the UEFI memory map must be identity-mapped, which means that virtual addresses must equal physical addresses.

The default RAM allocated attribute must be EFI_MEMORY_WB.
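A hedged sketch in plain C of checking the identity-mapping rule over a memory map, using a stand-in for UEFI's EFI_MEMORY_DESCRIPTOR (only the fields used here; real code uses the UEFI headers and the descriptor size returned by GetMemoryMap()):

```c
#include <stdint.h>
#include <stddef.h>

/* Stand-in for EFI_MEMORY_DESCRIPTOR. */
typedef struct {
    uint32_t type;              /* e.g. 7 = EfiConventionalMemory */
    uint64_t physical_start;
    uint64_t virtual_start;
    uint64_t number_of_pages;
    uint64_t attribute;
} mem_desc;

#define EFI_MEMORY_WB 0x0000000000000008ULL /* write-back cacheable */

/* The identity-mapping rule: for every descriptor in the map,
 * the virtual address must equal the physical address. */
static int map_is_identity(const mem_desc *map, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (map[i].physical_start != map[i].virtual_start)
            return 0;
    return 1;
}
```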

2.6.2. Miscellaneous Boot Services

The platform’s monotonic counter is optional. If the platform does not implement the monotonic counter, the GetNextMonotonicCount() function shall return EFI_DEVICE_ERROR. [5]

2.6.3. UEFI Secure Boot (Optional)

UEFI Secure Boot is optional for this specification.

If Secure Boot is implemented, it must conform to the UEFI specification for Secure Boot. There are no additional requirements for Secure Boot.

2.7. UEFI Runtime Services

UEFI runtime services exist after the call to ExitBootServices() and are designed to provide a limited set of persistent services to the platform Operating System or hypervisor. Functions contained in EFI_RUNTIME_SERVICES are expected to be available during both boot services and runtime services. However, it isn’t always practical for all EFI_RUNTIME_SERVICES functions to be callable during runtime services due to hardware limitations. If any EFI_RUNTIME_SERVICES functions are only available during boot services then firmware shall provide the EFI_RT_PROPERTIES_TABLE to indicate which functions are available during runtime services. Functions that are not available during runtime services shall return EFI_UNSUPPORTED.

Table 2.8 details which EFI_RUNTIME_SERVICES are required to be implemented during boot services and runtime services.

Table 2.8 EFI_RUNTIME_SERVICES Implementation Requirements

Function                      Before ExitBootServices()     After ExitBootServices()

GetTime()                     Required if RTC present       Optional
SetTime()                     Required if RTC present       Optional
GetWakeupTime()               Required if wakeup supported  Optional
SetWakeupTime()               Required if wakeup supported  Optional
GetVariable()                 Required                      Optional
GetNextVariableName()         Required                      Optional
SetVariable()                 Required                      Optional
QueryVariableInfo()           Required                      Optional
GetNextHighMonotonicCount()   Optional                      Optional
ResetSystem()                 Required                      Optional
UpdateCapsule()               Required for in-band update   Optional
QueryCapsuleCapabilities()    Optional                      Optional
SetVirtualAddressMap()        Optional                      Optional
ConvertPointer()              Optional                      Optional

2.7.1. Runtime Device Mappings

Firmware shall not create runtime mappings, or perform any runtime IO, that will conflict with device access by the OS. Normally this means a device may be controlled by firmware, or controlled by the OS, but not both. For example, if firmware attempts to access an eMMC device at runtime, it will conflict with transactions being performed by the OS.

Devices that are provided to the OS (i.e., via PCIe discovery or ACPI/DT description) shall not be accessed by firmware at runtime. Similarly, devices retained by firmware (i.e., not discoverable by the OS) shall not be accessed by the OS.

Only devices that explicitly support concurrent access by both firmware and an OS may be mapped at runtime by both firmware and the OS.

Real-time Clock (RTC)

Not all embedded systems include an RTC, and even if one is present, it may not be possible to access the RTC from runtime services. e.g., The RTC may be on a shared I2C bus which runtime services cannot access because it will conflict with the OS.

If an RTC is present, then GetTime() and SetTime() must be supported before ExitBootServices() is called.

However, if firmware does not support access to the RTC after ExitBootServices(), then GetTime() and SetTime() shall return EFI_UNSUPPORTED and the OS must use a device driver to control the RTC.

2.7.2. UEFI Reset and Shutdown

ResetSystem() is required to be implemented in boot services, but it is optional for runtime services. During runtime services, the operating system should first attempt to use ResetSystem() to reset the system.

If firmware doesn’t support ResetSystem() during runtime services, then the call will immediately return, and the OS should fall back to an architecture or platform specific reset mechanism.

On AArch64 platforms implementing [PSCI], if ResetSystem() is not implemented then the Operating System should fall back to making a PSCI call to reset or shutdown the system.

2.7.3. Runtime Variable Access

There are many platforms where it is difficult to implement SetVariable() for non-volatile variables during runtime services because the firmware cannot access storage after ExitBootServices() is called.

For example, if firmware accesses an eMMC device directly at runtime, it will collide with transactions initiated by the OS. Neither U-Boot nor Tianocore has a generic solution for accessing or updating variables stored on shared media. [6]

If a platform does not implement modifying non-volatile variables with SetVariable() after ExitBootServices(), then firmware shall return EFI_UNSUPPORTED for any call to SetVariable(), and must advertise that SetVariable() isn’t available during runtime services via the RuntimeServicesSupported value in the EFI_RT_PROPERTIES_TABLE as defined in UEFI § 4.6.2 EFI_RT_PROPERTIES_TABLE. EFI applications can read RuntimeServicesSupported to determine if calls to SetVariable() need to be performed before calling ExitBootServices().
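The check an EFI application performs here is a single bit test on RuntimeServicesSupported; a minimal sketch in plain C using the EFI_RT_SUPPORTED_SET_VARIABLE bit value defined in UEFI § 4.6.2 (the helper name is illustrative):

```c
#include <stdint.h>

/* Bit in EFI_RT_PROPERTIES_TABLE.RuntimeServicesSupported advertising
 * SetVariable() availability after ExitBootServices() (UEFI § 4.6.2). */
#define EFI_RT_SUPPORTED_SET_VARIABLE 0x0040u

/* Decide, as an EFI application would, whether variable writes must be
 * completed before calling ExitBootServices(). */
static int must_set_variables_before_ebs(uint32_t runtime_services_supported)
{
    return (runtime_services_supported & EFI_RT_SUPPORTED_SET_VARIABLE) == 0;
}
```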

Even when SetVariable() is not supported during runtime services, firmware should cache variable names and values in EfiRuntimeServicesData memory so that GetVariable() and GetNextVariableName() can behave as specified.

2.7.4. Firmware Update

Being able to update firmware to address security issues is a key feature of secure platforms. EBBR platforms are required to implement either an in-band or an out-of-band firmware update mechanism.

In-band firmware update

If firmware update is performed in-band (firmware on the application processor updates itself), then the firmware shall implement the UpdateCapsule() runtime service and accept updates in the “Firmware Management Protocol Data Capsule Structure” format as described in UEFI § 23.3 Delivering Capsules Containing Updates to Firmware Management Protocol. [7] UpdateCapsule() is only required before ExitBootServices() is called.

Firmware is also required to provide an EFI System Resource Table (ESRT) as described in UEFI § 23.4 EFI System Resource Table. Every firmware image that can be updated in-band must be described in the ESRT.

Firmware must support the delivery of capsules via file on mass storage device (“on disk”) as described in UEFI § 8.5.5 Delivery of Capsules via file on Mass Storage Device. [8]


It is recommended that firmware implementing the UpdateCapsule() runtime service and an ESRT also implement the EFI_FIRMWARE_MANAGEMENT_PROTOCOL described in UEFI § 23.1 Firmware Management Protocol. [9]

Out-of-band firmware update

If firmware update is performed out-of-band (e.g., by an independent Baseboard Management Controller (BMC), or firmware is provided by a hypervisor), then the platform is not required to implement the UpdateCapsule() runtime service and it is not required to provide an ESRT.

2.7.5. Miscellaneous Runtime Services

If the platform does not implement the monotonic counter, it shall not support the GetNextHighMonotonicCount() runtime service. [10]

3. Privileged or Secure Firmware

3.1. AArch32 Multiprocessor Startup Protocol

There is no standard multiprocessor startup or CPU power management mechanism for ARMv7 and earlier platforms. The OS is expected to use platform-specific drivers for CPU power management. Firmware must advertise the CPU power management mechanism in the Devicetree system description or the ACPI tables so that the OS can enable the correct driver. At ExitBootServices() time, all secondary CPUs must be parked or powered off.

3.2. AArch64 Multiprocessor Startup Protocol

On AArch64 platforms, firmware resident in TrustZone EL3 must implement and conform to the Power State Coordination Interface specification [PSCI] and to the SMC Calling Convention [SMCCC].

Platforms without EL3 must implement one of:

  • PSCI and SMCCC at EL2 (leaving only EL1 available to an operating system)

  • Linux AArch64 spin tables [LINUXA64BOOT] (Devicetree only)


The spin table protocol is strongly discouraged; PSCI and SMCCC should be implemented in all new designs, and future versions of this specification will allow only PSCI and SMCCC.

It is recommended that firmware implementing PSCI supports version 1.0 or later [1] and that firmware implementing SMCCC supports version 1.1 or later [2].

3.3. RISC-V Multiprocessor Startup Protocol

The resident firmware in M mode or a hypervisor running in HS mode must implement and conform to at least SBI [RVSBISPEC] v0.2 with the HART State Management (HSM) extension, for both RV32 and RV64.

4. Firmware Storage

In general, EBBR compliant platforms should use dedicated storage for boot firmware images and data, independent of the storage used for OS partitions and the EFI System Partition (ESP). This could be a physically separate device (e.g. SPI flash), or a dedicated logical unit (LU) within a device (e.g. eMMC boot partition, [1] or UFS boot LU [2]).

However, many embedded systems have size, cost, or implementation constraints that make separate firmware storage infeasible. On such systems, firmware and the OS reside on the same storage device. Care must be taken to ensure that firmware kept in normal storage does not conflict with normal usage of the media by the OS.

  • Firmware must be stored on the media in a way that does not conflict with normal partitioning and usage by the operating system.

  • Normal operation of the OS must not interfere with firmware files.

  • Firmware needs a method to modify variable storage at runtime while the OS controls access to the device. [3]

4.1. Partitioning of Shared Storage

The shared storage device must use the GUID Partition Table (GPT) disk layout as defined in UEFI § 5.3 GUID Partition Table (GPT) Disk Layout, unless the platform boot sequence is fundamentally incompatible with the GPT disk layout, in which case a legacy Master Boot Record (MBR) must be used. [4]


MBR partitioning is deprecated and only included for legacy support. All new platforms are expected to use GPT partitioning. GPT partitioning supports a much larger number of partitions, and has built-in resiliency.

A future version of this specification will disallow the use of MBR partitioning.

Firmware images and data in shared storage should be contained in partitions described by the GPT or MBR. The platform should locate firmware by searching the partition table for the partition(s) containing firmware.

However, some SoCs load firmware from a fixed offset into the storage media. In this case, to protect against partitioning tools overwriting firmware, the partition table must be formed in a way to protect the firmware image(s) as described in sections GPT partitioning and MBR partitioning.

Automatic partitioning tools (e.g. an OS installer) must not delete the protective information in the partition table, or delete, move, or modify protective partition entries. Manual partitioning tools should provide warnings when modifying protective partitions.


Fixed offsets to firmware data are supported only for legacy reasons. All new platforms are expected to use partitions to locate firmware files.

A future version of this specification will disallow the use of fixed offsets.

4.1.1. GPT partitioning

The partition table must strictly conform to the UEFI specification and include a protective MBR authored exactly as described in UEFI § 5.3 GUID Partition Table (GPT) Disk Layout (hybrid partitioning schemes are not permitted).

Fixed-location firmware images must be protected by creating protective partition entries, or by placing GPT data structures away from the LBAs occupied by firmware.

Protective partitions are entries in the partition table that cover the LBA region occupied by firmware and have the ‘Required Partition’ attribute set. A protective partition must use a PartitionTypeGUID that identifies it as a firmware protective partition. (e.g., don’t reuse a GUID used by non-protective partitions). There are no requirements on the contents or layout of the firmware protective partition.

Placing GPT data structures away from firmware images can be accomplished by adjusting the GUID Partition Entry array location (adjusting the values of PartitionEntryLBA, NumberOfPartitionEntries, and SizeOfPartitionEntry), or by restricting the usable LBAs (choosing FirstUsableLBA and LastUsableLBA so that they do not overlap the fixed firmware location). See UEFI § 5.3.2 GPT Header.

Given the choice, platforms should use protective partitions over adjusting the placement of GPT data structures because protective partitions provide explicit information about the protected region.
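As a non-normative illustration of the two placement options above, the following C sketch checks whether a GPT layout keeps both the usable LBA window and the partition entry array clear of a fixed firmware region. The struct mirrors a subset of the GPT header fields from UEFI § 5.3.2; the type and helper names are ours, not the UEFI specification's, and 512-byte logical blocks are assumed:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Subset of the GPT header fields relevant here (see UEFI § 5.3.2). */
typedef struct {
    uint64_t FirstUsableLBA;
    uint64_t LastUsableLBA;
    uint64_t PartitionEntryLBA;
    uint32_t NumberOfPartitionEntries;
    uint32_t SizeOfPartitionEntry;
} GptHeaderFields;

/* True when two inclusive LBA ranges share at least one block. */
static bool ranges_overlap(uint64_t a_first, uint64_t a_last,
                           uint64_t b_first, uint64_t b_last)
{
    return a_first <= b_last && b_first <= a_last;
}

/* True when neither the usable LBA window nor the partition entry array
 * touches the firmware's fixed LBA range (512-byte blocks assumed). */
static bool gpt_avoids_firmware(const GptHeaderFields *h,
                                uint64_t fw_first, uint64_t fw_last)
{
    uint64_t array_bytes =
        (uint64_t)h->NumberOfPartitionEntries * h->SizeOfPartitionEntry;
    uint64_t array_last =
        h->PartitionEntryLBA + (array_bytes + 511) / 512 - 1;

    return !ranges_overlap(h->FirstUsableLBA, h->LastUsableLBA,
                           fw_first, fw_last) &&
           !ranges_overlap(h->PartitionEntryLBA, array_last,
                           fw_first, fw_last);
}
```

A partitioning tool could apply such a check before writing a GPT to media known to contain fixed-offset firmware.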

4.1.2. MBR partitioning

If firmware is at a fixed location entirely within the first 1MiB of storage (<= LBA 2047), then no protective partitions are required. If firmware resides in a fixed location outside the first 1MiB, then a protective partition must be used to cover the firmware LBAs. Protective partitions should have a partition type of 0xF8 unless an immutable feature of the platform makes this impossible.

OS partitioning tools must not create partitions in the first 1MiB of the storage device, and must not remove protective partitions.
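As a non-normative sketch of the 1MiB rule above (assuming 512-byte logical blocks, so the first 1MiB spans LBA 0 through LBA 2047; the helper name is hypothetical):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Last LBA of the first 1MiB, assuming 512-byte logical blocks. */
#define FIRST_MIB_LAST_LBA 2047u

/* A fixed-location firmware image needs a protective MBR partition
 * unless it lies entirely within the first 1MiB of the device. */
static bool needs_protective_partition(uint64_t fw_last_lba)
{
    return fw_last_lba > FIRST_MIB_LAST_LBA;
}
```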

4.2. Firmware Partition Filesystem

Where possible, firmware images and data should be stored in a filesystem. Firmware can be stored either in a dedicated firmware partition, or in certain circumstances in the EFI System Partition (ESP). Using a filesystem makes it simpler to manage multiple firmware files and makes it possible for a single disk image to contain firmware for multiple platforms.

When firmware is stored in the ESP, the ESP should contain a directory named /FIRMWARE in the root directory, and all firmware images and data should be stored in platform vendor subdirectories under /FIRMWARE.

Dedicated firmware partitions should be formatted with a FAT filesystem as defined in UEFI § 13.3 File System Format. Dedicated firmware partitions should use the same /FIRMWARE directory hierarchy. OS tools shall ignore dedicated firmware partitions, and shall not attempt to use a dedicated firmware partition as an ESP.

4.2.1. The firmware directory hierarchy

Vendors may choose their own subdirectory name under /FIRMWARE, but shall choose names that do not conflict with other vendors. Normally the vendor name will be the name of the SoC vendor, because the firmware directory name will be hard coded in the SoC’s boot ROM. Vendors are recommended to use their Devicetree vendor prefix or ACPI vendor ID as their vendor subdirectory name.

Vendors are free to decide how to structure subdirectories under their own vendor directory, but they shall use a naming convention that allows multiple SoCs to be supported in the same filesystem.

For example, a vendor named Acme with two SoCs, AM100 & AM300, could choose to use the SoC part number as a subdirectory in the firmware path:

  /FIRMWARE/ACME/AM100/
  /FIRMWARE/ACME/AM300/

It is also recommended for dedicated firmware partitions to use the /FIRMWARE file hierarchy.

The following is a sample directory structure for firmware files:

  /<Vendor 1 Directory>
     /<SoC A Directory>
        <Firmware image>
        <Firmware data>
     /<SoC B Directory>
        <Firmware image>
        <Firmware data>
  /<Vendor 2 Directory>
     <Common Firmware image>
     <Common Firmware data>
  /<Vendor 3 Directory>
     /<SoC E Directory>
        <Firmware image>

Operating systems and installers should not manipulate any files in the /FIRMWARE hierarchy during normal operation.

4.3. Shared Storage Requirements

The sections below discuss the requirements when using both fixed and removable storage. Note, however, that the recommended firmware behaviour is identical regardless of storage type. In both cases, the recommended boot sequence is to first search for firmware in a dedicated firmware partition, and second to search for firmware in the ESP. The only difference between fixed and removable storage is the recommended factory settings for the platform.

4.3.1. Fixed Shared Storage

Fixed storage is storage that is permanently attached to the platform and cannot be moved between systems. eMMC and Universal Flash Storage (UFS) devices are often used as shared fixed storage for both firmware and the OS.

Where possible, it is preferred for the system to boot from a dedicated boot region when the media provides one of sufficient size (e.g., an eMMC boot partition). Otherwise, the platform storage should be pre-formatted in the factory with a partition table and a dedicated firmware partition, with the firmware binaries installed.

Operating systems must not use the dedicated firmware partition for installing EFI applications including, but not limited to, the OS loader and OS specific files. Instead, a normal ESP should be created. OS partitioning tools must take care not to modify or delete dedicated firmware partitions.

4.3.2. Removable Shared Storage

Removable storage is any media that can be physically removed from the system and moved to another machine as part of normal operation (e.g., SD cards, USB thumb drives, and CDs).

There are two primary scenarios for storing firmware on removable media.

  1. Platforms that only have removable media (e.g., The Raspberry Pi has an SD card slot, but no fixed storage).

  2. Recovery when on-board firmware has been corrupted. If firmware on fixed media has been corrupted, some platforms support loading firmware from removable media which can then be used to recover the platform.

In both cases, it is desirable to start with a stock OS boot image, copy it to the media (SD or USB), and then add the necessary firmware files to make the platform bootable. Typically, OS boot images won’t include a dedicated firmware partition, and it is inconvenient to repartition the media to add one. It is simpler and easier for the user if they are able to copy the required firmware files into the /FIRMWARE directory tree on the ESP using the basic file manager tools provided by all desktop operating systems.

On removable media, firmware should be stored in the ESP under the /FIRMWARE directory structure as described in Firmware Partition Filesystem. Platform vendors should support their platform by providing a single .zip file that places all the required firmware files in the correct locations when extracted in the ESP /FIRMWARE directory. For simplicity’s sake, it is expected that the same .zip file can also be used to restore the firmware files in a dedicated firmware partition.

5. File Format For Storing EFI Variables

Some UEFI enabled devices can only store EFI variables as a file on a block device. This implies that at runtime the operating system must manage changes to the EFI variables by updating the file.

This chapter defines a file-format for EFI variables that both the firmware and the operating system can rely on.

All integer fields are stored in little-endian byte order.

5.1. File header

The following byte sequence is used to identify the file format:

#define EFI_VAR_FILE_MAGIC {0x55, 0x62, 0x45, 0x66, 0x69, 0x56, 0x61}

The current revision of the file format is given by:

#define EFI_VAR_FILE_FORMAT_REVISION_1 1

The file header has the following structure:

typedef struct {
    UINT64                  Reserved;
    UINT8                   Magic[7];
    UINT8                   Revision;
    UINT32                  Length;
    UINT32                  Crc32;
    EFI_VARIABLE_ENTRY      Variables[];
} EFI_VARIABLE_FILE;

Reserved

This field is currently unused. Its value shall be set to 0.

Magic

This field identifies the file as containing EFI variables. Its value is EFI_VAR_FILE_MAGIC.

Revision

This field contains the revision of the file format. As of this revision it takes the value EFI_VAR_FILE_FORMAT_REVISION_1.

Length

This field contains the length in bytes of the EFI_VARIABLE_FILE structure together with all entries in Variables. The actual file may be longer.

Crc32

This field contains the CRC32 of all variable entries. The first byte to hash is given by the offset of the Variables field. The number of bytes to hash is given by Length minus the size of EFI_VARIABLE_FILE.

Variables

The list of variable entries starts at this field. Each variable entry is padded with NUL bytes to a multiple of 8 bytes. The list of variables is not sorted.
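As a non-normative illustration of the Crc32 and Length definitions above, the following C sketch computes the checksum over the variable entries. It assumes the CRC32 here is the standard IEEE CRC32 (reflected, polynomial 0xEDB88320, as used elsewhere in UEFI, e.g. for GPT headers); the helper names are ours:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Standard IEEE CRC32 (reflected, polynomial 0xEDB88320), bitwise form. */
static uint32_t crc32_ieee(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)-(int32_t)(crc & 1u));
    }
    return ~crc;
}

/* file points at the start of the variable file; header_size stands in
 * for sizeof(EFI_VARIABLE_FILE), i.e. the byte offset of Variables.
 * Hashes (length - header_size) bytes starting at Variables, matching
 * the Crc32 definition above. */
static uint32_t var_file_crc32(const uint8_t *file, uint32_t length,
                               uint32_t header_size)
{
    return crc32_ieee(file + header_size, length - header_size);
}
```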

5.2. Variable entries

Each variable is stored as a structure:

typedef struct {
    UINT32          DataSize;
    UINT32          Attributes;
    UINT64          TimeStamp;
    EFI_GUID        VendorGuid;
    UINT8           Data[];
} EFI_VARIABLE_ENTRY;

DataSize

This field contains the size of the Data field in bytes, excluding the NUL terminated variable name.

Attributes

This field is a bitmap with the variable attributes as defined in UEFI § 8.2.1 GetVariable().

TimeStamp

For time-based authenticated variables this field contains the timestamp associated with the authentication descriptor, encoded as seconds since 1970-01-01T00:00:00Z. For all other variables this field shall be set to 0.

VendorGuid

This field contains the unique identifier of the vendor.

Data

This field contains a NUL terminated UCS-2 string with the name of the vendor’s variable, followed by DataSize bytes of the variable’s content.
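As a non-normative illustration of the entry layout and padding rule, the following C sketch computes the on-disk size of a single variable entry. The fixed portion of EFI_VARIABLE_ENTRY is 32 bytes (UINT32 + UINT32 + UINT64 + 16-byte EFI_GUID); the helper name is ours, not the spec's:

```c
#include <assert.h>
#include <stdint.h>

/* Fixed portion of EFI_VARIABLE_ENTRY: DataSize (4) + Attributes (4) +
 * TimeStamp (8) + VendorGuid (16) bytes. */
#define EFI_VARIABLE_ENTRY_FIXED_SIZE 32u

/* On-disk size of one entry: fixed fields, then the UCS-2 name including
 * its NUL terminator, then DataSize bytes of content, NUL-padded to a
 * multiple of 8 bytes. name_chars excludes the NUL terminator. */
static uint64_t variable_entry_size(uint32_t name_chars, uint32_t data_size)
{
    uint64_t unpadded = EFI_VARIABLE_ENTRY_FIXED_SIZE
                      + 2ull * (name_chars + 1)   /* UCS-2 name + NUL */
                      + data_size;
    return (unpadded + 7) & ~7ull;                /* pad to 8 bytes */
}
```

For example, a variable named Boot0000 (8 UCS-2 characters) with 6 bytes of content occupies 32 + 18 + 6 = 56 bytes, which is already a multiple of 8.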

5.3. Limitations

The security of file-based variable storage is limited by the security of the storage or transport medium. Without further measures, file storage is inadequate for the UEFI security database and other authenticated variables.

The current version of the file format can convey the timestamp of time-based authenticated variables. It does not define the storage of the signing certificates of nonce-based authenticated variables. [1]

6. Bibliography