Merge "mmc: block: Fix request completion in the CQE timeout path"
diff --git a/Documentation/block/inline-encryption.rst b/Documentation/block/inline-encryption.rst
new file mode 100644
index 0000000..330106b
--- /dev/null
+++ b/Documentation/block/inline-encryption.rst
@@ -0,0 +1,183 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=================
+Inline Encryption
+=================
+
+Objective
+=========
+
+We want to support inline encryption (IE) in the kernel.
+To allow for testing, we also want a crypto API fallback when actual
+IE hardware is absent. We also want IE to work with layered devices
+like dm and loopback (i.e. we want to be able to use the IE hardware
+of the underlying devices if present, or else fall back to crypto API
+en/decryption).
+
+
+Constraints and notes
+=====================
+
+- IE hardware has a limited number of "keyslots" that can be programmed
+ with an encryption context (key, algorithm, data unit size, etc.) at any time.
+ One can specify a keyslot in a data request made to the device, and the
+ device will en/decrypt the data using the encryption context programmed into
+ that specified keyslot. When possible, we want to make multiple requests with
+ the same encryption context share the same keyslot.
+
+- We need a way for filesystems to specify an encryption context to use for
+ en/decrypting a struct bio, and a device driver (like UFS) needs to be able
+ to use that encryption context when it processes the bio.
+
+- We need a way for device drivers to expose their capabilities in a unified
+ way to the upper layers.
+
+
+Design
+======
+
+We add a struct bio_crypt_ctx to struct bio that can represent an
+encryption context, because we need to be able to pass this encryption
+context from the FS layer to the device driver to act upon.
+
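+For illustration, a rough sketch of what such a context might carry is
+shown below. The exact layout is an implementation detail; apart from
+``bc_ksm``, which is referred to later in this document, the field and
+type names are purely illustrative::
+
+    /* Illustrative sketch only -- not the exact in-kernel definition. */
+    struct bio_crypt_ctx {
+            const u8 *bc_key;               /* raw key bytes */
+            enum blk_crypto_mode bc_mode;   /* algorithm and data unit size */
+            unsigned int bc_keyslot;        /* slot currently holding the key */
+            struct keyslot_manager *bc_ksm; /* KSM that keyslot belongs to */
+    };
+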
+While IE hardware works on the notion of keyslots, the FS layer has no
+knowledge of keyslots - it simply wants to specify an encryption context to
+use while en/decrypting a bio.
+
+We introduce a keyslot manager (KSM) that handles the translation from
+encryption contexts specified by the FS to keyslots on the IE hardware.
+This KSM also serves as the way IE hardware can expose its capabilities to
+upper layers. The generic mode of operation is: each device driver that wants
+to support IE will construct a KSM and set it up in its struct request_queue.
+Upper layers that want to use IE on this device can then use this KSM in
+the device's struct request_queue to translate an encryption context into
+a keyslot. The presence of the KSM in the request queue indicates that the
+device supports IE.
+
+On the device driver end of the interface, the device driver needs to tell the
+KSM how to actually manipulate the IE hardware in the device to do things like
+programming a crypto key into a particular keyslot of the IE hardware. All
+this is achieved through the :c:type:`struct keyslot_mgmt_ll_ops` that the
+device driver passes to the KSM when creating it.
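+
+As a rough sketch, driver-side setup could look as follows. Apart from
+:c:type:`struct keyslot_mgmt_ll_ops`, the hook, constructor, and field
+names below are assumptions rather than the final API::
+
+    /* Illustrative only: hook/constructor/field names are assumptions. */
+    static const struct keyslot_mgmt_ll_ops my_ksm_ops = {
+            .keyslot_program = my_hw_program_key, /* write a key into a slot */
+            .keyslot_evict   = my_hw_evict_key,   /* wipe a slot */
+    };
+
+    static int my_driver_init_crypto(struct my_device *dev)
+    {
+            /* Create a KSM covering the hardware's keyslots... */
+            dev->ksm = my_ksm_create(NUM_HW_KEYSLOTS, &my_ksm_ops, dev);
+            if (!dev->ksm)
+                    return -ENOMEM;
+
+            /* ...and advertise it so upper layers know IE is supported. */
+            dev->queue->ksm = dev->ksm;
+            return 0;
+    }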
+
+The KSM uses refcounts to track which keyslots are idle (either they have no
+encryption context programmed, or there are no in-flight struct bios
+referencing that keyslot). When a new encryption context needs a keyslot, it
+tries to find a keyslot that has already been programmed with the same
+encryption context, and if there is no such keyslot, it evicts the least
+recently used idle keyslot and programs the new encryption context into that
+one. If no idle keyslots are available, then the caller will sleep until there
+is at least one.
+
+
+Blk-crypto
+==========
+
+The above is sufficient for simple cases, but does not work if there is a
+need for a crypto API fallback, or if we want to use IE with layered
+devices. To these ends, we introduce blk-crypto. Blk-crypto allows us to
+present a unified view of encryption to the FS (so the FS only needs to specify
+an encryption context and not worry about keyslots at all), and blk-crypto
+can decide whether to delegate the en/decryption to IE hardware or to the
+crypto API. Blk-crypto maintains an internal KSM that serves as the crypto
+API fallback.
+
+Blk-crypto needs to ensure that the encryption context is programmed into the
+"correct" keyslot manager for IE. If a bio is submitted to a layered device
+that eventually passes the bio down to a device that really does support IE, we
+want the encryption context to be programmed into a keyslot for the KSM of the
+device with IE support. However, blk-crypto does not know a priori whether a
+particular device is the final device in the layering structure for a bio or
+not. So if a particular device does not support IE and the bio requires
+encryption (i.e. the bio is doing a write operation), blk-crypto must fall
+back to the crypto API *before* sending the bio to that device, since it
+may be the final destination device for the bio.
+
+Blk-crypto ensures that:
+
+- The bio's encryption context is programmed into a keyslot in the KSM of the
+ request queue that the bio is being submitted to (or the crypto API fallback
+ KSM if the request queue doesn't have a KSM), and that the ``bc_ksm``
+ in the ``bi_crypt_context`` is set to this KSM.
+
+- The bio has its own individual reference to the keyslot in this KSM.
+ Once the bio passes through blk-crypto, its encryption context is programmed
+ into some KSM. The "its own individual reference to the keyslot" ensures that
+ keyslots can be released by each bio independently of other bios while
+ ensuring that the bio has a valid reference to the keyslot when, e.g., the
+ crypto API fallback KSM in blk-crypto performs crypto on the device's behalf.
+ The individual references are ensured by increasing the refcount for the
+ keyslot in the ``bc_ksm`` when a bio with a programmed encryption
+ context is cloned.
+
+
+What blk-crypto does on bio submission
+--------------------------------------
+
+**Case 1:** blk-crypto is given a bio with only an encryption context that hasn't
+been programmed into any keyslot in any KSM (e.g. a bio from the FS).
+ In this case, blk-crypto will program the encryption context into the KSM of the
+ request queue the bio is being submitted to (and if this KSM does not exist,
+ then it will program it into blk-crypto's internal KSM for crypto API
+ fallback). The KSM that this encryption context was programmed into is stored
+ as the ``bc_ksm`` in the bio's ``bi_crypt_context``.
+
+**Case 2:** blk-crypto is given a bio whose encryption context has already been
+programmed into a keyslot in the *crypto API fallback* KSM.
+ In this case, blk-crypto does nothing; it treats the bio as not having
+ specified an encryption context. Note that we cannot do here what we will do
+ in Case 3 because we would have already encrypted the bio via the crypto API
+ by this point.
+
+**Case 3:** blk-crypto is given a bio whose encryption context has already been
+programmed into a keyslot in some KSM (that is *not* the crypto API fallback
+KSM).
+ In this case, blk-crypto first releases that keyslot from that KSM and then
+ treats the bio as in Case 1.
+
+This way, when a device driver is processing a bio, it can be sure that
+the bio's encryption context has been programmed into some KSM (either the
+device driver's request queue's KSM, or blk-crypto's crypto API fallback KSM).
+It then simply needs to check if the bio's ``bc_ksm`` is the device's
+request queue's KSM. If so, then it should proceed with IE. If not, it should
+simply do nothing with respect to crypto, because some other KSM (perhaps the
+blk-crypto crypto API fallback KSM) is handling the en/decryption.
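+
+A minimal sketch of that driver-side check is shown below; the exact way
+the bio's crypt context and the request queue's KSM are reached is an
+assumption, not the final API::
+
+    /* Illustrative only: field and helper names are assumptions. */
+    if (bio->bi_crypt_context &&
+        bio->bi_crypt_context->bc_ksm == q->ksm) {
+            /* Programmed into our KSM: set up IE for this request. */
+            my_driver_prepare_inline_crypto(req, bio->bi_crypt_context);
+    } else {
+            /* Some other KSM (e.g. the crypto API fallback) handles it. */
+    }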
+
+Blk-crypto will release the keyslot that is being held by the bio (and also
+decrypt the bio's data if the bio is using the crypto API fallback KSM) once
+``bio_remaining_done`` returns true for the bio.
+
+
+Layered Devices
+===============
+
+Layered devices that wish to support IE need to create their own keyslot
+manager for their request queue, and expose whatever functionality they choose.
+When a layered device wants to pass a bio to another layer (either by
+resubmitting the same bio, or by submitting a clone), it doesn't need to do
+anything special because the bio (or the clone) will once again pass through
+blk-crypto, which will work as described in Case 3. If a layered device wants,
+for some reason, to do the IO by itself instead of passing it on to a child
+device, but has also chosen to expose IE capabilities by setting up a KSM in its
+request queue, it is then responsible for en/decrypting the data itself. In
+such cases, the device can choose to call the blk-crypto function
+``blk_crypto_fallback_to_kernel_crypto_api`` (TODO: Not yet implemented), which will
+cause the en/decryption to be done via the crypto API fallback.
+
+
+Future Optimizations for layered devices
+========================================
+
+Creating a keyslot manager for the layered device uses up memory for each
+keyslot, and in general, a layered device (like dm-linear) merely passes the
+request on to a "child" device, so the keyslots in the layered device itself
+might be completely unused. We can instead define a new type of KSM, the
+"passthrough KSM", that layered devices can use to let blk-crypto know that
+this layered device *will* pass the bio to some child device (and hence
+through blk-crypto again, at which point blk-crypto can program the encryption
+context, instead of programming it into the layered device's KSM). Again, if
+the device "lies" and decides to do the IO itself instead of passing it on to
+a child device, it is responsible for doing the en/decryption (and can choose
+to call ``blk_crypto_fallback_to_kernel_crypto_api``). Another use case for the
+"passthrough KSM" is for IE devices that want to manage their own keyslots/do
+not have a limited number of keyslots.
diff --git a/Documentation/crypto/msm/msm_ice_driver.txt b/Documentation/crypto/msm/msm_ice_driver.txt
deleted file mode 100644
index 4d02c22..0000000
--- a/Documentation/crypto/msm/msm_ice_driver.txt
+++ /dev/null
@@ -1,235 +0,0 @@
-Introduction:
-=============
-Storage encryption has been one of the most required feature from security
-point of view. QTI based storage encryption solution uses general purpose
-crypto engine. While this kind of solution provide a decent amount of
-performance, it falls short as storage speed is improving significantly
-continuously. To overcome performance degradation, newer chips are going to
-have Inline Crypto Engine (ICE) embedded into storage device. ICE is supposed
-to meet the line speed of storage devices.
-
-Hardware Description
-====================
-ICE is a HW block that is embedded into storage device such as UFS/eMMC. By
-default, ICE works in bypass mode i.e. ICE HW does not perform any crypto
-operation on data to be processed by storage device. If required, ICE can be
-configured to perform crypto operation in one direction (i.e. either encryption
-or decryption) or in both direction(both encryption & decryption).
-
-When a switch between the operation modes(plain to crypto or crypto to plain)
-is desired for a particular partition, SW must complete all transactions for
-that particular partition before switching the crypto mode i.e. no crypto, one
-direction crypto or both direction crypto operation. Requests for other
-partitions are not impacted due to crypto mode switch.
-
-ICE HW currently supports AES128/256 bit ECB & XTS mode encryption algorithms.
-
-Keys for crypto operations are loaded from SW. Keys are stored in a lookup
-table(LUT) located inside ICE HW. Maximum of 32 keys can be loaded in ICE key
-LUT. A Key inside the LUT can be referred using a key index.
-
-SW Description
-==============
-ICE HW has catagorized ICE registers in 2 groups: those which can be accessed by
-only secure side i.e. TZ and those which can be accessed by non-secure side such
-as HLOS as well. This requires that ICE driver to be split in two pieces: one
-running from TZ space and another from HLOS space.
-
-ICE driver from TZ would configure keys as requested by HLOS side.
-
-ICE driver on HLOS side is responsible for initialization of ICE HW.
-
-SW Architecture Diagram
-=======================
-Following are all the components involved in the ICE driver for control path:
-
-+++++++++++++++++++++++++++++++++++++++++
-+ App layer +
-+++++++++++++++++++++++++++++++++++++++++
-+ System layer +
-+ ++++++++ +++++++ +
-+ + VOLD + + PFM + +
-+ ++++++++ +++++++ +
-+ || || +
-+ || || +
-+ \/ \/ +
-+ ++++++++++++++ +
-+ + LibQSEECom + +
-+ ++++++++++++++ +
-+++++++++++++++++++++++++++++++++++++++++
-+ Kernel + +++++++++++++++++
-+ + + KMS +
-+ +++++++ +++++++++++ +++++++++++ + +++++++++++++++++
-+ + ICE + + Storage + + QSEECom + + + ICE Driver +
-+++++++++++++++++++++++++++++++++++++++++ <===> +++++++++++++++++
- || ||
- || ||
- \/ \/
-+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
-+ Storage Device +
-+ ++++++++++++++ +
-+ + ICE HW + +
-+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
-
-Use Cases:
-----------
-a) Device bootup
-ICE HW is detected during bootup time and corresponding probe function is
-called. ICE driver parses its data from device tree node. ICE HW and storage
-HW are tightly coupled. Storage device probing is dependent upon ICE device
-probing. ICE driver configures all the required registers to put the ICE HW
-in bypass mode.
-
-b) Configuring keys
-Currently, there are couple of use cases to configure the keys.
-
-1) Full Disk Encryption(FDE)
-System layer(VOLD) at invocation of apps layer would call libqseecom to create
-the encryption key. Libqseecom calls qseecom driver to communicate with KMS
-module on the secure side i.e. TZ. KMS would call ICE driver on the TZ side to
-create and set the keys in ICE HW. At the end of transaction, VOLD would have
-key index of key LUT where encryption key is present.
-
-2) Per File Encryption (PFE)
-Per File Manager(PFM) calls QSEECom api to create the key. PFM has a peer comp-
-onent(PFT) at kernel layer which gets the corresponding key index from PFM.
-
-Following are all the components involved in the ICE driver for data path:
-
-+++++++++++++++++++++++++++++++++++++++++
-+ App layer +
-+++++++++++++++++++++++++++++++++++++++++
-+ VFS +
-+---------------------------------------+
-+ File System (EXT4) +
-+---------------------------------------+
-+ Block Layer +
-+ --------------------------------------+
-+ +++++++ +
-+ dm-req-crypt => + PFT + +
-+ +++++++ +
-+ +
-+---------------------------------------+
-+ +++++++++++ +++++++ +
-+ + Storage + + ICE + +
-+++++++++++++++++++++++++++++++++++++++++
-+ || +
-+ || (Storage Req with +
-+ \/ ICE parameters ) +
-+++++++++++++++++++++++++++++++++++++++++
-+ Storage Device +
-+ ++++++++++++++ +
-+ + ICE HW + +
-+++++++++++++++++++++++++++++++++++++++++
-
-c) Data transaction
-Once the crypto key has been configured, VOLD/PFM creates device mapping for
-data partition. As part of device mapping VOLD passes key index, crypto
-algorithm, mode and key length to dm layer. In case of PFE, keys are provided
-by PFT as and when request is processed by dm-req-crypt. When any application
-needs to read/write data, it would go through DM layer which would add crypto
-information, provided by VOLD/PFT, to Request. For each Request, Storage driver
-would ask ICE driver to configure crypto part of request. ICE driver extracts
-crypto data from Request structure and provide it to storage driver which would
-finally dispatch request to storage device.
-
-d) Error Handling
-Due to issue # 1 mentioned in "Known Issues", ICE driver does not register for
-any interrupt. However, it enables sources of interrupt for ICE HW. After each
-data transaction, Storage driver receives transaction completion event. As part
-of event handling, storage driver calls ICE driver to check if any of ICE
-interrupt status is set. If yes, storage driver returns error to upper layer.
-
-Error handling would be changed in future chips.
-
-Interfaces
-==========
-ICE driver exposes interfaces for storage driver to :
-1. Get the global instance of ICE driver
-2. Get the implemented interfaces of the particular ice instance
-3. Initialize the ICE HW
-4. Reset the ICE HW
-5. Resume/Suspend the ICE HW
-6. Get the Crypto configuration for the data request for storage
-7. Check if current data transaction has generated any interrupt
-
-Driver Parameters
-=================
-This driver is built and statically linked into the kernel; therefore,
-there are no module parameters supported by this driver.
-
-There are no kernel command line parameters supported by this driver.
-
-Power Management
-================
-ICE driver does not do power management on its own as it is part of storage
-hardware. Whenever storage driver receives request for power collapse/suspend
-resume, it would call ICE driver which exposes APIs for Storage HW. ICE HW
-during power collapse or reset, wipes crypto configuration data. When ICE
-driver receives request to resume, it would ask ICE driver on TZ side to
-restore the configuration. ICE driver does not do anything as part of power
-collapse or suspend event.
-
-Interface:
-==========
-ICE driver exposes following APIs for storage driver to use:
-
-int (*init)(struct platform_device *, void *, ice_success_cb, ice_error_cb);
- -- This function is invoked by storage controller during initialization of
- storage controller. Storage controller would provide success and error call
- backs which would be invoked asynchronously once ICE HW init is done.
-
-int (*reset)(struct platform_device *);
- -- ICE HW reset as part of storage controller reset. When storage controller
- received reset command, it would call reset on ICE HW. As of now, ICE HW
- does not need to do anything as part of reset.
-
-int (*resume)(struct platform_device *);
- -- ICE HW while going to reset, wipes all crypto keys and other data from ICE
- HW. ICE driver would reconfigure those data as part of resume operation.
-
-int (*suspend)(struct platform_device *);
- -- This API would be called by storage driver when storage device is going to
- suspend mode. As of today, ICE driver does not do anything to handle suspend.
-
-int (*config)(struct platform_device *, struct request* , struct ice_data_setting*);
- -- Storage driver would call this interface to get all crypto data required to
- perform crypto operation.
-
-int (*status)(struct platform_device *);
- -- Storage driver would call this interface to check if previous data transfer
- generated any error.
-
-Config options
-==============
-This driver is enabled by the kernel config option CONFIG_CRYPTO_DEV_MSM_ICE.
-
-Dependencies
-============
-ICE driver depends upon corresponding ICE driver on TZ side to function
-appropriately.
-
-Known Issues
-============
-1. ICE HW emits 0s even if it has generated an interrupt
-This issue has significant impact on how ICE interrupts are handled. Currently,
-ICE driver does not register for any of the ICE interrupts but enables the
-sources of interrupt. Once storage driver asks to check the status of interrupt,
-it reads and clears the clear status and provide read status to storage driver.
-This mechanism though not optimal but prevents filesystem curruption.
-This issue has been fixed in newer chips.
-
-2. ICE HW wipes all crypto data during power collapse
-This issue necessiate that ICE driver on TZ side store the crypto material
-which is not required in the case of general purpose crypto engine.
-This issue has been fixed in newer chips.
-
-Further Improvements
-====================
-Currently, Due to PFE use case, ICE driver is dependent upon dm-req-crypt to
-provide the keys as part of request structure. This couples ICE driver with
-dm-req-crypt based solution. It is under discussion to expose an IOCTL based
-and registration based interface APIs from ICE driver. ICE driver would use
-these two interfaces to find out if any key exists for current request. If
-yes, choose the right key index received from IOCTL or registration based
-APIs. If not, dont set any crypto parameter in the request.
diff --git a/Documentation/filesystems/fscrypt.rst b/Documentation/filesystems/fscrypt.rst
index 82efa41..4ed9d58 100644
--- a/Documentation/filesystems/fscrypt.rst
+++ b/Documentation/filesystems/fscrypt.rst
@@ -72,6 +72,9 @@
fscrypt (and storage encryption in general) can only provide limited
protection, if any at all, against online attacks. In detail:
+Side-channel attacks
+~~~~~~~~~~~~~~~~~~~~
+
fscrypt is only resistant to side-channel attacks, such as timing or
electromagnetic attacks, to the extent that the underlying Linux
Cryptographic API algorithms are. If a vulnerable algorithm is used,
@@ -80,29 +83,90 @@
Side channel attacks may also be mounted against applications
consuming decrypted data.
-After an encryption key has been provided, fscrypt is not designed to
-hide the plaintext file contents or filenames from other users on the
-same system, regardless of the visibility of the keyring key.
-Instead, existing access control mechanisms such as file mode bits,
-POSIX ACLs, LSMs, or mount namespaces should be used for this purpose.
-Also note that as long as the encryption keys are *anywhere* in
-memory, an online attacker can necessarily compromise them by mounting
-a physical attack or by exploiting any kernel security vulnerability
-which provides an arbitrary memory read primitive.
+Unauthorized file access
+~~~~~~~~~~~~~~~~~~~~~~~~
-While it is ostensibly possible to "evict" keys from the system,
-recently accessed encrypted files will remain accessible at least
-until the filesystem is unmounted or the VFS caches are dropped, e.g.
-using ``echo 2 > /proc/sys/vm/drop_caches``. Even after that, if the
-RAM is compromised before being powered off, it will likely still be
-possible to recover portions of the plaintext file contents, if not
-some of the encryption keys as well. (Since Linux v4.12, all
-in-kernel keys related to fscrypt are sanitized before being freed.
-However, userspace would need to do its part as well.)
+After an encryption key has been added, fscrypt does not hide the
+plaintext file contents or filenames from other users on the same
+system. Instead, existing access control mechanisms such as file mode
+bits, POSIX ACLs, LSMs, or namespaces should be used for this purpose.
-Currently, fscrypt does not prevent a user from maliciously providing
-an incorrect key for another user's existing encrypted files. A
-protection against this is planned.
+(For the reasoning behind this, understand that while the key is
+added, the confidentiality of the data, from the perspective of the
+system itself, is *not* protected by the mathematical properties of
+encryption but rather only by the correctness of the kernel.
+Therefore, any encryption-specific access control checks would merely
+be enforced by kernel *code* and therefore would be largely redundant
+with the wide variety of access control mechanisms already available.)
+
+Kernel memory compromise
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+An attacker who compromises the system enough to read from arbitrary
+memory, e.g. by mounting a physical attack or by exploiting a kernel
+security vulnerability, can compromise all encryption keys that are
+currently in use.
+
+However, fscrypt allows encryption keys to be removed from the kernel,
+which may protect them from later compromise.
+
+In more detail, the FS_IOC_REMOVE_ENCRYPTION_KEY ioctl (or the
+FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS ioctl) can wipe a master
+encryption key from kernel memory. If it does so, it will also try to
+evict all cached inodes which had been "unlocked" using the key,
+thereby wiping their per-file keys and making them once again appear
+"locked", i.e. in ciphertext or encrypted form.
+
+However, these ioctls have some limitations:
+
+- Per-file keys for in-use files will *not* be removed or wiped.
+ Therefore, for maximum effect, userspace should close the relevant
+ encrypted files and directories before removing a master key, as
+ well as kill any processes whose working directory is in an affected
+ encrypted directory.
+
+- The kernel cannot magically wipe copies of the master key(s) that
+ userspace might have as well. Therefore, userspace must wipe all
+ copies of the master key(s) it makes as well; normally this should
+ be done immediately after FS_IOC_ADD_ENCRYPTION_KEY, without waiting
+ for FS_IOC_REMOVE_ENCRYPTION_KEY. Naturally, the same also applies
+ to all higher levels in the key hierarchy. Userspace should also
+ follow other security precautions such as mlock()ing memory
+ containing keys to prevent it from being swapped out.
+
+- In general, decrypted contents and filenames in the kernel VFS
+ caches are freed but not wiped. Therefore, portions thereof may be
+ recoverable from freed memory, even after the corresponding key(s)
+ were wiped. To partially solve this, you can set
+ CONFIG_PAGE_POISONING=y in your kernel config and add page_poison=1
+ to your kernel command line. However, this has a performance cost.
+
+- Secret keys might still exist in CPU registers, in crypto
+ accelerator hardware (if used by the crypto API to implement any of
+ the algorithms), or in other places not explicitly considered here.
+
+Limitations of v1 policies
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+v1 encryption policies have some weaknesses with respect to online
+attacks:
+
+- There is no verification that the provided master key is correct.
+ Therefore, a malicious user can temporarily associate the wrong key
+ with another user's encrypted files to which they have read-only
+ access. Because of filesystem caching, the wrong key will then be
+ used by the other user's accesses to those files, even if the other
+ user has the correct key in their own keyring. This violates the
+ meaning of "read-only access".
+
+- A compromise of a per-file key also compromises the master key from
+ which it was derived.
+
+- Non-root users cannot securely remove encryption keys.
+
+All the above problems are fixed with v2 encryption policies. For
+this reason among others, it is recommended to use v2 encryption
+policies on all new encrypted directories.
Key hierarchy
=============
@@ -123,11 +187,52 @@
of which protects any number of directory trees on any number of
filesystems.
-Userspace should generate master keys either using a cryptographically
-secure random number generator, or by using a KDF (Key Derivation
-Function). Note that whenever a KDF is used to "stretch" a
-lower-entropy secret such as a passphrase, it is critical that a KDF
-designed for this purpose be used, such as scrypt, PBKDF2, or Argon2.
+Master keys must be real cryptographic keys, i.e. indistinguishable
+from random bytestrings of the same length. This implies that users
+**must not** directly use a password as a master key, zero-pad a
+shorter key, or repeat a shorter key. Security cannot be guaranteed
+if userspace makes any such error, as the cryptographic proofs and
+analysis would no longer apply.
+
+Instead, users should generate master keys either using a
+cryptographically secure random number generator, or by using a KDF
+(Key Derivation Function). The kernel does not do any key stretching;
+therefore, if userspace derives the key from a low-entropy secret such
+as a passphrase, it is critical that a KDF designed for this purpose
+be used, such as scrypt, PBKDF2, or Argon2.
+
+Key derivation function
+-----------------------
+
+With one exception, fscrypt never uses the master key(s) for
+encryption directly. Instead, they are only used as input to a KDF
+(Key Derivation Function) to derive the actual keys.
+
+The KDF used for a particular master key differs depending on whether
+the key is used for v1 encryption policies or for v2 encryption
+policies. Users **must not** use the same key for both v1 and v2
+encryption policies. (No real-world attack is currently known on this
+specific case of key reuse, but its security cannot be guaranteed
+since the cryptographic proofs and analysis would no longer apply.)
+
+For v1 encryption policies, the KDF only supports deriving per-file
+encryption keys. It works by encrypting the master key with
+AES-128-ECB, using the file's 16-byte nonce as the AES key. The
+resulting ciphertext is used as the derived key. If the ciphertext is
+longer than needed, then it is truncated to the needed length.
+
+For v2 encryption policies, the KDF is HKDF-SHA512. The master key is
+passed as the "input keying material", no salt is used, and a distinct
+"application-specific information string" is used for each distinct
+key to be derived. For example, when a per-file encryption key is
+derived, the application-specific information string is the file's
+nonce prefixed with "fscrypt\\0" and a context byte. Different
+context bytes are used for other types of derived keys.
+
+HKDF-SHA512 is preferred to the original AES-128-ECB based KDF because
+HKDF is more flexible, is nonreversible, and evenly distributes
+entropy from the master key. HKDF is also standardized and widely
+used by other software, whereas the AES-128-ECB based KDF is ad-hoc.
Per-file keys
-------------
@@ -138,29 +243,9 @@
cases, fscrypt does this by deriving per-file keys. When a new
encrypted inode (regular file, directory, or symlink) is created,
fscrypt randomly generates a 16-byte nonce and stores it in the
-inode's encryption xattr. Then, it uses a KDF (Key Derivation
-Function) to derive the file's key from the master key and nonce.
-
-The Adiantum encryption mode (see `Encryption modes and usage`_) is
-special, since it accepts longer IVs and is suitable for both contents
-and filenames encryption. For it, a "direct key" option is offered
-where the file's nonce is included in the IVs and the master key is
-used for encryption directly. This improves performance; however,
-users must not use the same master key for any other encryption mode.
-
-Below, the KDF and design considerations are described in more detail.
-
-The current KDF works by encrypting the master key with AES-128-ECB,
-using the file's nonce as the AES key. The output is used as the
-derived key. If the output is longer than needed, then it is
-truncated to the needed length.
-
-Note: this KDF meets the primary security requirement, which is to
-produce unique derived keys that preserve the entropy of the master
-key, assuming that the master key is already a good pseudorandom key.
-However, it is nonstandard and has some problems such as being
-reversible, so it is generally considered to be a mistake! It may be
-replaced with HKDF or another more standard KDF in the future.
+inode's encryption xattr. Then, it uses a KDF (as described in `Key
+derivation function`_) to derive the file's key from the master key
+and nonce.
Key derivation was chosen over key wrapping because wrapped keys would
require larger xattrs which would be less likely to fit in-line in the
@@ -171,10 +256,51 @@
the master keys may be wrapped in userspace, e.g. as is done by the
`fscrypt <https://github.com/google/fscrypt>`_ tool.
-Including the inode number in the IVs was considered. However, it was
-rejected as it would have prevented ext4 filesystems from being
-resized, and by itself still wouldn't have been sufficient to prevent
-the same key from being directly reused for both XTS and CTS-CBC.
+DIRECT_KEY policies
+-------------------
+
+The Adiantum encryption mode (see `Encryption modes and usage`_) is
+suitable for both contents and filenames encryption, and it accepts
+long IVs --- long enough to hold both an 8-byte logical block number
+and a 16-byte per-file nonce. Also, the overhead of each Adiantum key
+is greater than that of an AES-256-XTS key.
+
+Therefore, to improve performance and save memory, for Adiantum a
+"direct key" configuration is supported. When the user has enabled
+this by setting FSCRYPT_POLICY_FLAG_DIRECT_KEY in the fscrypt policy,
+per-file keys are not used. Instead, whenever any data (contents or
+filenames) is encrypted, the file's 16-byte nonce is included in the
+IV. Moreover:
+
+- For v1 encryption policies, the encryption is done directly with the
+ master key. Because of this, users **must not** use the same master
+ key for any other purpose, even for other v1 policies.
+
+- For v2 encryption policies, the encryption is done with a per-mode
+ key derived using the KDF. Users may use the same master key for
+ other v2 encryption policies.
+
+IV_INO_LBLK_64 policies
+-----------------------
+
+When FSCRYPT_POLICY_FLAG_IV_INO_LBLK_64 is set in the fscrypt policy,
+the encryption keys are derived from the master key, encryption mode
+number, and filesystem UUID. This normally results in all files
+protected by the same master key sharing a single contents encryption
+key and a single filenames encryption key. To still encrypt different
+files' data differently, inode numbers are included in the IVs.
+Consequently, shrinking the filesystem may not be allowed.
+
+This format is optimized for use with inline encryption hardware
+compliant with the UFS or eMMC standards, which support only 64 IV
+bits per I/O request and may have only a small number of keyslots.
+
+Key identifiers
+---------------
+
+For master keys used for v2 encryption policies, a unique 16-byte "key
+identifier" is also derived using the KDF. This value is stored in
+the clear, since it is needed to reliably identify the key itself.
Encryption modes and usage
==========================
@@ -192,8 +318,9 @@
AES-128-CBC was added only for low-powered embedded devices with
crypto accelerators such as CAAM or CESA that do not support XTS. To
-use AES-128-CBC, CONFIG_CRYPTO_SHA256 (or another SHA-256
-implementation) must be enabled so that ESSIV can be used.
+use AES-128-CBC, CONFIG_CRYPTO_ESSIV and CONFIG_CRYPTO_SHA256 (or
+another SHA-256 implementation) must be enabled so that ESSIV can be
+used.
Adiantum is a (primarily) stream cipher-based mode that is fast even
on CPUs without dedicated crypto instructions. It's also a true
@@ -225,10 +352,17 @@
is encrypted with AES-256 where the AES-256 key is the SHA-256 hash
of the file's data encryption key.
-- In the "direct key" configuration (FS_POLICY_FLAG_DIRECT_KEY set in
- the fscrypt_policy), the file's nonce is also appended to the IV.
+- With `DIRECT_KEY policies`_, the file's nonce is appended to the IV.
Currently this is only allowed with the Adiantum encryption mode.
+- With `IV_INO_LBLK_64 policies`_, the logical block number is limited
+ to 32 bits and is placed in bits 0-31 of the IV. The inode number
+ (which is also limited to 32 bits) is placed in bits 32-63.
+
+Note that because file logical block numbers are included in the IVs,
+filesystems must enforce that blocks are never shifted around within
+encrypted files, e.g. via "collapse range" or "insert range".
+
Filenames encryption
--------------------
@@ -237,10 +371,10 @@
filenames of up to 255 bytes, the same IV is used for every filename
in a directory.
-However, each encrypted directory still uses a unique key; or
-alternatively (for the "direct key" configuration) has the file's
-nonce included in the IVs. Thus, IV reuse is limited to within a
-single directory.
+However, each encrypted directory still uses a unique key, or
+alternatively has the file's nonce (for `DIRECT_KEY policies`_) or
+inode number (for `IV_INO_LBLK_64 policies`_) included in the IVs.
+Thus, IV reuse is limited to within a single directory.
With CTS-CBC, the IV reuse means that when the plaintext filenames
share a common prefix at least as long as the cipher block size (16
@@ -269,49 +403,80 @@
Setting an encryption policy
----------------------------
+FS_IOC_SET_ENCRYPTION_POLICY
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
The FS_IOC_SET_ENCRYPTION_POLICY ioctl sets an encryption policy on an
empty directory or verifies that a directory or regular file already
has the specified encryption policy. It takes in a pointer to a
-:c:type:`struct fscrypt_policy`, defined as follows::
+:c:type:`struct fscrypt_policy_v1` or a :c:type:`struct
+fscrypt_policy_v2`, defined as follows::
- #define FS_KEY_DESCRIPTOR_SIZE 8
-
- struct fscrypt_policy {
+ #define FSCRYPT_POLICY_V1 0
+ #define FSCRYPT_KEY_DESCRIPTOR_SIZE 8
+ struct fscrypt_policy_v1 {
__u8 version;
__u8 contents_encryption_mode;
__u8 filenames_encryption_mode;
__u8 flags;
- __u8 master_key_descriptor[FS_KEY_DESCRIPTOR_SIZE];
+ __u8 master_key_descriptor[FSCRYPT_KEY_DESCRIPTOR_SIZE];
+ };
+ #define fscrypt_policy fscrypt_policy_v1
+
+ #define FSCRYPT_POLICY_V2 2
+ #define FSCRYPT_KEY_IDENTIFIER_SIZE 16
+ struct fscrypt_policy_v2 {
+ __u8 version;
+ __u8 contents_encryption_mode;
+ __u8 filenames_encryption_mode;
+ __u8 flags;
+ __u8 __reserved[4];
+ __u8 master_key_identifier[FSCRYPT_KEY_IDENTIFIER_SIZE];
};
This structure must be initialized as follows:
-- ``version`` must be 0.
+- ``version`` must be FSCRYPT_POLICY_V1 (0) if the struct is
+ :c:type:`fscrypt_policy_v1` or FSCRYPT_POLICY_V2 (2) if the struct
+ is :c:type:`fscrypt_policy_v2`. (Note: we refer to the original
+ policy version as "v1", though its version code is really 0.) For
+ new encrypted directories, use v2 policies.
- ``contents_encryption_mode`` and ``filenames_encryption_mode`` must
- be set to constants from ``<linux/fs.h>`` which identify the
- encryption modes to use. If unsure, use
- FS_ENCRYPTION_MODE_AES_256_XTS (1) for ``contents_encryption_mode``
- and FS_ENCRYPTION_MODE_AES_256_CTS (4) for
- ``filenames_encryption_mode``.
+ be set to constants from ``<linux/fscrypt.h>`` which identify the
+ encryption modes to use. If unsure, use FSCRYPT_MODE_AES_256_XTS
+ (1) for ``contents_encryption_mode`` and FSCRYPT_MODE_AES_256_CTS
+ (4) for ``filenames_encryption_mode``.
-- ``flags`` must contain a value from ``<linux/fs.h>`` which
- identifies the amount of NUL-padding to use when encrypting
- filenames. If unsure, use FS_POLICY_FLAGS_PAD_32 (0x3).
- In addition, if the chosen encryption modes are both
- FS_ENCRYPTION_MODE_ADIANTUM, this can contain
- FS_POLICY_FLAG_DIRECT_KEY to specify that the master key should be
- used directly, without key derivation.
+- ``flags`` contains optional flags from ``<linux/fscrypt.h>``:
-- ``master_key_descriptor`` specifies how to find the master key in
- the keyring; see `Adding keys`_. It is up to userspace to choose a
- unique ``master_key_descriptor`` for each master key. The e4crypt
- and fscrypt tools use the first 8 bytes of
+ - FSCRYPT_POLICY_FLAGS_PAD_*: The amount of NUL padding to use when
+ encrypting filenames. If unsure, use FSCRYPT_POLICY_FLAGS_PAD_32
+ (0x3).
+ - FSCRYPT_POLICY_FLAG_DIRECT_KEY: See `DIRECT_KEY policies`_.
+ - FSCRYPT_POLICY_FLAG_IV_INO_LBLK_64: See `IV_INO_LBLK_64
+ policies`_. This is mutually exclusive with DIRECT_KEY and is not
+ supported on v1 policies.
+
+- For v2 encryption policies, ``__reserved`` must be zeroed.
+
+- For v1 encryption policies, ``master_key_descriptor`` specifies how
+ to find the master key in a keyring; see `Adding keys`_. It is up
+ to userspace to choose a unique ``master_key_descriptor`` for each
+ master key. The e4crypt and fscrypt tools use the first 8 bytes of
``SHA-512(SHA-512(master_key))``, but this particular scheme is not
required. Also, the master key need not be in the keyring yet when
FS_IOC_SET_ENCRYPTION_POLICY is executed. However, it must be added
before any files can be created in the encrypted directory.
+ For v2 encryption policies, ``master_key_descriptor`` has been
+ replaced with ``master_key_identifier``, which is longer and cannot
+ be arbitrarily chosen. Instead, the key must first be added using
+ `FS_IOC_ADD_ENCRYPTION_KEY`_. Then, the ``key_spec.u.identifier``
+ the kernel returned in the :c:type:`struct fscrypt_add_key_arg` must
+ be used as the ``master_key_identifier`` in the :c:type:`struct
+ fscrypt_policy_v2`.
+
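+For example, a sketch of assigning a v2 encryption policy to an empty
+directory, where ``identifier`` is assumed to already hold the key
+identifier returned by `FS_IOC_ADD_ENCRYPTION_KEY`_ and ``dir_fd`` is an
+open file descriptor on the directory::
+
+    #include <string.h>
+    #include <sys/ioctl.h>
+    #include <linux/fscrypt.h>
+
+    struct fscrypt_policy_v2 policy = { 0 };
+
+    policy.version = FSCRYPT_POLICY_V2;
+    policy.contents_encryption_mode = FSCRYPT_MODE_AES_256_XTS;
+    policy.filenames_encryption_mode = FSCRYPT_MODE_AES_256_CTS;
+    policy.flags = FSCRYPT_POLICY_FLAGS_PAD_32;
+    memcpy(policy.master_key_identifier, identifier,
+           FSCRYPT_KEY_IDENTIFIER_SIZE);
+
+    if (ioctl(dir_fd, FS_IOC_SET_ENCRYPTION_POLICY, &policy) != 0)
+            /* e.g. ENOKEY if the key was not added first */;
+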
If the file is not yet encrypted, then FS_IOC_SET_ENCRYPTION_POLICY
verifies that the file is an empty directory. If so, the specified
encryption policy is assigned to the directory, turning it into an
@@ -327,6 +492,15 @@
returns 0. Otherwise, it fails with EEXIST. This works on both
regular files and directories, including nonempty directories.
+When a v2 encryption policy is assigned to a directory, it is also
+required that either the specified key has been added by the current
+user or that the caller has CAP_FOWNER in the initial user namespace.
+(This is needed to prevent a user from encrypting their data with
+another user's key.) The key must remain added while
+FS_IOC_SET_ENCRYPTION_POLICY is executing. However, if the new
+encrypted directory does not need to be accessed immediately, then the
+key can be removed right away afterwards.
+
Note that the ext4 filesystem does not allow the root directory to be
encrypted, even if it is empty. Users who want to encrypt an entire
filesystem with one key should consider using dm-crypt instead.
@@ -339,7 +513,11 @@
- ``EEXIST``: the file is already encrypted with an encryption policy
different from the one specified
- ``EINVAL``: an invalid encryption policy was specified (invalid
- version, mode(s), or flags)
+ version, mode(s), or flags; or reserved bits were set)
+- ``ENOKEY``: a v2 encryption policy was specified, but the key with
+ the specified ``master_key_identifier`` has not been added, nor does
+ the process have the CAP_FOWNER capability in the initial user
+ namespace
- ``ENOTDIR``: the file is unencrypted and is a regular file, not a
directory
- ``ENOTEMPTY``: the file is unencrypted and is a nonempty directory
@@ -358,25 +536,79 @@
Getting an encryption policy
----------------------------
-The FS_IOC_GET_ENCRYPTION_POLICY ioctl retrieves the :c:type:`struct
-fscrypt_policy`, if any, for a directory or regular file. See above
-for the struct definition. No additional permissions are required
-beyond the ability to open the file.
+Two ioctls are available to get a file's encryption policy:
-FS_IOC_GET_ENCRYPTION_POLICY can fail with the following errors:
+- `FS_IOC_GET_ENCRYPTION_POLICY_EX`_
+- `FS_IOC_GET_ENCRYPTION_POLICY`_
+
+The extended (_EX) version of the ioctl is more general and is
+recommended to use when possible. However, on older kernels only the
+original ioctl is available. Applications should try the extended
+version, and if it fails with ENOTTY fall back to the original
+version.
+
+FS_IOC_GET_ENCRYPTION_POLICY_EX
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The FS_IOC_GET_ENCRYPTION_POLICY_EX ioctl retrieves the encryption
+policy, if any, for a directory or regular file. No additional
+permissions are required beyond the ability to open the file. It
+takes in a pointer to a :c:type:`struct fscrypt_get_policy_ex_arg`,
+defined as follows::
+
+    struct fscrypt_get_policy_ex_arg {
+            __u64 policy_size; /* input/output */
+            union {
+                    __u8 version;
+                    struct fscrypt_policy_v1 v1;
+                    struct fscrypt_policy_v2 v2;
+            } policy; /* output */
+    };
+
+The caller must initialize ``policy_size`` to the size available for
+the policy struct, i.e. ``sizeof(arg.policy)``.
+
+On success, the policy struct is returned in ``policy``, and its
+actual size is returned in ``policy_size``. ``policy.version`` should
+be checked to determine the version of policy returned. Note that the
+version code for the "v1" policy is actually 0 (FSCRYPT_POLICY_V1).
+
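+The following sketch shows the recommended pattern of trying the
+extended ioctl first and falling back to the original ioctl on older
+kernels, where ``fd`` is assumed to be an open file descriptor on the
+file of interest (error handling condensed)::
+
+    #include <errno.h>
+    #include <sys/ioctl.h>
+    #include <linux/fscrypt.h>
+
+    struct fscrypt_get_policy_ex_arg arg = {
+            .policy_size = sizeof(arg.policy),
+    };
+
+    if (ioctl(fd, FS_IOC_GET_ENCRYPTION_POLICY_EX, &arg) == 0) {
+            /* Check arg.policy.version to pick the union member to read. */
+    } else if (errno == ENOTTY) {
+            /* Old kernel: only v1 policies can be retrieved this way. */
+            struct fscrypt_policy_v1 policy;
+
+            if (ioctl(fd, FS_IOC_GET_ENCRYPTION_POLICY, &policy) == 0)
+                    /* ... */;
+    }
+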
+FS_IOC_GET_ENCRYPTION_POLICY_EX can fail with the following errors:
- ``EINVAL``: the file is encrypted, but it uses an unrecognized
- encryption context format
+ encryption policy version
- ``ENODATA``: the file is not encrypted
-- ``ENOTTY``: this type of filesystem does not implement encryption
+- ``ENOTTY``: this type of filesystem does not implement encryption,
+ or this kernel is too old to support FS_IOC_GET_ENCRYPTION_POLICY_EX
+ (try FS_IOC_GET_ENCRYPTION_POLICY instead)
- ``EOPNOTSUPP``: the kernel was not configured with encryption
- support for this filesystem
+ support for this filesystem, or the filesystem superblock has not
+ had encryption enabled on it
+- ``EOVERFLOW``: the file is encrypted and uses a recognized
+ encryption policy version, but the policy struct does not fit into
+ the provided buffer
Note: if you only need to know whether a file is encrypted or not, on
most filesystems it is also possible to use the FS_IOC_GETFLAGS ioctl
and check for FS_ENCRYPT_FL, or to use the statx() system call and
check for STATX_ATTR_ENCRYPTED in stx_attributes.
+FS_IOC_GET_ENCRYPTION_POLICY
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The FS_IOC_GET_ENCRYPTION_POLICY ioctl can also retrieve the
+encryption policy, if any, for a directory or regular file. However,
+unlike `FS_IOC_GET_ENCRYPTION_POLICY_EX`_,
+FS_IOC_GET_ENCRYPTION_POLICY only supports the original policy
+version. It takes in a pointer directly to a :c:type:`struct
+fscrypt_policy_v1` rather than a :c:type:`struct
+fscrypt_get_policy_ex_arg`.
+
+The error codes for FS_IOC_GET_ENCRYPTION_POLICY are the same as those
+for FS_IOC_GET_ENCRYPTION_POLICY_EX, except that
+FS_IOC_GET_ENCRYPTION_POLICY also returns ``EINVAL`` if the file is
+encrypted using a newer encryption policy version.
+
Getting the per-filesystem salt
-------------------------------
@@ -392,8 +624,144 @@
Adding keys
-----------
-To provide a master key, userspace must add it to an appropriate
-keyring using the add_key() system call (see:
+FS_IOC_ADD_ENCRYPTION_KEY
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The FS_IOC_ADD_ENCRYPTION_KEY ioctl adds a master encryption key to
+the filesystem, making all files on the filesystem which were
+encrypted using that key appear "unlocked", i.e. in plaintext form.
+It can be executed on any file or directory on the target filesystem,
+but using the filesystem's root directory is recommended. It takes in
+a pointer to a :c:type:`struct fscrypt_add_key_arg`, defined as
+follows::
+
+    struct fscrypt_add_key_arg {
+            struct fscrypt_key_specifier key_spec;
+            __u32 raw_size;
+            __u32 key_id;
+            __u32 __reserved[8];
+            __u8 raw[];
+    };
+
+    #define FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR        1
+    #define FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER        2
+
+    struct fscrypt_key_specifier {
+            __u32 type;     /* one of FSCRYPT_KEY_SPEC_TYPE_* */
+            __u32 __reserved;
+            union {
+                    __u8 __reserved[32]; /* reserve some extra space */
+                    __u8 descriptor[FSCRYPT_KEY_DESCRIPTOR_SIZE];
+                    __u8 identifier[FSCRYPT_KEY_IDENTIFIER_SIZE];
+            } u;
+    };
+
+    struct fscrypt_provisioning_key_payload {
+            __u32 type;
+            __u32 __reserved;
+            __u8 raw[];
+    };
+
+:c:type:`struct fscrypt_add_key_arg` must be zeroed, then initialized
+as follows:
+
+- If the key is being added for use by v1 encryption policies, then
+ ``key_spec.type`` must contain FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR, and
+ ``key_spec.u.descriptor`` must contain the descriptor of the key
+ being added, corresponding to the value in the
+ ``master_key_descriptor`` field of :c:type:`struct
+ fscrypt_policy_v1`. To add this type of key, the calling process
+ must have the CAP_SYS_ADMIN capability in the initial user
+ namespace.
+
+ Alternatively, if the key is being added for use by v2 encryption
+ policies, then ``key_spec.type`` must contain
+ FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER, and ``key_spec.u.identifier`` is
+ an *output* field which the kernel fills in with a cryptographic
+ hash of the key. To add this type of key, the calling process does
+ not need any privileges. However, the number of keys that can be
+ added is limited by the user's quota for the keyrings service (see
+ ``Documentation/security/keys/core.rst``).
+
+- ``raw_size`` must be the size of the ``raw`` key provided, in bytes.
+ Alternatively, if ``key_id`` is nonzero, this field must be 0, since
+ in that case the size is implied by the specified Linux keyring key.
+
+- ``key_id`` is 0 if the raw key is given directly in the ``raw``
+ field. Otherwise ``key_id`` is the ID of a Linux keyring key of
+ type "fscrypt-provisioning" whose payload is a :c:type:`struct
+ fscrypt_provisioning_key_payload` whose ``raw`` field contains the
+ raw key and whose ``type`` field matches ``key_spec.type``. Since
+ ``raw`` is variable-length, the total size of this key's payload
+ must be ``sizeof(struct fscrypt_provisioning_key_payload)`` plus the
+ raw key size. The process must have Search permission on this key.
+
+ Most users should leave this 0 and specify the raw key directly.
+ The support for specifying a Linux keyring key is intended mainly to
+ allow re-adding keys after a filesystem is unmounted and re-mounted,
+ without having to store the raw keys in userspace memory.
+
+- ``raw`` is a variable-length field which must contain the actual
+ key, ``raw_size`` bytes long. Alternatively, if ``key_id`` is
+ nonzero, then this field is unused.
+
+For v2 policy keys, the kernel keeps track of which user (identified
+by effective user ID) added the key, and only allows the key to be
+removed by that user --- or by "root", if they use
+`FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS`_.
+
+However, if another user has added the key, it may be desirable to
+prevent that other user from unexpectedly removing it. Therefore,
+FS_IOC_ADD_ENCRYPTION_KEY may also be used to add a v2 policy key
+*again*, even if it's already been added by other user(s). In this case,
+FS_IOC_ADD_ENCRYPTION_KEY will just install a claim to the key for the
+current user, rather than actually add the key again (but the raw key
+must still be provided, as a proof of knowledge).
+
+FS_IOC_ADD_ENCRYPTION_KEY returns 0 if either the key or a claim to
+the key was added or already exists.
+
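+For example, a sketch of adding a key for use by v2 policies, where
+``key`` and ``key_size`` are assumed to hold the raw key, and
+``fs_root_fd`` is an open file descriptor on the filesystem's root
+directory (error checking omitted)::
+
+    #include <stdlib.h>
+    #include <string.h>
+    #include <sys/ioctl.h>
+    #include <linux/fscrypt.h>
+
+    struct fscrypt_add_key_arg *arg;
+
+    /* Allocate the fixed-size header plus room for the raw key. */
+    arg = calloc(1, sizeof(*arg) + key_size);
+    arg->key_spec.type = FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER;
+    arg->raw_size = key_size;
+    memcpy(arg->raw, key, key_size);
+
+    if (ioctl(fs_root_fd, FS_IOC_ADD_ENCRYPTION_KEY, arg) == 0)
+            /* arg->key_spec.u.identifier now holds the key identifier. */;
+
+    /* Wipe the in-memory copy of the key, as recommended above. */
+    memset(arg, 0, sizeof(*arg) + key_size);
+    free(arg);
+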
+FS_IOC_ADD_ENCRYPTION_KEY can fail with the following errors:
+
+- ``EACCES``: FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR was specified, but the
+ caller does not have the CAP_SYS_ADMIN capability in the initial
+ user namespace; or the raw key was specified by Linux key ID but the
+ process lacks Search permission on the key.
+- ``EDQUOT``: the key quota for this user would be exceeded by adding
+ the key
+- ``EINVAL``: invalid key size or key specifier type, or reserved bits
+ were set
+- ``EKEYREJECTED``: the raw key was specified by Linux key ID, but the
+ key has the wrong type
+- ``ENOKEY``: the raw key was specified by Linux key ID, but no key
+ exists with that ID
+- ``ENOTTY``: this type of filesystem does not implement encryption
+- ``EOPNOTSUPP``: the kernel was not configured with encryption
+ support for this filesystem, or the filesystem superblock has not
+ had encryption enabled on it
+
+Legacy method
+~~~~~~~~~~~~~
+
+For v1 encryption policies, a master encryption key can also be
+provided by adding it to a process-subscribed keyring, e.g. to a
+session keyring, or to a user keyring if the user keyring is linked
+into the session keyring.
+
+This method is deprecated (and not supported for v2 encryption
+policies) for several reasons. First, it cannot be used in
+combination with FS_IOC_REMOVE_ENCRYPTION_KEY (see `Removing keys`_),
+so for removing a key a workaround such as keyctl_unlink() in
+combination with ``sync; echo 2 > /proc/sys/vm/drop_caches`` would
+have to be used. Second, it doesn't match the fact that the
+locked/unlocked status of encrypted files (i.e. whether they appear to
+be in plaintext form or in ciphertext form) is global. This mismatch
+has caused much confusion as well as real problems when processes
+running under different UIDs, such as a ``sudo`` command, need to
+access encrypted files.
+
+Nevertheless, to add a key to one of the process-subscribed keyrings,
+the add_key() system call can be used (see:
``Documentation/security/keys/core.rst``). The key type must be
"logon"; keys of this type are kept in kernel memory and cannot be
read back by userspace. The key description must be "fscrypt:"
@@ -401,12 +769,12 @@
``master_key_descriptor`` that was set in the encryption policy. The
key payload must conform to the following structure::
- #define FS_MAX_KEY_SIZE 64
+ #define FSCRYPT_MAX_KEY_SIZE 64
struct fscrypt_key {
- u32 mode;
- u8 raw[FS_MAX_KEY_SIZE];
- u32 size;
+ __u32 mode;
+ __u8 raw[FSCRYPT_MAX_KEY_SIZE];
+ __u32 size;
};
``mode`` is ignored; just set it to 0. The actual key is provided in
@@ -418,26 +786,194 @@
filesystem-specific prefixes are deprecated and should not be used in
new programs.
-There are several different types of keyrings in which encryption keys
-may be placed, such as a session keyring, a user session keyring, or a
-user keyring. Each key must be placed in a keyring that is "attached"
-to all processes that might need to access files encrypted with it, in
-the sense that request_key() will find the key. Generally, if only
-processes belonging to a specific user need to access a given
-encrypted directory and no session keyring has been installed, then
-that directory's key should be placed in that user's user session
-keyring or user keyring. Otherwise, a session keyring should be
-installed if needed, and the key should be linked into that session
-keyring, or in a keyring linked into that session keyring.
+Removing keys
+-------------
-Note: introducing the complex visibility semantics of keyrings here
-was arguably a mistake --- especially given that by design, after any
-process successfully opens an encrypted file (thereby setting up the
-per-file key), possessing the keyring key is not actually required for
-any process to read/write the file until its in-memory inode is
-evicted. In the future there probably should be a way to provide keys
-directly to the filesystem instead, which would make the intended
-semantics clearer.
+Two ioctls are available for removing a key that was added by
+`FS_IOC_ADD_ENCRYPTION_KEY`_:
+
+- `FS_IOC_REMOVE_ENCRYPTION_KEY`_
+- `FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS`_
+
+These two ioctls differ only in cases where v2 policy keys are added
+or removed by non-root users.
+
+These ioctls don't work on keys that were added via the legacy
+process-subscribed keyrings mechanism.
+
+Before using these ioctls, read the `Kernel memory compromise`_
+section for a discussion of the security goals and limitations of
+these ioctls.
+
+FS_IOC_REMOVE_ENCRYPTION_KEY
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The FS_IOC_REMOVE_ENCRYPTION_KEY ioctl removes a claim to a master
+encryption key from the filesystem, and possibly removes the key
+itself. It can be executed on any file or directory on the target
+filesystem, but using the filesystem's root directory is recommended.
+It takes in a pointer to a :c:type:`struct fscrypt_remove_key_arg`,
+defined as follows::
+
+    struct fscrypt_remove_key_arg {
+            struct fscrypt_key_specifier key_spec;
+    #define FSCRYPT_KEY_REMOVAL_STATUS_FLAG_FILES_BUSY      0x00000001
+    #define FSCRYPT_KEY_REMOVAL_STATUS_FLAG_OTHER_USERS     0x00000002
+            __u32 removal_status_flags;     /* output */
+            __u32 __reserved[5];
+    };
+
+This structure must be zeroed, then initialized as follows:
+
+- The key to remove is specified by ``key_spec``:
+
+ - To remove a key used by v1 encryption policies, set
+ ``key_spec.type`` to FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR and fill
+ in ``key_spec.u.descriptor``. To remove this type of key, the
+ calling process must have the CAP_SYS_ADMIN capability in the
+ initial user namespace.
+
+ - To remove a key used by v2 encryption policies, set
+ ``key_spec.type`` to FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER and fill
+ in ``key_spec.u.identifier``.
+
+For v2 policy keys, this ioctl is usable by non-root users. However,
+to make this possible, it actually just removes the current user's
+claim to the key, undoing a single call to FS_IOC_ADD_ENCRYPTION_KEY.
+Only after all claims are removed is the key really removed.
+
+For example, if FS_IOC_ADD_ENCRYPTION_KEY was called with uid 1000,
+then the key will be "claimed" by uid 1000, and
+FS_IOC_REMOVE_ENCRYPTION_KEY will only succeed as uid 1000. Or, if
+both uids 1000 and 2000 added the key, then for each uid
+FS_IOC_REMOVE_ENCRYPTION_KEY will only remove their own claim. Only
+once *both* are removed is the key really removed. (Think of it like
+unlinking a file that may have hard links.)
+
+If FS_IOC_REMOVE_ENCRYPTION_KEY really removes the key, it will also
+try to "lock" all files that had been unlocked with the key. It won't
+lock files that are still in-use, so this ioctl is expected to be used
+in cooperation with userspace ensuring that none of the files are
+still open. However, if necessary, this ioctl can be executed again
+later to retry locking any remaining files.
+
+FS_IOC_REMOVE_ENCRYPTION_KEY returns 0 if either the key was removed
+(but may still have files remaining to be locked), the user's claim to
+the key was removed, or the key was already removed but had files
+remaining to be locked so the ioctl retried locking them. In any
+of these cases, ``removal_status_flags`` is filled in with the
+following informational status flags:
+
+- ``FSCRYPT_KEY_REMOVAL_STATUS_FLAG_FILES_BUSY``: set if some file(s)
+ are still in-use. Not guaranteed to be set in the case where only
+ the user's claim to the key was removed.
+- ``FSCRYPT_KEY_REMOVAL_STATUS_FLAG_OTHER_USERS``: set if only the
+  user's claim to the key was removed, not the key itself.
+
+FS_IOC_REMOVE_ENCRYPTION_KEY can fail with the following errors:
+
+- ``EACCES``: The FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR key specifier type
+ was specified, but the caller does not have the CAP_SYS_ADMIN
+ capability in the initial user namespace
+- ``EINVAL``: invalid key specifier type, or reserved bits were set
+- ``ENOKEY``: the key object was not found at all, i.e. it was never
+ added in the first place or was already fully removed including all
+ files locked; or, the user does not have a claim to the key (but
+ someone else does).
+- ``ENOTTY``: this type of filesystem does not implement encryption
+- ``EOPNOTSUPP``: the kernel was not configured with encryption
+ support for this filesystem, or the filesystem superblock has not
+ had encryption enabled on it
+
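+As a rough userspace sketch (error handling is minimal, and the root
+directory file descriptor and the 16-byte key identifier are assumed
+to be obtained by the caller), removing the current user's claim to a
+v2 policy key could look like::
+
+    #include <string.h>
+    #include <sys/ioctl.h>
+    #include <linux/fscrypt.h>
+
+    /* Minimal sketch: drop this user's claim to a v2 policy key. */
+    int remove_key(int root_fd,
+                   const __u8 identifier[FSCRYPT_KEY_IDENTIFIER_SIZE])
+    {
+            struct fscrypt_remove_key_arg arg;
+
+            memset(&arg, 0, sizeof(arg));
+            arg.key_spec.type = FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER;
+            memcpy(arg.key_spec.u.identifier, identifier,
+                   FSCRYPT_KEY_IDENTIFIER_SIZE);
+
+            if (ioctl(root_fd, FS_IOC_REMOVE_ENCRYPTION_KEY, &arg) != 0)
+                    return -1;
+
+            /* e.g. FSCRYPT_KEY_REMOVAL_STATUS_FLAG_FILES_BUSY */
+            return (int)arg.removal_status_flags;
+    }
+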
+FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS is exactly the same as
+`FS_IOC_REMOVE_ENCRYPTION_KEY`_, except that for v2 policy keys, the
+ALL_USERS version of the ioctl will remove all users' claims to the
+key, not just the current user's. I.e., the key itself will always be
+removed, no matter how many users have added it. This difference is
+only meaningful if non-root users are adding and removing keys.
+
+Because of this, FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS also requires
+"root", namely the CAP_SYS_ADMIN capability in the initial user
+namespace. Otherwise it will fail with EACCES.
+
+Getting key status
+------------------
+
+FS_IOC_GET_ENCRYPTION_KEY_STATUS
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The FS_IOC_GET_ENCRYPTION_KEY_STATUS ioctl retrieves the status of a
+master encryption key. It can be executed on any file or directory on
+the target filesystem, but using the filesystem's root directory is
+recommended. It takes in a pointer to a :c:type:`struct
+fscrypt_get_key_status_arg`, defined as follows::
+
+ struct fscrypt_get_key_status_arg {
+ /* input */
+ struct fscrypt_key_specifier key_spec;
+ __u32 __reserved[6];
+
+ /* output */
+ #define FSCRYPT_KEY_STATUS_ABSENT 1
+ #define FSCRYPT_KEY_STATUS_PRESENT 2
+ #define FSCRYPT_KEY_STATUS_INCOMPLETELY_REMOVED 3
+ __u32 status;
+ #define FSCRYPT_KEY_STATUS_FLAG_ADDED_BY_SELF 0x00000001
+ __u32 status_flags;
+ __u32 user_count;
+ __u32 __out_reserved[13];
+ };
+
+The caller must zero all input fields, then fill in ``key_spec``:
+
+ - To get the status of a key for v1 encryption policies, set
+ ``key_spec.type`` to FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR and fill
+ in ``key_spec.u.descriptor``.
+
+ - To get the status of a key for v2 encryption policies, set
+ ``key_spec.type`` to FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER and fill
+ in ``key_spec.u.identifier``.
+
+On success, 0 is returned and the kernel fills in the output fields:
+
+- ``status`` indicates whether the key is absent, present, or
+ incompletely removed. Incompletely removed means that the master
+ secret has been removed, but some files are still in use; i.e.,
+ `FS_IOC_REMOVE_ENCRYPTION_KEY`_ returned 0 but set the informational
+ status flag FSCRYPT_KEY_REMOVAL_STATUS_FLAG_FILES_BUSY.
+
+- ``status_flags`` can contain the following flags:
+
+ - ``FSCRYPT_KEY_STATUS_FLAG_ADDED_BY_SELF`` indicates that the key
+    has been added by the current user. This is only set for keys
+ identified by ``identifier`` rather than by ``descriptor``.
+
+- ``user_count`` specifies the number of users who have added the key.
+ This is only set for keys identified by ``identifier`` rather than
+ by ``descriptor``.
+
+FS_IOC_GET_ENCRYPTION_KEY_STATUS can fail with the following errors:
+
+- ``EINVAL``: invalid key specifier type, or reserved bits were set
+- ``ENOTTY``: this type of filesystem does not implement encryption
+- ``EOPNOTSUPP``: the kernel was not configured with encryption
+ support for this filesystem, or the filesystem superblock has not
+ had encryption enabled on it
+
+Among other use cases, FS_IOC_GET_ENCRYPTION_KEY_STATUS can be useful
+for determining whether the key for a given encrypted directory needs
+to be added before prompting the user for the passphrase needed to
+derive the key.
+
+FS_IOC_GET_ENCRYPTION_KEY_STATUS can only get the status of keys in
+the filesystem-level keyring, i.e. the keyring managed by
+`FS_IOC_ADD_ENCRYPTION_KEY`_ and `FS_IOC_REMOVE_ENCRYPTION_KEY`_. It
+cannot get the status of a key that has only been added for use by v1
+encryption policies using the legacy mechanism involving
+process-subscribed keyrings.
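+
+As a rough userspace sketch (the root directory file descriptor and
+the key identifier are assumed to be obtained by the caller), checking
+whether a v2 policy key is present before prompting for a passphrase
+could look like::
+
+    #include <string.h>
+    #include <sys/ioctl.h>
+    #include <linux/fscrypt.h>
+
+    /* Minimal sketch: report whether a v2 policy key is present. */
+    int key_is_present(int root_fd,
+                       const __u8 identifier[FSCRYPT_KEY_IDENTIFIER_SIZE])
+    {
+            struct fscrypt_get_key_status_arg arg;
+
+            memset(&arg, 0, sizeof(arg));
+            arg.key_spec.type = FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER;
+            memcpy(arg.key_spec.u.identifier, identifier,
+                   FSCRYPT_KEY_IDENTIFIER_SIZE);
+
+            if (ioctl(root_fd, FS_IOC_GET_ENCRYPTION_KEY_STATUS, &arg) != 0)
+                    return -1;
+
+            return arg.status == FSCRYPT_KEY_STATUS_PRESENT;
+    }
+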
Access semantics
================
@@ -500,7 +1036,7 @@
Some filesystem operations may be performed on encrypted regular
files, directories, and symlinks even before their encryption key has
-been provided:
+been added, or after their encryption key has been removed:
- File metadata may be read, e.g. using stat().
@@ -565,33 +1101,44 @@
------------------
An encryption policy is represented on-disk by a :c:type:`struct
-fscrypt_context`. It is up to individual filesystems to decide where
-to store it, but normally it would be stored in a hidden extended
-attribute. It should *not* be exposed by the xattr-related system
-calls such as getxattr() and setxattr() because of the special
-semantics of the encryption xattr. (In particular, there would be
-much confusion if an encryption policy were to be added to or removed
-from anything other than an empty directory.) The struct is defined
-as follows::
+fscrypt_context_v1` or a :c:type:`struct fscrypt_context_v2`. It is
+up to individual filesystems to decide where to store it, but normally
+it would be stored in a hidden extended attribute. It should *not* be
+exposed by the xattr-related system calls such as getxattr() and
+setxattr() because of the special semantics of the encryption xattr.
+(In particular, there would be much confusion if an encryption policy
+were to be added to or removed from anything other than an empty
+directory.) These structs are defined as follows::
- #define FS_KEY_DESCRIPTOR_SIZE 8
#define FS_KEY_DERIVATION_NONCE_SIZE 16
- struct fscrypt_context {
- u8 format;
+ #define FSCRYPT_KEY_DESCRIPTOR_SIZE 8
+ struct fscrypt_context_v1 {
+ u8 version;
u8 contents_encryption_mode;
u8 filenames_encryption_mode;
u8 flags;
- u8 master_key_descriptor[FS_KEY_DESCRIPTOR_SIZE];
+ u8 master_key_descriptor[FSCRYPT_KEY_DESCRIPTOR_SIZE];
u8 nonce[FS_KEY_DERIVATION_NONCE_SIZE];
};
-Note that :c:type:`struct fscrypt_context` contains the same
-information as :c:type:`struct fscrypt_policy` (see `Setting an
-encryption policy`_), except that :c:type:`struct fscrypt_context`
-also contains a nonce. The nonce is randomly generated by the kernel
-and is used to derive the inode's encryption key as described in
-`Per-file keys`_.
+ #define FSCRYPT_KEY_IDENTIFIER_SIZE 16
+ struct fscrypt_context_v2 {
+ u8 version;
+ u8 contents_encryption_mode;
+ u8 filenames_encryption_mode;
+ u8 flags;
+ u8 __reserved[4];
+ u8 master_key_identifier[FSCRYPT_KEY_IDENTIFIER_SIZE];
+ u8 nonce[FS_KEY_DERIVATION_NONCE_SIZE];
+ };
+
+The context structs contain the same information as the corresponding
+policy structs (see `Setting an encryption policy`_), except that the
+context structs also contain a nonce. The nonce is randomly generated
+by the kernel and is used as KDF input or as a tweak to cause
+different files to be encrypted differently; see `Per-file keys`_ and
+`DIRECT_KEY policies`_.
Data path changes
-----------------
diff --git a/Documentation/sysctl/vm.txt b/Documentation/sysctl/vm.txt
index d22c468..b04c20d 100644
--- a/Documentation/sysctl/vm.txt
+++ b/Documentation/sysctl/vm.txt
@@ -33,6 +33,7 @@
- extfrag_threshold
- extra_free_kbytes
- hugetlb_shm_group
+- kswapd_threads
- laptop_mode
- legacy_va_layout
- lowmem_reserve_ratio
@@ -300,6 +301,28 @@
==============================================================
+kswapd_threads
+
+kswapd_threads allows you to control the number of kswapd threads per node
+running on the system. This provides the ability to devote additional CPU
+resources toward proactive page replacement with the goal of reducing
+direct reclaims. When direct reclaims are avoided, the CPU time they
+would have consumed is avoided as well. Depending on the workload,
+aggregate CPU usage on the system can go up, down, or stay the same.
+
+More aggressive page replacement can reduce direct reclaims, which cause
+latency for tasks and decrease throughput when doing filesystem IO through
+the pagecache. Direct reclaims are recorded using the allocstall counter
+in /proc/vmstat.
+
+The default value is 1 and the range of acceptable values is 1-16.
+Always start with lower values in the 2-6 range. Higher values should
+be justified with testing. If direct reclaims still occur in spite of
+high values, their latency cost can be higher due to increased lock
+contention.
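+
+As a rough usage sketch (the value 4 is only an example), the knob can
+be set at runtime by writing to /proc/sys/vm/kswapd_threads, e.g. from
+a small C helper:
+
+    #include <stdio.h>
+
+    /* Minimal sketch: set the number of kswapd threads per node to 4. */
+    int set_kswapd_threads(void)
+    {
+            FILE *f = fopen("/proc/sys/vm/kswapd_threads", "w");
+
+            if (!f)
+                    return -1;
+            fprintf(f, "4\n");
+            return fclose(f);
+    }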
+
+==============================================================
+
laptop_mode
laptop_mode is a knob that controls "laptop mode". All the things that are
diff --git a/MAINTAINERS b/MAINTAINERS
index 14930c2..7bd11ad 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -6013,6 +6013,7 @@
S: Supported
F: fs/crypto/
F: include/linux/fscrypt*.h
+F: include/uapi/linux/fscrypt.h
F: Documentation/filesystems/fscrypt.rst
FSI-ATTACHED I2C DRIVER
diff --git a/Makefile b/Makefile
index 6b10fe4..cb967bd 100644
--- a/Makefile
+++ b/Makefile
@@ -503,6 +503,7 @@
CLANG_FLAGS += $(call cc-option, -Wno-misleading-indentation)
CLANG_FLAGS += $(call cc-option, -Wno-bool-operation)
CLANG_FLAGS += -Werror=unknown-warning-option
+CLANG_FLAGS += $(call cc-option, -Wno-unsequenced)
KBUILD_CFLAGS += $(CLANG_FLAGS)
KBUILD_AFLAGS += $(CLANG_FLAGS)
export CLANG_FLAGS
@@ -738,7 +739,6 @@
KBUILD_CFLAGS += $(call cc-disable-warning, duplicate-decl-specifier)
KBUILD_CFLAGS += -Wno-asm-operand-widths
KBUILD_CFLAGS += -Wno-initializer-overrides
-KBUILD_CFLAGS += -fno-builtin
KBUILD_CFLAGS += $(call cc-option, -Wno-undefined-optimized)
KBUILD_CFLAGS += $(call cc-option, -Wno-tautological-constant-out-of-range-compare)
KBUILD_CFLAGS += $(call cc-option, -mllvm -disable-struct-const-merge)
diff --git a/arch/arm/configs/vendor/bengal-perf_defconfig b/arch/arm/configs/vendor/bengal-perf_defconfig
index 2dacf3f9..dc26d22 100644
--- a/arch/arm/configs/vendor/bengal-perf_defconfig
+++ b/arch/arm/configs/vendor/bengal-perf_defconfig
@@ -52,7 +52,6 @@
CONFIG_ARM_PSCI=y
CONFIG_HIGHMEM=y
CONFIG_SECCOMP=y
-CONFIG_BUILD_ARM_APPENDED_DTB_IMAGE=y
CONFIG_CPU_FREQ_TIMES=y
CONFIG_CPU_FREQ_GOV_POWERSAVE=y
CONFIG_CPU_FREQ_GOV_USERSPACE=y
@@ -83,6 +82,7 @@
CONFIG_MODULE_SIG_FORCE=y
CONFIG_MODULE_SIG_SHA512=y
# CONFIG_BLK_DEV_BSG is not set
+CONFIG_BLK_INLINE_ENCRYPTION=y
CONFIG_PARTITION_ADVANCED=y
# CONFIG_IOSCHED_DEADLINE is not set
CONFIG_CFQ_GROUP_IOSCHED=y
@@ -269,14 +269,15 @@
CONFIG_SCSI_UFSHCD=y
CONFIG_SCSI_UFSHCD_PLATFORM=y
CONFIG_SCSI_UFS_QCOM=y
-CONFIG_SCSI_UFS_QCOM_ICE=y
+CONFIG_SCSI_UFS_CRYPTO=y
+CONFIG_SCSI_UFS_CRYPTO_QTI=y
CONFIG_MD=y
CONFIG_BLK_DEV_DM=y
CONFIG_DM_CRYPT=y
-CONFIG_DM_DEFAULT_KEY=y
CONFIG_DM_UEVENT=y
CONFIG_DM_VERITY=y
CONFIG_DM_VERITY_FEC=y
+CONFIG_DM_ANDROID_VERITY_AT_MOST_ONCE_DEFAULT_ENABLED=y
CONFIG_NETDEVICES=y
CONFIG_BONDING=y
CONFIG_DUMMY=y
@@ -437,8 +438,9 @@
CONFIG_MMC_BLOCK_DEFERRED_RESUME=y
CONFIG_MMC_SDHCI=y
CONFIG_MMC_SDHCI_PLTFM=y
-CONFIG_MMC_SDHCI_MSM_ICE=y
CONFIG_MMC_SDHCI_MSM=y
+CONFIG_MMC_CQHCI_CRYPTO=y
+CONFIG_MMC_CQHCI_CRYPTO_QTI=y
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=y
CONFIG_LEDS_CLASS_FLASH=y
@@ -473,6 +475,9 @@
CONFIG_SM_GPUCC_BENGAL=y
CONFIG_SM_DISPCC_BENGAL=y
CONFIG_SM_DEBUGCC_BENGAL=y
+CONFIG_QM_DISPCC_SCUBA=y
+CONFIG_QM_GPUCC_SCUBA=y
+CONFIG_QM_DEBUGCC_SCUBA=y
CONFIG_HWSPINLOCK=y
CONFIG_HWSPINLOCK_QCOM=y
CONFIG_MAILBOX=y
@@ -529,6 +534,8 @@
CONFIG_QCOM_CDSP_RM=y
CONFIG_QCOM_QHEE_ENABLE_MEM_PROTECTION=y
CONFIG_QCOM_CX_IPEAK=y
+CONFIG_QTI_CRYPTO_COMMON=y
+CONFIG_QTI_CRYPTO_TZ=y
CONFIG_ICNSS=y
CONFIG_ICNSS_QMI=y
CONFIG_DEVFREQ_GOV_PASSIVE=y
@@ -558,6 +565,7 @@
CONFIG_F2FS_FS=y
CONFIG_F2FS_FS_SECURITY=y
CONFIG_FS_ENCRYPTION=y
+CONFIG_FS_ENCRYPTION_INLINE_CRYPT=y
CONFIG_QUOTA=y
CONFIG_QUOTA_NETLINK_INTERFACE=y
CONFIG_QFMT_V2=y
@@ -571,7 +579,6 @@
# CONFIG_NETWORK_FILESYSTEMS is not set
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ISO8859_1=y
-CONFIG_PFK=y
CONFIG_SECURITY_PERF_EVENTS_RESTRICT=y
CONFIG_SECURITY=y
CONFIG_LSM_MMAP_MIN_ADDR=4096
@@ -589,10 +596,10 @@
CONFIG_CRYPTO_DEV_QCOM_MSM_QCE=y
CONFIG_CRYPTO_DEV_QCRYPTO=y
CONFIG_CRYPTO_DEV_QCEDEV=y
-CONFIG_CRYPTO_DEV_QCOM_ICE=y
CONFIG_PRINTK_TIME=y
CONFIG_DEBUG_INFO=y
CONFIG_FRAME_WARN=2048
+CONFIG_DEBUG_FS=y
# CONFIG_SECTION_MISMATCH_WARN_ONLY is not set
CONFIG_MAGIC_SYSRQ=y
CONFIG_PANIC_TIMEOUT=-1
diff --git a/arch/arm/configs/vendor/bengal_defconfig b/arch/arm/configs/vendor/bengal_defconfig
index 46918bd6..61d4f2d 100644
--- a/arch/arm/configs/vendor/bengal_defconfig
+++ b/arch/arm/configs/vendor/bengal_defconfig
@@ -55,7 +55,6 @@
CONFIG_ARM_PSCI=y
CONFIG_HIGHMEM=y
CONFIG_SECCOMP=y
-CONFIG_BUILD_ARM_APPENDED_DTB_IMAGE=y
CONFIG_EFI=y
CONFIG_CPU_FREQ_TIMES=y
CONFIG_CPU_FREQ_GOV_POWERSAVE=y
@@ -88,6 +87,7 @@
CONFIG_MODULE_SIG_FORCE=y
CONFIG_MODULE_SIG_SHA512=y
# CONFIG_BLK_DEV_BSG is not set
+CONFIG_BLK_INLINE_ENCRYPTION=y
CONFIG_PARTITION_ADVANCED=y
# CONFIG_IOSCHED_DEADLINE is not set
CONFIG_CFQ_GROUP_IOSCHED=y
@@ -284,14 +284,15 @@
CONFIG_SCSI_UFSHCD=y
CONFIG_SCSI_UFSHCD_PLATFORM=y
CONFIG_SCSI_UFS_QCOM=y
-CONFIG_SCSI_UFS_QCOM_ICE=y
+CONFIG_SCSI_UFS_CRYPTO=y
+CONFIG_SCSI_UFS_CRYPTO_QTI=y
CONFIG_MD=y
CONFIG_BLK_DEV_DM=y
CONFIG_DM_CRYPT=y
-CONFIG_DM_DEFAULT_KEY=y
CONFIG_DM_UEVENT=y
CONFIG_DM_VERITY=y
CONFIG_DM_VERITY_FEC=y
+CONFIG_DM_ANDROID_VERITY_AT_MOST_ONCE_DEFAULT_ENABLED=y
CONFIG_NETDEVICES=y
CONFIG_BONDING=y
CONFIG_DUMMY=y
@@ -472,8 +473,9 @@
CONFIG_MMC_IPC_LOGGING=y
CONFIG_MMC_SDHCI=y
CONFIG_MMC_SDHCI_PLTFM=y
-CONFIG_MMC_SDHCI_MSM_ICE=y
CONFIG_MMC_SDHCI_MSM=y
+CONFIG_MMC_CQHCI_CRYPTO=y
+CONFIG_MMC_CQHCI_CRYPTO_QTI=y
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=y
CONFIG_LEDS_CLASS_FLASH=y
@@ -510,6 +512,9 @@
CONFIG_SM_GPUCC_BENGAL=y
CONFIG_SM_DISPCC_BENGAL=y
CONFIG_SM_DEBUGCC_BENGAL=y
+CONFIG_QM_DISPCC_SCUBA=y
+CONFIG_QM_GPUCC_SCUBA=y
+CONFIG_QM_DEBUGCC_SCUBA=y
CONFIG_HWSPINLOCK=y
CONFIG_HWSPINLOCK_QCOM=y
CONFIG_MAILBOX=y
@@ -574,6 +579,8 @@
CONFIG_QCOM_CDSP_RM=y
CONFIG_QCOM_QHEE_ENABLE_MEM_PROTECTION=y
CONFIG_QCOM_CX_IPEAK=y
+CONFIG_QTI_CRYPTO_COMMON=y
+CONFIG_QTI_CRYPTO_TZ=y
CONFIG_ICNSS=y
CONFIG_ICNSS_DEBUG=y
CONFIG_ICNSS_QMI=y
@@ -605,6 +612,7 @@
CONFIG_F2FS_FS_SECURITY=y
CONFIG_F2FS_CHECK_FS=y
CONFIG_FS_ENCRYPTION=y
+CONFIG_FS_ENCRYPTION_INLINE_CRYPT=y
CONFIG_QUOTA=y
CONFIG_QUOTA_NETLINK_INTERFACE=y
CONFIG_QFMT_V2=y
@@ -620,7 +628,6 @@
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ASCII=y
CONFIG_NLS_ISO8859_1=y
-CONFIG_PFK=y
CONFIG_SECURITY_PERF_EVENTS_RESTRICT=y
CONFIG_SECURITY=y
CONFIG_LSM_MMAP_MIN_ADDR=4096
@@ -638,7 +645,6 @@
CONFIG_CRYPTO_DEV_QCOM_MSM_QCE=y
CONFIG_CRYPTO_DEV_QCRYPTO=y
CONFIG_CRYPTO_DEV_QCEDEV=y
-CONFIG_CRYPTO_DEV_QCOM_ICE=y
CONFIG_XZ_DEC=y
CONFIG_PRINTK_TIME=y
CONFIG_DYNAMIC_DEBUG=y
diff --git a/arch/arm/mach-qcom/Kconfig b/arch/arm/mach-qcom/Kconfig
index c108485..01c7488 100644
--- a/arch/arm/mach-qcom/Kconfig
+++ b/arch/arm/mach-qcom/Kconfig
@@ -42,6 +42,44 @@
select CLKSRC_OF
select COMMON_CLK
+config ARCH_SDM660
+ bool "Enable Support for Qualcomm Technologies, Inc. SDM660"
+ select CLKDEV_LOOKUP
+ select HAVE_CLK
+ select HAVE_CLK_PREPARE
+ select PM_OPP
+ select SOC_BUS
+ select MSM_IRQ
+ select THERMAL_WRITABLE_TRIPS
+ select ARM_GIC_V3
+ select ARM_AMBA
+ select SPARSE_IRQ
+ select MULTI_IRQ_HANDLER
+ select HAVE_ARM_ARCH_TIMER
+ select MAY_HAVE_SPARSE_IRQ
+ select COMMON_CLK
+ select COMMON_CLK_QCOM
+ select QCOM_GDSC
+ select PINCTRL_MSM_TLMM
+ select USE_PINCTRL_IRQ
+ select MSM_PM if PM
+ select QMI_ENCDEC
+ select CPU_FREQ
+ select CPU_FREQ_MSM
+ select PM_DEVFREQ
+ select MSM_DEVFREQ_DEVBW
+ select DEVFREQ_SIMPLE_DEV
+ select DEVFREQ_GOV_MSM_BW_HWMON
+ select MSM_BIMC_BWMON
+ select MSM_QDSP6V2_CODECS
+ select MSM_AUDIO_QDSP6V2 if SND_SOC
+ select MSM_RPM_SMD
+ select GENERIC_IRQ_MIGRATION
+ select MSM_JTAGV8 if CORESIGHT_ETMV4
+ help
+ This enables support for the SDM660 chipset. If you do not
+ wish to build a kernel that runs on this chipset, say 'N' here.
+
config ARCH_BENGAL
bool "Enable Support for Qualcomm Technologies, Inc. BENGAL"
select COMMON_CLK_QCOM
diff --git a/arch/arm/mach-qcom/Makefile b/arch/arm/mach-qcom/Makefile
index f6658a2..621362a 100644
--- a/arch/arm/mach-qcom/Makefile
+++ b/arch/arm/mach-qcom/Makefile
@@ -2,3 +2,4 @@
obj-$(CONFIG_SMP) += platsmp.o
obj-$(CONFIG_ARCH_BENGAL) += board-bengal.o
obj-$(CONFIG_ARCH_SCUBA) += board-scuba.o
+obj-$(CONFIG_ARCH_SDM660) += board-660.o
diff --git a/arch/arm/mach-qcom/board-660.c b/arch/arm/mach-qcom/board-660.c
new file mode 100644
index 0000000..f616baa
--- /dev/null
+++ b/arch/arm/mach-qcom/board-660.c
@@ -0,0 +1,68 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2016, 2019-2020, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/kernel.h>
+#include <asm/mach/arch.h>
+#include "board-dt.h"
+
+static const char *sdm660_dt_match[] __initconst = {
+ "qcom,sdm660",
+ "qcom,sda660",
+ NULL
+};
+
+static void __init sdm660_init(void)
+{
+ board_dt_populate(NULL);
+}
+
+DT_MACHINE_START(SDM660_DT,
+ "Qualcomm Technologies, Inc. SDM 660 (Flattened Device Tree)")
+ .init_machine = sdm660_init,
+ .dt_compat = sdm660_dt_match,
+MACHINE_END
+
+static const char *sdm630_dt_match[] __initconst = {
+ "qcom,sdm630",
+ "qcom,sda630",
+ NULL
+};
+
+static void __init sdm630_init(void)
+{
+ board_dt_populate(NULL);
+}
+
+DT_MACHINE_START(SDM630_DT,
+ "Qualcomm Technologies, Inc. SDM 630 (Flattened Device Tree)")
+ .init_machine = sdm630_init,
+ .dt_compat = sdm630_dt_match,
+MACHINE_END
+
+static const char *sdm658_dt_match[] __initconst = {
+ "qcom,sdm658",
+ "qcom,sda658",
+ NULL
+};
+
+static void __init sdm658_init(void)
+{
+ board_dt_populate(NULL);
+}
+
+DT_MACHINE_START(SDM658_DT,
+ "Qualcomm Technologies, Inc. SDM 658 (Flattened Device Tree)")
+ .init_machine = sdm658_init,
+ .dt_compat = sdm658_dt_match,
+MACHINE_END
diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
index 339eb17..587e2eb 100644
--- a/arch/arm/mm/proc-v7.S
+++ b/arch/arm/mm/proc-v7.S
@@ -18,6 +18,7 @@
#include <asm/pgtable-hwdef.h>
#include <asm/pgtable.h>
#include <asm/memory.h>
+#include <asm/cache.h>
#include "proc-macros.S"
@@ -548,10 +549,10 @@
ENDPROC(__v7_setup)
.bss
- .align 2
+ .align L1_CACHE_SHIFT
__v7_setup_stack:
.space 4 * 7 @ 7 registers
-
+ .align L1_CACHE_SHIFT
__INITDATA
.weak cpu_v7_bugs_init
diff --git a/arch/arm64/Kconfig.platforms b/arch/arm64/Kconfig.platforms
index 7ae180d..7793552 100644
--- a/arch/arm64/Kconfig.platforms
+++ b/arch/arm64/Kconfig.platforms
@@ -190,6 +190,16 @@
This enables support for the SCUBA chipset. If you do not
wish to build a kernel that runs on this chipset, say 'N' here.
+config ARCH_SDM660
+ bool "Enable Support for Qualcomm Technologies, Inc. SDM660"
+ depends on ARCH_QCOM
+ select COMMON_CLK
+ select COMMON_CLK_QCOM
+ select QCOM_GDSC
+ help
+ This enables support for the SDM660 chipset. If you do not
+ wish to build a kernel that runs on this chipset, say 'N' here.
+
config ARCH_REALTEK
bool "Realtek Platforms"
help
diff --git a/arch/arm64/configs/gki_defconfig b/arch/arm64/configs/gki_defconfig
index 62b98ef..b4b41a3 100644
--- a/arch/arm64/configs/gki_defconfig
+++ b/arch/arm64/configs/gki_defconfig
@@ -81,6 +81,7 @@
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODVERSIONS=y
+CONFIG_BLK_INLINE_ENCRYPTION=y
CONFIG_GKI_HACKS_TO_FIX=y
# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
CONFIG_BINFMT_MISC=m
@@ -222,6 +223,7 @@
CONFIG_BLK_DEV_SD=y
CONFIG_SCSI_UFSHCD=y
CONFIG_SCSI_UFSHCD_PLATFORM=y
+CONFIG_SCSI_UFS_CRYPTO=y
CONFIG_MD=y
CONFIG_BLK_DEV_DM=y
CONFIG_DM_CRYPT=y
@@ -392,6 +394,7 @@
CONFIG_F2FS_FS=y
CONFIG_F2FS_FS_SECURITY=y
CONFIG_FS_ENCRYPTION=y
+CONFIG_FS_ENCRYPTION_INLINE_CRYPT=y
CONFIG_FS_VERITY=y
CONFIG_FS_VERITY_BUILTIN_SIGNATURES=y
# CONFIG_DNOTIFY is not set
diff --git a/arch/arm64/configs/vendor/bengal-perf_defconfig b/arch/arm64/configs/vendor/bengal-perf_defconfig
index 2358115..97194b4 100644
--- a/arch/arm64/configs/vendor/bengal-perf_defconfig
+++ b/arch/arm64/configs/vendor/bengal-perf_defconfig
@@ -18,6 +18,8 @@
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_CPU_MAX_BUF_SHIFT=17
+CONFIG_MEMCG=y
+CONFIG_MEMCG_SWAP=y
CONFIG_BLK_CGROUP=y
CONFIG_RT_GROUP_SCHED=y
CONFIG_CGROUP_FREEZER=y
@@ -95,6 +97,7 @@
CONFIG_MODULE_SIG_FORCE=y
CONFIG_MODULE_SIG_SHA512=y
# CONFIG_BLK_DEV_BSG is not set
+CONFIG_BLK_INLINE_ENCRYPTION=y
CONFIG_PARTITION_ADVANCED=y
# CONFIG_IOSCHED_DEADLINE is not set
CONFIG_CFQ_GROUP_IOSCHED=y
@@ -286,7 +289,8 @@
CONFIG_SCSI_UFSHCD=y
CONFIG_SCSI_UFSHCD_PLATFORM=y
CONFIG_SCSI_UFS_QCOM=y
-CONFIG_SCSI_UFS_QCOM_ICE=y
+CONFIG_SCSI_UFS_CRYPTO=y
+CONFIG_SCSI_UFS_CRYPTO_QTI=y
CONFIG_MD=y
CONFIG_BLK_DEV_DM=y
CONFIG_DM_CRYPT=y
@@ -464,8 +468,9 @@
CONFIG_MMC_BLOCK_DEFERRED_RESUME=y
CONFIG_MMC_SDHCI=y
CONFIG_MMC_SDHCI_PLTFM=y
-CONFIG_MMC_SDHCI_MSM_ICE=y
CONFIG_MMC_SDHCI_MSM=y
+CONFIG_MMC_CQHCI_CRYPTO=y
+CONFIG_MMC_CQHCI_CRYPTO_QTI=y
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=y
CONFIG_LEDS_CLASS_FLASH=y
@@ -563,6 +568,8 @@
CONFIG_QCOM_CDSP_RM=y
CONFIG_QCOM_QHEE_ENABLE_MEM_PROTECTION=y
CONFIG_QCOM_CX_IPEAK=y
+CONFIG_QTI_CRYPTO_COMMON=y
+CONFIG_QTI_CRYPTO_TZ=y
CONFIG_ICNSS=y
CONFIG_ICNSS_QMI=y
CONFIG_DEVFREQ_GOV_PASSIVE=y
@@ -594,6 +601,7 @@
CONFIG_F2FS_FS=y
CONFIG_F2FS_FS_SECURITY=y
CONFIG_FS_ENCRYPTION=y
+CONFIG_FS_ENCRYPTION_INLINE_CRYPT=y
CONFIG_QUOTA=y
CONFIG_QUOTA_NETLINK_INTERFACE=y
CONFIG_QFMT_V2=y
@@ -608,7 +616,6 @@
# CONFIG_NETWORK_FILESYSTEMS is not set
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ISO8859_1=y
-CONFIG_PFK=y
CONFIG_SECURITY_PERF_EVENTS_RESTRICT=y
CONFIG_SECURITY=y
CONFIG_HARDENED_USERCOPY=y
@@ -625,14 +632,13 @@
CONFIG_CRYPTO_DEV_QCOM_MSM_QCE=y
CONFIG_CRYPTO_DEV_QCRYPTO=y
CONFIG_CRYPTO_DEV_QCEDEV=y
-CONFIG_CRYPTO_DEV_QCOM_ICE=y
CONFIG_STACK_HASH_ORDER_SHIFT=12
CONFIG_PRINTK_TIME=y
CONFIG_DEBUG_INFO=y
CONFIG_PAGE_OWNER=y
# CONFIG_SECTION_MISMATCH_WARN_ONLY is not set
CONFIG_MAGIC_SYSRQ=y
-CONFIG_PANIC_TIMEOUT=-1
+CONFIG_PANIC_TIMEOUT=5
CONFIG_SCHEDSTATS=y
# CONFIG_DEBUG_PREEMPT is not set
CONFIG_IPC_LOGGING=y
diff --git a/arch/arm64/configs/vendor/bengal_defconfig b/arch/arm64/configs/vendor/bengal_defconfig
index f90e430..c241006 100644
--- a/arch/arm64/configs/vendor/bengal_defconfig
+++ b/arch/arm64/configs/vendor/bengal_defconfig
@@ -17,6 +17,8 @@
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_CPU_MAX_BUF_SHIFT=17
+CONFIG_MEMCG=y
+CONFIG_MEMCG_SWAP=y
CONFIG_BLK_CGROUP=y
CONFIG_DEBUG_BLK_CGROUP=y
CONFIG_RT_GROUP_SCHED=y
@@ -81,6 +83,7 @@
CONFIG_CPU_BOOST=y
CONFIG_CPU_FREQ_GOV_SCHEDUTIL=y
CONFIG_ARM_QCOM_CPUFREQ_HW=y
+CONFIG_ARM_QCOM_CPUFREQ_HW_DEBUG=y
CONFIG_MSM_TZ_LOG=y
CONFIG_ARM64_CRYPTO=y
CONFIG_CRYPTO_SHA1_ARM64_CE=y
@@ -99,6 +102,7 @@
CONFIG_MODULE_SIG_FORCE=y
CONFIG_MODULE_SIG_SHA512=y
# CONFIG_BLK_DEV_BSG is not set
+CONFIG_BLK_INLINE_ENCRYPTION=y
CONFIG_PARTITION_ADVANCED=y
# CONFIG_IOSCHED_DEADLINE is not set
CONFIG_CFQ_GROUP_IOSCHED=y
@@ -295,8 +299,9 @@
CONFIG_SCSI_UFSHCD=y
CONFIG_SCSI_UFSHCD_PLATFORM=y
CONFIG_SCSI_UFS_QCOM=y
-CONFIG_SCSI_UFS_QCOM_ICE=y
CONFIG_SCSI_UFSHCD_CMD_LOGGING=y
+CONFIG_SCSI_UFS_CRYPTO=y
+CONFIG_SCSI_UFS_CRYPTO_QTI=y
CONFIG_MD=y
CONFIG_BLK_DEV_DM=y
CONFIG_DM_CRYPT=y
@@ -476,8 +481,9 @@
CONFIG_MMC_IPC_LOGGING=y
CONFIG_MMC_SDHCI=y
CONFIG_MMC_SDHCI_PLTFM=y
-CONFIG_MMC_SDHCI_MSM_ICE=y
CONFIG_MMC_SDHCI_MSM=y
+CONFIG_MMC_CQHCI_CRYPTO=y
+CONFIG_MMC_CQHCI_CRYPTO_QTI=y
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=y
CONFIG_LEDS_CLASS_FLASH=y
@@ -586,6 +592,8 @@
CONFIG_QCOM_CDSP_RM=y
CONFIG_QCOM_QHEE_ENABLE_MEM_PROTECTION=y
CONFIG_QCOM_CX_IPEAK=y
+CONFIG_QTI_CRYPTO_COMMON=y
+CONFIG_QTI_CRYPTO_TZ=y
CONFIG_ICNSS=y
CONFIG_ICNSS_DEBUG=y
CONFIG_ICNSS_QMI=y
@@ -619,6 +627,7 @@
CONFIG_F2FS_FS_SECURITY=y
CONFIG_F2FS_CHECK_FS=y
CONFIG_FS_ENCRYPTION=y
+CONFIG_FS_ENCRYPTION_INLINE_CRYPT=y
CONFIG_QUOTA=y
CONFIG_QUOTA_NETLINK_INTERFACE=y
CONFIG_QFMT_V2=y
@@ -633,7 +642,6 @@
# CONFIG_NETWORK_FILESYSTEMS is not set
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ISO8859_1=y
-CONFIG_PFK=y
CONFIG_SECURITY_PERF_EVENTS_RESTRICT=y
CONFIG_SECURITY=y
CONFIG_HARDENED_USERCOPY=y
@@ -650,7 +658,6 @@
CONFIG_CRYPTO_DEV_QCOM_MSM_QCE=y
CONFIG_CRYPTO_DEV_QCRYPTO=y
CONFIG_CRYPTO_DEV_QCEDEV=y
-CONFIG_CRYPTO_DEV_QCOM_ICE=y
CONFIG_PRINTK_TIME=y
CONFIG_DYNAMIC_DEBUG=y
CONFIG_DEBUG_CONSOLE_UNHASHED_POINTERS=y
@@ -675,7 +682,7 @@
CONFIG_DEBUG_MEMORY_INIT=y
CONFIG_SOFTLOCKUP_DETECTOR=y
CONFIG_WQ_WATCHDOG=y
-CONFIG_PANIC_TIMEOUT=-1
+CONFIG_PANIC_TIMEOUT=5
CONFIG_PANIC_ON_SCHED_BUG=y
CONFIG_PANIC_ON_RT_THROTTLING=y
CONFIG_SCHEDSTATS=y
diff --git a/arch/arm64/configs/vendor/kona-iot-perf_defconfig b/arch/arm64/configs/vendor/kona-iot-perf_defconfig
index d7d763d..1b4e76f 100644
--- a/arch/arm64/configs/vendor/kona-iot-perf_defconfig
+++ b/arch/arm64/configs/vendor/kona-iot-perf_defconfig
@@ -294,11 +294,9 @@
CONFIG_SCSI_UFSHCD=y
CONFIG_SCSI_UFSHCD_PLATFORM=y
CONFIG_SCSI_UFS_QCOM=y
-CONFIG_SCSI_UFS_QCOM_ICE=y
CONFIG_MD=y
CONFIG_BLK_DEV_DM=y
CONFIG_DM_CRYPT=y
-CONFIG_DM_DEFAULT_KEY=y
CONFIG_DM_SNAPSHOT=y
CONFIG_DM_UEVENT=y
CONFIG_DM_VERITY=y
@@ -424,8 +422,7 @@
CONFIG_MSM_GLOBAL_SYNX=y
CONFIG_DVB_MPQ=m
CONFIG_DVB_MPQ_DEMUX=m
-CONFIG_DVB_MPQ_TSPP1=y
-CONFIG_TSPP=m
+CONFIG_DVB_MPQ_SW=y
CONFIG_VIDEO_V4L2_VIDEOBUF2_CORE=y
CONFIG_I2C_RTC6226_QCA=y
CONFIG_DRM=y
@@ -667,7 +664,6 @@
CONFIG_SDCARD_FS=y
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ISO8859_1=y
-CONFIG_PFK=y
CONFIG_SECURITY_PERF_EVENTS_RESTRICT=y
CONFIG_SECURITY=y
CONFIG_HARDENED_USERCOPY=y
@@ -685,9 +681,9 @@
CONFIG_CRYPTO_DEV_QCOM_MSM_QCE=y
CONFIG_CRYPTO_DEV_QCRYPTO=y
CONFIG_CRYPTO_DEV_QCEDEV=y
-CONFIG_CRYPTO_DEV_QCOM_ICE=y
CONFIG_PRINTK_TIME=y
CONFIG_DEBUG_INFO=y
+CONFIG_DEBUG_FS=y
CONFIG_MAGIC_SYSRQ=y
CONFIG_PANIC_TIMEOUT=-1
CONFIG_SCHEDSTATS=y
diff --git a/arch/arm64/configs/vendor/kona-iot_defconfig b/arch/arm64/configs/vendor/kona-iot_defconfig
index b3d2663..dbce12e 100644
--- a/arch/arm64/configs/vendor/kona-iot_defconfig
+++ b/arch/arm64/configs/vendor/kona-iot_defconfig
@@ -308,12 +308,10 @@
CONFIG_SCSI_UFSHCD=y
CONFIG_SCSI_UFSHCD_PLATFORM=y
CONFIG_SCSI_UFS_QCOM=y
-CONFIG_SCSI_UFS_QCOM_ICE=y
CONFIG_SCSI_UFSHCD_CMD_LOGGING=y
CONFIG_MD=y
CONFIG_BLK_DEV_DM=y
CONFIG_DM_CRYPT=y
-CONFIG_DM_DEFAULT_KEY=y
CONFIG_DM_SNAPSHOT=y
CONFIG_DM_UEVENT=y
CONFIG_DM_VERITY=y
@@ -440,8 +438,7 @@
CONFIG_MSM_GLOBAL_SYNX=y
CONFIG_DVB_MPQ=m
CONFIG_DVB_MPQ_DEMUX=m
-CONFIG_DVB_MPQ_TSPP1=y
-CONFIG_TSPP=m
+CONFIG_DVB_MPQ_SW=y
CONFIG_VIDEO_V4L2_VIDEOBUF2_CORE=y
CONFIG_I2C_RTC6226_QCA=y
CONFIG_DRM=y
@@ -701,7 +698,6 @@
# CONFIG_NETWORK_FILESYSTEMS is not set
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ISO8859_1=y
-CONFIG_PFK=y
CONFIG_SECURITY_PERF_EVENTS_RESTRICT=y
CONFIG_SECURITY=y
CONFIG_HARDENED_USERCOPY=y
@@ -720,7 +716,6 @@
CONFIG_CRYPTO_DEV_QCOM_MSM_QCE=y
CONFIG_CRYPTO_DEV_QCRYPTO=y
CONFIG_CRYPTO_DEV_QCEDEV=y
-CONFIG_CRYPTO_DEV_QCOM_ICE=y
CONFIG_XZ_DEC=y
CONFIG_PRINTK_TIME=y
CONFIG_DYNAMIC_DEBUG=y
diff --git a/arch/arm64/configs/vendor/kona-perf_defconfig b/arch/arm64/configs/vendor/kona-perf_defconfig
index d7d763d..74b58921 100644
--- a/arch/arm64/configs/vendor/kona-perf_defconfig
+++ b/arch/arm64/configs/vendor/kona-perf_defconfig
@@ -97,6 +97,7 @@
CONFIG_MODULE_SIG=y
CONFIG_MODULE_SIG_FORCE=y
CONFIG_MODULE_SIG_SHA512=y
+CONFIG_BLK_INLINE_ENCRYPTION=y
CONFIG_PARTITION_ADVANCED=y
CONFIG_CFQ_GROUP_IOSCHED=y
# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
@@ -223,6 +224,8 @@
CONFIG_IP6_NF_TARGET_REJECT=y
CONFIG_IP6_NF_MANGLE=y
CONFIG_IP6_NF_RAW=y
+CONFIG_IP6_NF_NAT=y
+CONFIG_IP6_NF_TARGET_MASQUERADE=y
CONFIG_BRIDGE_NF_EBTABLES=y
CONFIG_BRIDGE_EBT_BROUTE=y
CONFIG_IP_SCTP=y
@@ -284,6 +287,7 @@
CONFIG_OKL4_USER_VIRQ=y
CONFIG_WIGIG_SENSING_SPI=m
CONFIG_QTI_XR_SMRTVWR_MISC=y
+CONFIG_QTI_MAXIM_FAN_CONTROLLER=y
CONFIG_SCSI=y
CONFIG_BLK_DEV_SD=y
CONFIG_CHR_DEV_SG=y
@@ -294,10 +298,10 @@
CONFIG_SCSI_UFSHCD=y
CONFIG_SCSI_UFSHCD_PLATFORM=y
CONFIG_SCSI_UFS_QCOM=y
-CONFIG_SCSI_UFS_QCOM_ICE=y
+CONFIG_SCSI_UFS_CRYPTO=y
+CONFIG_SCSI_UFS_CRYPTO_QTI=y
CONFIG_MD=y
CONFIG_BLK_DEV_DM=y
-CONFIG_DM_CRYPT=y
CONFIG_DM_DEFAULT_KEY=y
CONFIG_DM_SNAPSHOT=y
CONFIG_DM_UEVENT=y
@@ -424,8 +428,7 @@
CONFIG_MSM_GLOBAL_SYNX=y
CONFIG_DVB_MPQ=m
CONFIG_DVB_MPQ_DEMUX=m
-CONFIG_DVB_MPQ_TSPP1=y
-CONFIG_TSPP=m
+CONFIG_DVB_MPQ_SW=y
CONFIG_VIDEO_V4L2_VIDEOBUF2_CORE=y
CONFIG_I2C_RTC6226_QCA=y
CONFIG_DRM=y
@@ -502,6 +505,8 @@
CONFIG_MMC_SDHCI=y
CONFIG_MMC_SDHCI_PLTFM=y
CONFIG_MMC_SDHCI_MSM=y
+CONFIG_MMC_CQHCI_CRYPTO=y
+CONFIG_MMC_CQHCI_CRYPTO_QTI=y
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=y
CONFIG_LEDS_QTI_TRI_LED=y
@@ -581,7 +586,6 @@
CONFIG_SSR_SUBSYS_NOTIF_TIMEOUT=20000
CONFIG_PANIC_ON_SSR_NOTIF_TIMEOUT=y
CONFIG_QCOM_SECURE_BUFFER=y
-CONFIG_MSM_REMOTEQDSS=y
CONFIG_MSM_SERVICE_LOCATOR=y
CONFIG_MSM_SERVICE_NOTIFIER=y
CONFIG_MSM_SUBSYSTEM_RESTART=y
@@ -615,6 +619,8 @@
CONFIG_QMP_DEBUGFS_CLIENT=y
CONFIG_QCOM_CDSP_RM=y
CONFIG_QCOM_QHEE_ENABLE_MEM_PROTECTION=y
+CONFIG_QTI_CRYPTO_COMMON=y
+CONFIG_QTI_CRYPTO_TZ=y
CONFIG_DEVFREQ_GOV_PASSIVE=y
CONFIG_QCOM_BIMC_BWMON=y
CONFIG_ARM_MEMLAT_MON=y
@@ -638,6 +644,7 @@
CONFIG_RAS=y
CONFIG_ANDROID=y
CONFIG_ANDROID_BINDER_IPC=y
+# CONFIG_NVMEM_SYSFS is not set
CONFIG_QCOM_QFPROM=y
CONFIG_NVMEM_SPMI_SDAM=y
CONFIG_SLIMBUS_MSM_NGD=y
@@ -650,9 +657,10 @@
CONFIG_QCOM_KGSL=y
CONFIG_EXT4_FS=y
CONFIG_EXT4_FS_SECURITY=y
+CONFIG_EXT4_ENCRYPTION=y
CONFIG_F2FS_FS=y
CONFIG_F2FS_FS_SECURITY=y
-CONFIG_FS_ENCRYPTION=y
+CONFIG_FS_ENCRYPTION_INLINE_CRYPT=y
CONFIG_QUOTA=y
CONFIG_QUOTA_NETLINK_INTERFACE=y
CONFIG_QFMT_V2=y
@@ -667,7 +675,6 @@
CONFIG_SDCARD_FS=y
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ISO8859_1=y
-CONFIG_PFK=y
CONFIG_SECURITY_PERF_EVENTS_RESTRICT=y
CONFIG_SECURITY=y
CONFIG_HARDENED_USERCOPY=y
@@ -685,9 +692,9 @@
CONFIG_CRYPTO_DEV_QCOM_MSM_QCE=y
CONFIG_CRYPTO_DEV_QCRYPTO=y
CONFIG_CRYPTO_DEV_QCEDEV=y
-CONFIG_CRYPTO_DEV_QCOM_ICE=y
CONFIG_PRINTK_TIME=y
CONFIG_DEBUG_INFO=y
+CONFIG_DEBUG_FS=y
CONFIG_MAGIC_SYSRQ=y
CONFIG_PANIC_TIMEOUT=-1
CONFIG_SCHEDSTATS=y
diff --git a/arch/arm64/configs/vendor/kona_defconfig b/arch/arm64/configs/vendor/kona_defconfig
index b3d2663..46b77b9 100644
--- a/arch/arm64/configs/vendor/kona_defconfig
+++ b/arch/arm64/configs/vendor/kona_defconfig
@@ -100,6 +100,7 @@
CONFIG_MODULE_SIG_FORCE=y
CONFIG_MODULE_SIG_SHA512=y
# CONFIG_BLK_DEV_BSG is not set
+CONFIG_BLK_INLINE_ENCRYPTION=y
CONFIG_PARTITION_ADVANCED=y
# CONFIG_IOSCHED_DEADLINE is not set
CONFIG_CFQ_GROUP_IOSCHED=y
@@ -230,6 +231,8 @@
CONFIG_IP6_NF_TARGET_REJECT=y
CONFIG_IP6_NF_MANGLE=y
CONFIG_IP6_NF_RAW=y
+CONFIG_IP6_NF_NAT=y
+CONFIG_IP6_NF_TARGET_MASQUERADE=y
CONFIG_BRIDGE_NF_EBTABLES=y
CONFIG_BRIDGE_EBT_BROUTE=y
CONFIG_IP_SCTP=y
@@ -298,6 +301,7 @@
CONFIG_OKL4_USER_VIRQ=y
CONFIG_WIGIG_SENSING_SPI=m
CONFIG_QTI_XR_SMRTVWR_MISC=y
+CONFIG_QTI_MAXIM_FAN_CONTROLLER=y
CONFIG_SCSI=y
CONFIG_BLK_DEV_SD=y
CONFIG_CHR_DEV_SG=y
@@ -308,11 +312,11 @@
CONFIG_SCSI_UFSHCD=y
CONFIG_SCSI_UFSHCD_PLATFORM=y
CONFIG_SCSI_UFS_QCOM=y
-CONFIG_SCSI_UFS_QCOM_ICE=y
CONFIG_SCSI_UFSHCD_CMD_LOGGING=y
+CONFIG_SCSI_UFS_CRYPTO=y
+CONFIG_SCSI_UFS_CRYPTO_QTI=y
CONFIG_MD=y
CONFIG_BLK_DEV_DM=y
-CONFIG_DM_CRYPT=y
CONFIG_DM_DEFAULT_KEY=y
CONFIG_DM_SNAPSHOT=y
CONFIG_DM_UEVENT=y
@@ -440,8 +444,7 @@
CONFIG_MSM_GLOBAL_SYNX=y
CONFIG_DVB_MPQ=m
CONFIG_DVB_MPQ_DEMUX=m
-CONFIG_DVB_MPQ_TSPP1=y
-CONFIG_TSPP=m
+CONFIG_DVB_MPQ_SW=y
CONFIG_VIDEO_V4L2_VIDEOBUF2_CORE=y
CONFIG_I2C_RTC6226_QCA=y
CONFIG_DRM=y
@@ -520,6 +523,8 @@
CONFIG_MMC_SDHCI=y
CONFIG_MMC_SDHCI_PLTFM=y
CONFIG_MMC_SDHCI_MSM=y
+CONFIG_MMC_CQHCI_CRYPTO=y
+CONFIG_MMC_CQHCI_CRYPTO_QTI=y
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=y
CONFIG_LEDS_QTI_TRI_LED=y
@@ -644,6 +649,8 @@
CONFIG_QMP_DEBUGFS_CLIENT=y
CONFIG_QCOM_CDSP_RM=y
CONFIG_QCOM_QHEE_ENABLE_MEM_PROTECTION=y
+CONFIG_QTI_CRYPTO_COMMON=y
+CONFIG_QTI_CRYPTO_TZ=y
CONFIG_DEVFREQ_GOV_PASSIVE=y
CONFIG_QCOM_BIMC_BWMON=y
CONFIG_ARM_MEMLAT_MON=y
@@ -668,6 +675,7 @@
CONFIG_RAS=y
CONFIG_ANDROID=y
CONFIG_ANDROID_BINDER_IPC=y
+# CONFIG_NVMEM_SYSFS is not set
CONFIG_QCOM_QFPROM=y
CONFIG_NVMEM_SPMI_SDAM=y
CONFIG_SLIMBUS_MSM_NGD=y
@@ -685,6 +693,7 @@
CONFIG_F2FS_FS=y
CONFIG_F2FS_FS_SECURITY=y
CONFIG_FS_ENCRYPTION=y
+CONFIG_FS_ENCRYPTION_INLINE_CRYPT=y
CONFIG_QUOTA=y
CONFIG_QUOTA_NETLINK_INTERFACE=y
CONFIG_QFMT_V2=y
@@ -701,7 +710,6 @@
# CONFIG_NETWORK_FILESYSTEMS is not set
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ISO8859_1=y
-CONFIG_PFK=y
CONFIG_SECURITY_PERF_EVENTS_RESTRICT=y
CONFIG_SECURITY=y
CONFIG_HARDENED_USERCOPY=y
@@ -720,7 +728,6 @@
CONFIG_CRYPTO_DEV_QCOM_MSM_QCE=y
CONFIG_CRYPTO_DEV_QCRYPTO=y
CONFIG_CRYPTO_DEV_QCEDEV=y
-CONFIG_CRYPTO_DEV_QCOM_ICE=y
CONFIG_XZ_DEC=y
CONFIG_PRINTK_TIME=y
CONFIG_DYNAMIC_DEBUG=y
diff --git a/arch/arm64/configs/vendor/lito-perf_defconfig b/arch/arm64/configs/vendor/lito-perf_defconfig
index f6df776..7548051 100644
--- a/arch/arm64/configs/vendor/lito-perf_defconfig
+++ b/arch/arm64/configs/vendor/lito-perf_defconfig
@@ -96,6 +96,7 @@
CONFIG_MODULE_SIG_FORCE=y
CONFIG_MODULE_SIG_SHA512=y
# CONFIG_BLK_DEV_BSG is not set
+CONFIG_BLK_INLINE_ENCRYPTION=y
CONFIG_PARTITION_ADVANCED=y
# CONFIG_IOSCHED_DEADLINE is not set
CONFIG_CFQ_GROUP_IOSCHED=y
@@ -290,7 +291,8 @@
CONFIG_SCSI_UFSHCD=y
CONFIG_SCSI_UFSHCD_PLATFORM=y
CONFIG_SCSI_UFS_QCOM=y
-CONFIG_SCSI_UFS_QCOM_ICE=y
+CONFIG_SCSI_UFS_CRYPTO=y
+CONFIG_SCSI_UFS_CRYPTO_QTI=y
CONFIG_MD=y
CONFIG_BLK_DEV_DM=y
CONFIG_DM_CRYPT=y
@@ -488,8 +490,9 @@
CONFIG_MMC_TEST=y
CONFIG_MMC_SDHCI=y
CONFIG_MMC_SDHCI_PLTFM=y
-CONFIG_MMC_SDHCI_MSM_ICE=y
CONFIG_MMC_SDHCI_MSM=y
+CONFIG_MMC_CQHCI_CRYPTO=y
+CONFIG_MMC_CQHCI_CRYPTO_QTI=y
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=y
CONFIG_LEDS_QTI_TRI_LED=y
@@ -604,6 +607,8 @@
CONFIG_QMP_DEBUGFS_CLIENT=y
CONFIG_QCOM_CDSP_RM=y
CONFIG_QCOM_QHEE_ENABLE_MEM_PROTECTION=y
+CONFIG_QTI_CRYPTO_COMMON=y
+CONFIG_QTI_CRYPTO_TZ=y
CONFIG_ICNSS=y
CONFIG_ICNSS_QMI=y
CONFIG_DEVFREQ_GOV_PASSIVE=y
@@ -636,6 +641,7 @@
CONFIG_F2FS_FS=y
CONFIG_F2FS_FS_SECURITY=y
CONFIG_FS_ENCRYPTION=y
+CONFIG_FS_ENCRYPTION_INLINE_CRYPT=y
CONFIG_QUOTA=y
CONFIG_QUOTA_NETLINK_INTERFACE=y
CONFIG_QFMT_V2=y
@@ -651,7 +657,6 @@
# CONFIG_NETWORK_FILESYSTEMS is not set
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ISO8859_1=y
-CONFIG_PFK=y
CONFIG_SECURITY_PERF_EVENTS_RESTRICT=y
CONFIG_SECURITY=y
CONFIG_HARDENED_USERCOPY=y
@@ -667,14 +672,13 @@
CONFIG_CRYPTO_DEV_QCOM_MSM_QCE=y
CONFIG_CRYPTO_DEV_QCRYPTO=y
CONFIG_CRYPTO_DEV_QCEDEV=y
-CONFIG_CRYPTO_DEV_QCOM_ICE=y
CONFIG_STACK_HASH_ORDER_SHIFT=12
CONFIG_PRINTK_TIME=y
CONFIG_DEBUG_INFO=y
CONFIG_PAGE_OWNER=y
# CONFIG_SECTION_MISMATCH_WARN_ONLY is not set
CONFIG_MAGIC_SYSRQ=y
-CONFIG_PANIC_TIMEOUT=-1
+CONFIG_PANIC_TIMEOUT=5
CONFIG_SCHEDSTATS=y
# CONFIG_DEBUG_PREEMPT is not set
CONFIG_IPC_LOGGING=y
@@ -685,6 +689,7 @@
CONFIG_CORESIGHT_DYNAMIC_REPLICATOR=y
CONFIG_CORESIGHT_STM=y
CONFIG_CORESIGHT_CTI=y
+CONFIG_CORESIGHT_CTI_SAVE_DISABLE=y
CONFIG_CORESIGHT_TPDA=y
CONFIG_CORESIGHT_TPDM=y
CONFIG_CORESIGHT_HWEVENT=y
diff --git a/arch/arm64/configs/vendor/lito_defconfig b/arch/arm64/configs/vendor/lito_defconfig
index 8e025ea8..9c80d86 100644
--- a/arch/arm64/configs/vendor/lito_defconfig
+++ b/arch/arm64/configs/vendor/lito_defconfig
@@ -98,6 +98,7 @@
CONFIG_MODULE_SIG_FORCE=y
CONFIG_MODULE_SIG_SHA512=y
# CONFIG_BLK_DEV_BSG is not set
+CONFIG_BLK_INLINE_ENCRYPTION=y
CONFIG_PARTITION_ADVANCED=y
# CONFIG_IOSCHED_DEADLINE is not set
CONFIG_CFQ_GROUP_IOSCHED=y
@@ -296,8 +297,9 @@
CONFIG_SCSI_UFSHCD=y
CONFIG_SCSI_UFSHCD_PLATFORM=y
CONFIG_SCSI_UFS_QCOM=y
-CONFIG_SCSI_UFS_QCOM_ICE=y
CONFIG_SCSI_UFSHCD_CMD_LOGGING=y
+CONFIG_SCSI_UFS_CRYPTO=y
+CONFIG_SCSI_UFS_CRYPTO_QTI=y
CONFIG_MD=y
CONFIG_BLK_DEV_DM=y
CONFIG_DM_CRYPT=y
@@ -497,8 +499,9 @@
CONFIG_MMC_IPC_LOGGING=y
CONFIG_MMC_SDHCI=y
CONFIG_MMC_SDHCI_PLTFM=y
-CONFIG_MMC_SDHCI_MSM_ICE=y
CONFIG_MMC_SDHCI_MSM=y
+CONFIG_MMC_CQHCI_CRYPTO=y
+CONFIG_MMC_CQHCI_CRYPTO_QTI=y
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=y
CONFIG_LEDS_QTI_TRI_LED=y
@@ -623,6 +626,8 @@
CONFIG_QMP_DEBUGFS_CLIENT=y
CONFIG_QCOM_CDSP_RM=y
CONFIG_QCOM_QHEE_ENABLE_MEM_PROTECTION=y
+CONFIG_QTI_CRYPTO_COMMON=y
+CONFIG_QTI_CRYPTO_TZ=y
CONFIG_ICNSS=y
CONFIG_ICNSS_DEBUG=y
CONFIG_ICNSS_QMI=y
@@ -656,6 +661,7 @@
CONFIG_F2FS_FS=y
CONFIG_F2FS_FS_SECURITY=y
CONFIG_FS_ENCRYPTION=y
+CONFIG_FS_ENCRYPTION_INLINE_CRYPT=y
CONFIG_QUOTA=y
CONFIG_QUOTA_NETLINK_INTERFACE=y
CONFIG_QFMT_V2=y
@@ -672,7 +678,6 @@
# CONFIG_NETWORK_FILESYSTEMS is not set
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ISO8859_1=y
-CONFIG_PFK=y
CONFIG_SECURITY_PERF_EVENTS_RESTRICT=y
CONFIG_SECURITY=y
CONFIG_HARDENED_USERCOPY=y
@@ -688,7 +693,6 @@
CONFIG_CRYPTO_DEV_QCOM_MSM_QCE=y
CONFIG_CRYPTO_DEV_QCRYPTO=y
CONFIG_CRYPTO_DEV_QCEDEV=y
-CONFIG_CRYPTO_DEV_QCOM_ICE=y
CONFIG_XZ_DEC=y
CONFIG_PRINTK_TIME=y
CONFIG_DYNAMIC_DEBUG=y
@@ -714,7 +718,7 @@
CONFIG_DEBUG_MEMORY_INIT=y
CONFIG_SOFTLOCKUP_DETECTOR=y
CONFIG_WQ_WATCHDOG=y
-CONFIG_PANIC_TIMEOUT=-1
+CONFIG_PANIC_TIMEOUT=5
CONFIG_PANIC_ON_SCHED_BUG=y
CONFIG_PANIC_ON_RT_THROTTLING=y
CONFIG_SCHEDSTATS=y
diff --git a/arch/x86/configs/gki_defconfig b/arch/x86/configs/gki_defconfig
index 2307b1e..f2e9c4a 100644
--- a/arch/x86/configs/gki_defconfig
+++ b/arch/x86/configs/gki_defconfig
@@ -50,6 +50,7 @@
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODVERSIONS=y
+CONFIG_BLK_INLINE_ENCRYPTION=y
CONFIG_GKI_HACKS_TO_FIX=y
# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
CONFIG_BINFMT_MISC=m
@@ -329,6 +330,7 @@
CONFIG_F2FS_FS=y
CONFIG_F2FS_FS_SECURITY=y
CONFIG_F2FS_FS_ENCRYPTION=y
+CONFIG_FS_ENCRYPTION_INLINE_CRYPT=y
CONFIG_FS_VERITY=y
CONFIG_FS_VERITY_BUILTIN_SIGNATURES=y
# CONFIG_DNOTIFY is not set
diff --git a/block/Kconfig b/block/Kconfig
index 1f2469a..1a4929c 100644
--- a/block/Kconfig
+++ b/block/Kconfig
@@ -200,6 +200,23 @@
Enabling this option enables users to setup/unlock/lock
Locking ranges for SED devices using the Opal protocol.
+config BLK_INLINE_ENCRYPTION
+ bool "Enable inline encryption support in block layer"
+ help
+ Build the blk-crypto subsystem. Enabling this lets the
+ block layer handle encryption, so users can take
+ advantage of inline encryption hardware if present.
+
+config BLK_INLINE_ENCRYPTION_FALLBACK
+ bool "Enable crypto API fallback for blk-crypto"
+ depends on BLK_INLINE_ENCRYPTION
+ select CRYPTO
+ select CRYPTO_BLKCIPHER
+ help
+ Enabling this lets the block layer handle inline encryption
+ by falling back to the kernel crypto API when inline
+ encryption hardware is not present.
+
menu "Partition Types"
source "block/partitions/Kconfig"
diff --git a/block/Makefile b/block/Makefile
index 572b33f..a2e0533 100644
--- a/block/Makefile
+++ b/block/Makefile
@@ -37,3 +37,6 @@
obj-$(CONFIG_BLK_DEBUG_FS) += blk-mq-debugfs.o
obj-$(CONFIG_BLK_DEBUG_FS_ZONED)+= blk-mq-debugfs-zoned.o
obj-$(CONFIG_BLK_SED_OPAL) += sed-opal.o
+obj-$(CONFIG_BLK_INLINE_ENCRYPTION) += keyslot-manager.o bio-crypt-ctx.o \
+ blk-crypto.o
+obj-$(CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK) += blk-crypto-fallback.o
\ No newline at end of file
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index ef5a07b..ceb72a7 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -5152,20 +5152,28 @@ static struct bfq_queue *bfq_init_rq(struct request *rq)
return bfqq;
}
-static void bfq_idle_slice_timer_body(struct bfq_queue *bfqq)
+static void
+bfq_idle_slice_timer_body(struct bfq_data *bfqd, struct bfq_queue *bfqq)
{
- struct bfq_data *bfqd = bfqq->bfqd;
enum bfqq_expiration reason;
unsigned long flags;
spin_lock_irqsave(&bfqd->lock, flags);
- bfq_clear_bfqq_wait_request(bfqq);
+ /*
+ * Considering that bfqq may be in a race, we should first check
+ * whether bfqq is in service before doing anything to it. If the
+ * racing bfqq is not in service, it has already been expired through
+ * __bfq_bfqq_expire() and its wait_request flag has been cleared in
+ * __bfq_bfqd_reset_in_service().
+ */
if (bfqq != bfqd->in_service_queue) {
spin_unlock_irqrestore(&bfqd->lock, flags);
return;
}
+ bfq_clear_bfqq_wait_request(bfqq);
+
if (bfq_bfqq_budget_timeout(bfqq))
/*
* Also here the queue can be safely expired
@@ -5210,7 +5218,7 @@ static enum hrtimer_restart bfq_idle_slice_timer(struct hrtimer *timer)
* early.
*/
if (bfqq)
- bfq_idle_slice_timer_body(bfqq);
+ bfq_idle_slice_timer_body(bfqd, bfqq);
return HRTIMER_NORESTART;
}
diff --git a/block/bio-crypt-ctx.c b/block/bio-crypt-ctx.c
new file mode 100644
index 0000000..75008b2
--- /dev/null
+++ b/block/bio-crypt-ctx.c
@@ -0,0 +1,142 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2019 Google LLC
+ */
+
+#include <linux/bio.h>
+#include <linux/blkdev.h>
+#include <linux/keyslot-manager.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+
+#include "blk-crypto-internal.h"
+
+static int num_prealloc_crypt_ctxs = 128;
+
+module_param(num_prealloc_crypt_ctxs, int, 0444);
+MODULE_PARM_DESC(num_prealloc_crypt_ctxs,
+ "Number of bio crypto contexts to preallocate");
+
+static struct kmem_cache *bio_crypt_ctx_cache;
+static mempool_t *bio_crypt_ctx_pool;
+
+int __init bio_crypt_ctx_init(void)
+{
+ size_t i;
+
+ bio_crypt_ctx_cache = KMEM_CACHE(bio_crypt_ctx, 0);
+ if (!bio_crypt_ctx_cache)
+ return -ENOMEM;
+
+ bio_crypt_ctx_pool = mempool_create_slab_pool(num_prealloc_crypt_ctxs,
+ bio_crypt_ctx_cache);
+ if (!bio_crypt_ctx_pool)
+ return -ENOMEM;
+
+ /* This is assumed in various places. */
+ BUILD_BUG_ON(BLK_ENCRYPTION_MODE_INVALID != 0);
+
+ /* Sanity check that no algorithm exceeds the defined limits. */
+ for (i = 0; i < BLK_ENCRYPTION_MODE_MAX; i++) {
+ BUG_ON(blk_crypto_modes[i].keysize > BLK_CRYPTO_MAX_KEY_SIZE);
+ BUG_ON(blk_crypto_modes[i].ivsize > BLK_CRYPTO_MAX_IV_SIZE);
+ }
+
+ return 0;
+}
+
+struct bio_crypt_ctx *bio_crypt_alloc_ctx(gfp_t gfp_mask)
+{
+ return mempool_alloc(bio_crypt_ctx_pool, gfp_mask);
+}
+EXPORT_SYMBOL_GPL(bio_crypt_alloc_ctx);
+
+void bio_crypt_free_ctx(struct bio *bio)
+{
+ mempool_free(bio->bi_crypt_context, bio_crypt_ctx_pool);
+ bio->bi_crypt_context = NULL;
+}
+
+void bio_crypt_clone(struct bio *dst, struct bio *src, gfp_t gfp_mask)
+{
+ const struct bio_crypt_ctx *src_bc = src->bi_crypt_context;
+
+ bio_clone_skip_dm_default_key(dst, src);
+
+ /*
+ * If a bio is fallback_crypted, then it will be decrypted when
+ * bio_endio is called. As we only want the data to be decrypted once,
+ * copies of the bio must not have a crypt context.
+ */
+ if (!src_bc || bio_crypt_fallback_crypted(src_bc))
+ return;
+
+ dst->bi_crypt_context = bio_crypt_alloc_ctx(gfp_mask);
+ *dst->bi_crypt_context = *src_bc;
+
+ if (src_bc->bc_keyslot >= 0)
+ keyslot_manager_get_slot(src_bc->bc_ksm, src_bc->bc_keyslot);
+}
+EXPORT_SYMBOL_GPL(bio_crypt_clone);
+
+bool bio_crypt_should_process(struct request *rq)
+{
+ struct bio *bio = rq->bio;
+
+ if (!bio || !bio->bi_crypt_context)
+ return false;
+
+ return rq->q->ksm == bio->bi_crypt_context->bc_ksm;
+}
+EXPORT_SYMBOL_GPL(bio_crypt_should_process);
+
+/*
+ * Checks that two bio crypt contexts are compatible - i.e. that
+ * they are mergeable except for data_unit_num continuity.
+ */
+bool bio_crypt_ctx_compatible(struct bio *b_1, struct bio *b_2)
+{
+ struct bio_crypt_ctx *bc1 = b_1->bi_crypt_context;
+ struct bio_crypt_ctx *bc2 = b_2->bi_crypt_context;
+
+ if (!bc1)
+ return !bc2;
+ return bc2 && bc1->bc_key == bc2->bc_key;
+}
+
+/*
+ * Checks that two bio crypt contexts are compatible, and also
+ * that their data_unit_nums are continuous (and can hence be merged)
+ * in the order b_1 followed by b_2.
+ */
+bool bio_crypt_ctx_mergeable(struct bio *b_1, unsigned int b1_bytes,
+ struct bio *b_2)
+{
+ struct bio_crypt_ctx *bc1 = b_1->bi_crypt_context;
+ struct bio_crypt_ctx *bc2 = b_2->bi_crypt_context;
+
+ if (!bio_crypt_ctx_compatible(b_1, b_2))
+ return false;
+
+ return !bc1 || bio_crypt_dun_is_contiguous(bc1, b1_bytes, bc2->bc_dun);
+}
+
+void bio_crypt_ctx_release_keyslot(struct bio_crypt_ctx *bc)
+{
+ keyslot_manager_put_slot(bc->bc_ksm, bc->bc_keyslot);
+ bc->bc_ksm = NULL;
+ bc->bc_keyslot = -1;
+}
+
+int bio_crypt_ctx_acquire_keyslot(struct bio_crypt_ctx *bc,
+ struct keyslot_manager *ksm)
+{
+ int slot = keyslot_manager_get_slot_for_key(ksm, bc->bc_key);
+
+ if (slot < 0)
+ return slot;
+
+ bc->bc_keyslot = slot;
+ bc->bc_ksm = ksm;
+ return 0;
+}
diff --git a/block/bio.c b/block/bio.c
index ab2acc2..ee3bae8 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -29,6 +29,7 @@
#include <linux/workqueue.h>
#include <linux/cgroup.h>
#include <linux/blk-cgroup.h>
+#include <linux/blk-crypto.h>
#include <trace/events/block.h>
#include "blk.h"
@@ -245,6 +246,8 @@ struct bio_vec *bvec_alloc(gfp_t gfp_mask, int nr, unsigned long *idx,
void bio_uninit(struct bio *bio)
{
bio_disassociate_task(bio);
+
+ bio_crypt_free_ctx(bio);
}
EXPORT_SYMBOL(bio_uninit);
@@ -580,19 +583,6 @@ inline int bio_phys_segments(struct request_queue *q, struct bio *bio)
}
EXPORT_SYMBOL(bio_phys_segments);
-inline void bio_clone_crypt_key(struct bio *dst, const struct bio *src)
-{
-#ifdef CONFIG_PFK
- dst->bi_iter.bi_dun = src->bi_iter.bi_dun;
-#ifdef CONFIG_DM_DEFAULT_KEY
- dst->bi_crypt_key = src->bi_crypt_key;
- dst->bi_crypt_skip = src->bi_crypt_skip;
-#endif
- dst->bi_dio_inode = src->bi_dio_inode;
-#endif
-}
-EXPORT_SYMBOL(bio_clone_crypt_key);
-
/**
* __bio_clone_fast - clone a bio that shares the original bio's biovec
* @bio: destination bio
@@ -622,7 +612,7 @@ void __bio_clone_fast(struct bio *bio, struct bio *bio_src)
bio->bi_write_hint = bio_src->bi_write_hint;
bio->bi_iter = bio_src->bi_iter;
bio->bi_io_vec = bio_src->bi_io_vec;
- bio_clone_crypt_key(bio, bio_src);
+
bio_clone_blkcg_association(bio, bio_src);
}
EXPORT_SYMBOL(__bio_clone_fast);
@@ -645,15 +635,12 @@ struct bio *bio_clone_fast(struct bio *bio, gfp_t gfp_mask, struct bio_set *bs)
__bio_clone_fast(b, bio);
- if (bio_integrity(bio)) {
- int ret;
+ bio_crypt_clone(b, bio, gfp_mask);
- ret = bio_integrity_clone(b, bio, gfp_mask);
-
- if (ret < 0) {
- bio_put(b);
- return NULL;
- }
+ if (bio_integrity(bio) &&
+ bio_integrity_clone(b, bio, gfp_mask) < 0) {
+ bio_put(b);
+ return NULL;
}
return b;
@@ -966,6 +953,7 @@ void bio_advance(struct bio *bio, unsigned bytes)
if (bio_integrity(bio))
bio_integrity_advance(bio, bytes);
+ bio_crypt_advance(bio, bytes);
bio_advance_iter(bio, &bio->bi_iter, bytes);
}
EXPORT_SYMBOL(bio_advance);
@@ -1764,6 +1752,10 @@ void bio_endio(struct bio *bio)
again:
if (!bio_remaining_done(bio))
return;
+
+ if (!blk_crypto_endio(bio))
+ return;
+
if (!bio_integrity_endio(bio))
return;
diff --git a/block/blk-core.c b/block/blk-core.c
index b6c6fb1..f61a9f1 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -36,6 +36,7 @@
#include <linux/debugfs.h>
#include <linux/bpf.h>
#include <linux/psi.h>
+#include <linux/blk-crypto.h>
#define CREATE_TRACE_POINTS
#include <trace/events/block.h>
@@ -1610,9 +1611,6 @@ static struct request *blk_old_get_request(struct request_queue *q,
/* q->queue_lock is unlocked at this point */
rq->__data_len = 0;
rq->__sector = (sector_t) -1;
-#ifdef CONFIG_PFK
- rq->__dun = 0;
-#endif
rq->bio = rq->biotail = NULL;
return rq;
}
@@ -1845,9 +1843,6 @@ bool bio_attempt_front_merge(struct request_queue *q, struct request *req,
bio->bi_next = req->bio;
req->bio = bio;
-#ifdef CONFIG_PFK
- req->__dun = bio->bi_iter.bi_dun;
-#endif
req->__sector = bio->bi_iter.bi_sector;
req->__data_len += bio->bi_iter.bi_size;
req->ioprio = ioprio_best(req->ioprio, bio_prio(bio));
@@ -1997,9 +1992,6 @@ void blk_init_request_from_bio(struct request *req, struct bio *bio)
else
req->ioprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_NONE, 0);
req->write_hint = bio->bi_write_hint;
-#ifdef CONFIG_PFK
- req->__dun = bio->bi_iter.bi_dun;
-#endif
blk_rq_bio_prep(req->q, req, bio);
}
EXPORT_SYMBOL_GPL(blk_init_request_from_bio);
@@ -2471,7 +2463,9 @@ blk_qc_t generic_make_request(struct bio *bio)
/* Create a fresh bio_list for all subordinate requests */
bio_list_on_stack[1] = bio_list_on_stack[0];
bio_list_init(&bio_list_on_stack[0]);
- ret = q->make_request_fn(q, bio);
+
+ if (!blk_crypto_submit_bio(&bio))
+ ret = q->make_request_fn(q, bio);
/* sort new bios into those for a lower level
* and those for the same level
@@ -2520,7 +2514,7 @@ blk_qc_t direct_make_request(struct bio *bio)
{
struct request_queue *q = bio->bi_disk->queue;
bool nowait = bio->bi_opf & REQ_NOWAIT;
- blk_qc_t ret;
+ blk_qc_t ret = BLK_QC_T_NONE;
if (!generic_make_request_checks(bio))
return BLK_QC_T_NONE;
@@ -2534,7 +2528,8 @@ blk_qc_t direct_make_request(struct bio *bio)
return BLK_QC_T_NONE;
}
- ret = q->make_request_fn(q, bio);
+ if (!blk_crypto_submit_bio(&bio))
+ ret = q->make_request_fn(q, bio);
blk_queue_exit(q);
return ret;
}
@@ -3161,13 +3156,8 @@ bool blk_update_request(struct request *req, blk_status_t error,
req->__data_len -= total_bytes;
/* update sector only for requests with clear definition of sector */
- if (!blk_rq_is_passthrough(req)) {
+ if (!blk_rq_is_passthrough(req))
req->__sector += total_bytes >> 9;
-#ifdef CONFIG_PFK
- if (req->__dun)
- req->__dun += total_bytes >> 12;
-#endif
- }
/* mixed attributes always follow the first bio */
if (req->rq_flags & RQF_MIXED_MERGE) {
@@ -3531,9 +3521,6 @@ static void __blk_rq_prep_clone(struct request *dst, struct request *src)
{
dst->cpu = src->cpu;
dst->__sector = blk_rq_pos(src);
-#ifdef CONFIG_PFK
- dst->__dun = blk_rq_dun(src);
-#endif
dst->__data_len = blk_rq_bytes(src);
if (src->rq_flags & RQF_SPECIAL_PAYLOAD) {
dst->rq_flags |= RQF_SPECIAL_PAYLOAD;
@@ -4009,5 +3996,11 @@ int __init blk_dev_init(void)
blk_debugfs_root = debugfs_create_dir("block", NULL);
#endif
+ if (bio_crypt_ctx_init() < 0)
+ panic("Failed to allocate mem for bio crypt ctxs\n");
+
+ if (blk_crypto_fallback_init() < 0)
+ panic("Failed to init blk-crypto-fallback\n");
+
return 0;
}
diff --git a/block/blk-crypto-fallback.c b/block/blk-crypto-fallback.c
new file mode 100644
index 0000000..945d23d
--- /dev/null
+++ b/block/blk-crypto-fallback.c
@@ -0,0 +1,656 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2019 Google LLC
+ */
+
+/*
+ * Refer to Documentation/block/inline-encryption.rst for detailed explanation.
+ */
+
+#define pr_fmt(fmt) "blk-crypto-fallback: " fmt
+
+#include <crypto/skcipher.h>
+#include <linux/blk-cgroup.h>
+#include <linux/blk-crypto.h>
+#include <linux/crypto.h>
+#include <linux/keyslot-manager.h>
+#include <linux/mempool.h>
+#include <linux/module.h>
+#include <linux/random.h>
+
+#include "blk-crypto-internal.h"
+
+static unsigned int num_prealloc_bounce_pg = 32;
+module_param(num_prealloc_bounce_pg, uint, 0);
+MODULE_PARM_DESC(num_prealloc_bounce_pg,
+ "Number of preallocated bounce pages for the blk-crypto crypto API fallback");
+
+static unsigned int blk_crypto_num_keyslots = 100;
+module_param_named(num_keyslots, blk_crypto_num_keyslots, uint, 0);
+MODULE_PARM_DESC(num_keyslots,
+ "Number of keyslots for the blk-crypto crypto API fallback");
+
+static unsigned int num_prealloc_fallback_crypt_ctxs = 128;
+module_param(num_prealloc_fallback_crypt_ctxs, uint, 0);
+MODULE_PARM_DESC(num_prealloc_fallback_crypt_ctxs,
+ "Number of preallocated bio fallback crypto contexts for blk-crypto to use during crypto API fallback");
+
+struct bio_fallback_crypt_ctx {
+ struct bio_crypt_ctx crypt_ctx;
+ /*
+ * Copy of the bvec_iter when this bio was submitted.
+ * We only want to en/decrypt the part of the bio as described by the
+ * bvec_iter upon submission because bio might be split before being
+ * resubmitted
+ */
+ struct bvec_iter crypt_iter;
+ u64 fallback_dun[BLK_CRYPTO_DUN_ARRAY_SIZE];
+};
+
+/* The following few vars are only used during the crypto API fallback */
+static struct kmem_cache *bio_fallback_crypt_ctx_cache;
+static mempool_t *bio_fallback_crypt_ctx_pool;
+
+/*
+ * Allocating a crypto tfm during I/O can deadlock, so we have to preallocate
+ * all of a mode's tfms when that mode starts being used. Since each mode may
+ * need all the keyslots at some point, each mode needs its own tfm for each
+ * keyslot; thus, a keyslot may contain tfms for multiple modes. However, to
+ * match the behavior of real inline encryption hardware (which only supports a
+ * single encryption context per keyslot), we only allow one tfm per keyslot to
+ * be used at a time - the rest of the unused tfms have their keys cleared.
+ */
+static DEFINE_MUTEX(tfms_init_lock);
+static bool tfms_inited[BLK_ENCRYPTION_MODE_MAX];
+
+struct blk_crypto_decrypt_work {
+ struct work_struct work;
+ struct bio *bio;
+};
+
+static struct blk_crypto_keyslot {
+ struct crypto_skcipher *tfm;
+ enum blk_crypto_mode_num crypto_mode;
+ struct crypto_skcipher *tfms[BLK_ENCRYPTION_MODE_MAX];
+} *blk_crypto_keyslots;
+
+/* The following few vars are only used during the crypto API fallback */
+static struct keyslot_manager *blk_crypto_ksm;
+static struct workqueue_struct *blk_crypto_wq;
+static mempool_t *blk_crypto_bounce_page_pool;
+static struct kmem_cache *blk_crypto_decrypt_work_cache;
+
+bool bio_crypt_fallback_crypted(const struct bio_crypt_ctx *bc)
+{
+ return bc && bc->bc_ksm == blk_crypto_ksm;
+}
+
+/*
+ * This is the key we set when evicting a keyslot. This *should* be the all 0's
+ * key, but AES-XTS rejects that key, so we use some random bytes instead.
+ */
+static u8 blank_key[BLK_CRYPTO_MAX_KEY_SIZE];
+
+static void blk_crypto_evict_keyslot(unsigned int slot)
+{
+ struct blk_crypto_keyslot *slotp = &blk_crypto_keyslots[slot];
+ enum blk_crypto_mode_num crypto_mode = slotp->crypto_mode;
+ int err;
+
+ WARN_ON(slotp->crypto_mode == BLK_ENCRYPTION_MODE_INVALID);
+
+ /* Clear the key in the skcipher */
+ err = crypto_skcipher_setkey(slotp->tfms[crypto_mode], blank_key,
+ blk_crypto_modes[crypto_mode].keysize);
+ WARN_ON(err);
+ slotp->crypto_mode = BLK_ENCRYPTION_MODE_INVALID;
+}
+
+static int blk_crypto_keyslot_program(struct keyslot_manager *ksm,
+ const struct blk_crypto_key *key,
+ unsigned int slot)
+{
+ struct blk_crypto_keyslot *slotp = &blk_crypto_keyslots[slot];
+ const enum blk_crypto_mode_num crypto_mode = key->crypto_mode;
+ int err;
+
+ if (crypto_mode != slotp->crypto_mode &&
+ slotp->crypto_mode != BLK_ENCRYPTION_MODE_INVALID) {
+ blk_crypto_evict_keyslot(slot);
+ }
+
+ if (!slotp->tfms[crypto_mode])
+ return -ENOMEM;
+ slotp->crypto_mode = crypto_mode;
+ err = crypto_skcipher_setkey(slotp->tfms[crypto_mode], key->raw,
+ key->size);
+ if (err) {
+ blk_crypto_evict_keyslot(slot);
+ return err;
+ }
+ return 0;
+}
+
+static int blk_crypto_keyslot_evict(struct keyslot_manager *ksm,
+ const struct blk_crypto_key *key,
+ unsigned int slot)
+{
+ blk_crypto_evict_keyslot(slot);
+ return 0;
+}
+
+/*
+ * The crypto API fallback KSM ops - only used for a bio when it specifies a
+ * blk_crypto_mode for which we failed to get a keyslot in the device's inline
+ * encryption hardware (which probably means the device doesn't have inline
+ * encryption hardware that supports that crypto mode).
+ */
+static const struct keyslot_mgmt_ll_ops blk_crypto_ksm_ll_ops = {
+ .keyslot_program = blk_crypto_keyslot_program,
+ .keyslot_evict = blk_crypto_keyslot_evict,
+};
+
+static void blk_crypto_encrypt_endio(struct bio *enc_bio)
+{
+ struct bio *src_bio = enc_bio->bi_private;
+ int i;
+
+ for (i = 0; i < enc_bio->bi_vcnt; i++)
+ mempool_free(enc_bio->bi_io_vec[i].bv_page,
+ blk_crypto_bounce_page_pool);
+
+ src_bio->bi_status = enc_bio->bi_status;
+
+ bio_put(enc_bio);
+ bio_endio(src_bio);
+}
+
+static struct bio *blk_crypto_clone_bio(struct bio *bio_src)
+{
+ struct bvec_iter iter;
+ struct bio_vec bv;
+ struct bio *bio;
+
+ bio = bio_alloc_bioset(GFP_NOIO, bio_segments(bio_src), NULL);
+ if (!bio)
+ return NULL;
+ bio->bi_disk = bio_src->bi_disk;
+ bio->bi_opf = bio_src->bi_opf;
+ bio->bi_ioprio = bio_src->bi_ioprio;
+ bio->bi_write_hint = bio_src->bi_write_hint;
+ bio->bi_iter.bi_sector = bio_src->bi_iter.bi_sector;
+ bio->bi_iter.bi_size = bio_src->bi_iter.bi_size;
+
+ bio_for_each_segment(bv, bio_src, iter)
+ bio->bi_io_vec[bio->bi_vcnt++] = bv;
+
+ if (bio_integrity(bio_src) &&
+ bio_integrity_clone(bio, bio_src, GFP_NOIO) < 0) {
+ bio_put(bio);
+ return NULL;
+ }
+
+ bio_clone_blkcg_association(bio, bio_src);
+
+ bio_clone_skip_dm_default_key(bio, bio_src);
+
+ return bio;
+}
+
+static int blk_crypto_alloc_cipher_req(struct bio *src_bio,
+ struct skcipher_request **ciph_req_ret,
+ struct crypto_wait *wait)
+{
+ struct skcipher_request *ciph_req;
+ const struct blk_crypto_keyslot *slotp;
+
+ slotp = &blk_crypto_keyslots[src_bio->bi_crypt_context->bc_keyslot];
+ ciph_req = skcipher_request_alloc(slotp->tfms[slotp->crypto_mode],
+ GFP_NOIO);
+ if (!ciph_req) {
+ src_bio->bi_status = BLK_STS_RESOURCE;
+ return -ENOMEM;
+ }
+
+ skcipher_request_set_callback(ciph_req,
+ CRYPTO_TFM_REQ_MAY_BACKLOG |
+ CRYPTO_TFM_REQ_MAY_SLEEP,
+ crypto_req_done, wait);
+ *ciph_req_ret = ciph_req;
+ return 0;
+}
+
+static int blk_crypto_split_bio_if_needed(struct bio **bio_ptr)
+{
+ struct bio *bio = *bio_ptr;
+ unsigned int i = 0;
+ unsigned int num_sectors = 0;
+ struct bio_vec bv;
+ struct bvec_iter iter;
+
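+ /*
+ * The bounce bio allocated for encryption can hold at most BIO_MAX_PAGES
+ * single-page segments, so count how many sectors fit in that many
+ * segments and split the rest off into its own bio.
+ */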
+ bio_for_each_segment(bv, bio, iter) {
+ num_sectors += bv.bv_len >> SECTOR_SHIFT;
+ if (++i == BIO_MAX_PAGES)
+ break;
+ }
+ if (num_sectors < bio_sectors(bio)) {
+ struct bio *split_bio;
+
+ split_bio = bio_split(bio, num_sectors, GFP_NOIO, NULL);
+ if (!split_bio) {
+ bio->bi_status = BLK_STS_RESOURCE;
+ return -ENOMEM;
+ }
+ bio_chain(split_bio, bio);
+ generic_make_request(bio);
+ *bio_ptr = split_bio;
+ }
+ return 0;
+}
+
+union blk_crypto_iv {
+ __le64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE];
+ u8 bytes[BLK_CRYPTO_MAX_IV_SIZE];
+};
+
+static void blk_crypto_dun_to_iv(const u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE],
+ union blk_crypto_iv *iv)
+{
+ int i;
+
+ for (i = 0; i < BLK_CRYPTO_DUN_ARRAY_SIZE; i++)
+ iv->dun[i] = cpu_to_le64(dun[i]);
+}
+
+/*
+ * The crypto API fallback's encryption routine.
+ * Allocate a bounce bio for encryption, encrypt the input bio using the
+ * crypto API, and replace *bio_ptr with the bounce bio. May split the input
+ * bio if it's too large.
+ */
+static int blk_crypto_encrypt_bio(struct bio **bio_ptr)
+{
+ struct bio *src_bio;
+ struct skcipher_request *ciph_req = NULL;
+ DECLARE_CRYPTO_WAIT(wait);
+ u64 curr_dun[BLK_CRYPTO_DUN_ARRAY_SIZE];
+ union blk_crypto_iv iv;
+ struct scatterlist src, dst;
+ struct bio *enc_bio;
+ unsigned int i, j;
+ int data_unit_size;
+ struct bio_crypt_ctx *bc;
+ int err = 0;
+
+ /* Split the bio if it's too big for a bounce bio with single-page bvecs */
+ err = blk_crypto_split_bio_if_needed(bio_ptr);
+ if (err)
+ return err;
+
+ src_bio = *bio_ptr;
+ bc = src_bio->bi_crypt_context;
+ data_unit_size = bc->bc_key->data_unit_size;
+
+ /* Allocate bounce bio for encryption */
+ enc_bio = blk_crypto_clone_bio(src_bio);
+ if (!enc_bio) {
+ src_bio->bi_status = BLK_STS_RESOURCE;
+ return -ENOMEM;
+ }
+
+ /*
+ * Use the crypto API fallback keyslot manager to get a crypto_skcipher
+ * for the algorithm and key specified for this bio.
+ */
+ err = bio_crypt_ctx_acquire_keyslot(bc, blk_crypto_ksm);
+ if (err) {
+ src_bio->bi_status = BLK_STS_IOERR;
+ goto out_put_enc_bio;
+ }
+
+ /* and then allocate an skcipher_request for it */
+ err = blk_crypto_alloc_cipher_req(src_bio, &ciph_req, &wait);
+ if (err)
+ goto out_release_keyslot;
+
+ memcpy(curr_dun, bc->bc_dun, sizeof(curr_dun));
+ sg_init_table(&src, 1);
+ sg_init_table(&dst, 1);
+
+ skcipher_request_set_crypt(ciph_req, &src, &dst, data_unit_size,
+ iv.bytes);
+
+ /* Encrypt each page in the bounce bio */
+ for (i = 0; i < enc_bio->bi_vcnt; i++) {
+ struct bio_vec *enc_bvec = &enc_bio->bi_io_vec[i];
+ struct page *plaintext_page = enc_bvec->bv_page;
+ struct page *ciphertext_page =
+ mempool_alloc(blk_crypto_bounce_page_pool, GFP_NOIO);
+
+ enc_bvec->bv_page = ciphertext_page;
+
+ if (!ciphertext_page) {
+ src_bio->bi_status = BLK_STS_RESOURCE;
+ err = -ENOMEM;
+ goto out_free_bounce_pages;
+ }
+
+ sg_set_page(&src, plaintext_page, data_unit_size,
+ enc_bvec->bv_offset);
+ sg_set_page(&dst, ciphertext_page, data_unit_size,
+ enc_bvec->bv_offset);
+
+ /* Encrypt each data unit in this page */
+ for (j = 0; j < enc_bvec->bv_len; j += data_unit_size) {
+ blk_crypto_dun_to_iv(curr_dun, &iv);
+ err = crypto_wait_req(crypto_skcipher_encrypt(ciph_req),
+ &wait);
+ if (err) {
+ i++;
+ src_bio->bi_status = BLK_STS_RESOURCE;
+ goto out_free_bounce_pages;
+ }
+ bio_crypt_dun_increment(curr_dun, 1);
+ src.offset += data_unit_size;
+ dst.offset += data_unit_size;
+ }
+ }
+
+ enc_bio->bi_private = src_bio;
+ enc_bio->bi_end_io = blk_crypto_encrypt_endio;
+ *bio_ptr = enc_bio;
+
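+ /* Ownership of the bounce bio has passed to *bio_ptr; don't put it below. */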
+ enc_bio = NULL;
+ err = 0;
+ goto out_free_ciph_req;
+
+out_free_bounce_pages:
+ while (i > 0)
+ mempool_free(enc_bio->bi_io_vec[--i].bv_page,
+ blk_crypto_bounce_page_pool);
+out_free_ciph_req:
+ skcipher_request_free(ciph_req);
+out_release_keyslot:
+ bio_crypt_ctx_release_keyslot(bc);
+out_put_enc_bio:
+ if (enc_bio)
+ bio_put(enc_bio);
+
+ return err;
+}
+
+static void blk_crypto_free_fallback_crypt_ctx(struct bio *bio)
+{
+ mempool_free(container_of(bio->bi_crypt_context,
+ struct bio_fallback_crypt_ctx,
+ crypt_ctx),
+ bio_fallback_crypt_ctx_pool);
+ bio->bi_crypt_context = NULL;
+}
+
+/*
+ * The crypto API fallback's main decryption routine.
+ * Decrypts input bio in place.
+ */
+static void blk_crypto_decrypt_bio(struct work_struct *work)
+{
+ struct blk_crypto_decrypt_work *decrypt_work =
+ container_of(work, struct blk_crypto_decrypt_work, work);
+ struct bio *bio = decrypt_work->bio;
+ struct skcipher_request *ciph_req = NULL;
+ DECLARE_CRYPTO_WAIT(wait);
+ struct bio_vec bv;
+ struct bvec_iter iter;
+ u64 curr_dun[BLK_CRYPTO_DUN_ARRAY_SIZE];
+ union blk_crypto_iv iv;
+ struct scatterlist sg;
+ struct bio_crypt_ctx *bc = bio->bi_crypt_context;
+ struct bio_fallback_crypt_ctx *f_ctx =
+ container_of(bc, struct bio_fallback_crypt_ctx, crypt_ctx);
+ const int data_unit_size = bc->bc_key->data_unit_size;
+ unsigned int i;
+ int err;
+
+ /*
+ * Use the crypto API fallback keyslot manager to get a crypto_skcipher
+ * for the algorithm and key specified for this bio.
+ */
+ if (bio_crypt_ctx_acquire_keyslot(bc, blk_crypto_ksm)) {
+ bio->bi_status = BLK_STS_RESOURCE;
+ goto out_no_keyslot;
+ }
+
+ /* and then allocate an skcipher_request for it */
+ err = blk_crypto_alloc_cipher_req(bio, &ciph_req, &wait);
+ if (err)
+ goto out;
+
+ memcpy(curr_dun, f_ctx->fallback_dun, sizeof(curr_dun));
+ sg_init_table(&sg, 1);
+ skcipher_request_set_crypt(ciph_req, &sg, &sg, data_unit_size,
+ iv.bytes);
+
+ /* Decrypt each segment in the bio */
+ __bio_for_each_segment(bv, bio, iter, f_ctx->crypt_iter) {
+ struct page *page = bv.bv_page;
+
+ sg_set_page(&sg, page, data_unit_size, bv.bv_offset);
+
+ /* Decrypt each data unit in the segment */
+ for (i = 0; i < bv.bv_len; i += data_unit_size) {
+ blk_crypto_dun_to_iv(curr_dun, &iv);
+ if (crypto_wait_req(crypto_skcipher_decrypt(ciph_req),
+ &wait)) {
+ bio->bi_status = BLK_STS_IOERR;
+ goto out;
+ }
+ bio_crypt_dun_increment(curr_dun, 1);
+ sg.offset += data_unit_size;
+ }
+ }
+
+out:
+ skcipher_request_free(ciph_req);
+ bio_crypt_ctx_release_keyslot(bc);
+out_no_keyslot:
+ kmem_cache_free(blk_crypto_decrypt_work_cache, decrypt_work);
+ blk_crypto_free_fallback_crypt_ctx(bio);
+ bio_endio(bio);
+}
+
+/*
+ * Queue bio for decryption.
+ * Returns true iff bio was queued for decryption.
+ */
+bool blk_crypto_queue_decrypt_bio(struct bio *bio)
+{
+ struct blk_crypto_decrypt_work *decrypt_work;
+
+ /* If there was an IO error, don't queue for decrypt. */
+ if (bio->bi_status)
+ goto out;
+
+ decrypt_work = kmem_cache_zalloc(blk_crypto_decrypt_work_cache,
+ GFP_ATOMIC);
+ if (!decrypt_work) {
+ bio->bi_status = BLK_STS_RESOURCE;
+ goto out;
+ }
+
+ INIT_WORK(&decrypt_work->work, blk_crypto_decrypt_bio);
+ decrypt_work->bio = bio;
+ queue_work(blk_crypto_wq, &decrypt_work->work);
+
+ return true;
+out:
+ blk_crypto_free_fallback_crypt_ctx(bio);
+ return false;
+}
+
+/**
+ * blk_crypto_start_using_mode() - Start using a crypto algorithm on a device
+ * @mode_num: the blk_crypto_mode we want to allocate ciphers for.
+ * @data_unit_size: the data unit size that will be used
+ * @q: the request queue for the device
+ *
+ * Upper layers must call this function to ensure that the crypto API fallback
+ * has transforms for this algorithm, if they become necessary.
+ *
+ * Return: 0 on success and -err on error.
+ */
+int blk_crypto_start_using_mode(enum blk_crypto_mode_num mode_num,
+ unsigned int data_unit_size,
+ struct request_queue *q)
+{
+ struct blk_crypto_keyslot *slotp;
+ unsigned int i;
+ int err = 0;
+
+ /*
+ * Fast path
+ * Ensure that updates to blk_crypto_keyslots[i].tfms[mode_num]
+ * for each i are visible before we try to access them.
+ */
+ if (likely(smp_load_acquire(&tfms_inited[mode_num])))
+ return 0;
+
+ /*
+ * If the request queue's keyslot manager supports this crypto mode
+ * and data unit size, the hardware will be used, so there is no need
+ * to allocate fallback transforms for this mode.
+ */
+ if (keyslot_manager_crypto_mode_supported(q->ksm, mode_num,
+ data_unit_size))
+ return 0;
+
+ mutex_lock(&tfms_init_lock);
+ if (likely(tfms_inited[mode_num]))
+ goto out;
+
+ for (i = 0; i < blk_crypto_num_keyslots; i++) {
+ slotp = &blk_crypto_keyslots[i];
+ slotp->tfms[mode_num] = crypto_alloc_skcipher(
+ blk_crypto_modes[mode_num].cipher_str,
+ 0, 0);
+ if (IS_ERR(slotp->tfms[mode_num])) {
+ err = PTR_ERR(slotp->tfms[mode_num]);
+ slotp->tfms[mode_num] = NULL;
+ goto out_free_tfms;
+ }
+
+ crypto_skcipher_set_flags(slotp->tfms[mode_num],
+ CRYPTO_TFM_REQ_WEAK_KEY);
+ }
+
+ /*
+ * Ensure that updates to blk_crypto_keyslots[i].tfms[mode_num]
+ * for each i are visible before we set tfms_inited[mode_num].
+ */
+ smp_store_release(&tfms_inited[mode_num], true);
+ goto out;
+
+out_free_tfms:
+ for (i = 0; i < blk_crypto_num_keyslots; i++) {
+ slotp = &blk_crypto_keyslots[i];
+ crypto_free_skcipher(slotp->tfms[mode_num]);
+ slotp->tfms[mode_num] = NULL;
+ }
+out:
+ mutex_unlock(&tfms_init_lock);
+ return err;
+}
+EXPORT_SYMBOL_GPL(blk_crypto_start_using_mode);
+
+int blk_crypto_fallback_evict_key(const struct blk_crypto_key *key)
+{
+ return keyslot_manager_evict_key(blk_crypto_ksm, key);
+}
+
+int blk_crypto_fallback_submit_bio(struct bio **bio_ptr)
+{
+ struct bio *bio = *bio_ptr;
+ struct bio_crypt_ctx *bc = bio->bi_crypt_context;
+ struct bio_fallback_crypt_ctx *f_ctx;
+
+ if (bc->bc_key->is_hw_wrapped) {
+ pr_warn_once("HW wrapped key cannot be used with fallback.\n");
+ bio->bi_status = BLK_STS_NOTSUPP;
+ return -EOPNOTSUPP;
+ }
+
+ if (!tfms_inited[bc->bc_key->crypto_mode]) {
+ bio->bi_status = BLK_STS_IOERR;
+ return -EIO;
+ }
+
+ if (bio_data_dir(bio) == WRITE)
+ return blk_crypto_encrypt_bio(bio_ptr);
+
+ /*
+ * Mark bio as fallback crypted and replace the bio_crypt_ctx with
+ * another one contained in a bio_fallback_crypt_ctx, so that the
+ * fallback has space to store the info it needs for decryption.
+ */
+ bc->bc_ksm = blk_crypto_ksm;
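+ /* bc_ksm == blk_crypto_ksm is what bio_crypt_fallback_crypted() tests. */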
+ f_ctx = mempool_alloc(bio_fallback_crypt_ctx_pool, GFP_NOIO);
+ f_ctx->crypt_ctx = *bc;
+ memcpy(f_ctx->fallback_dun, bc->bc_dun, sizeof(f_ctx->fallback_dun));
+ f_ctx->crypt_iter = bio->bi_iter;
+
+ bio_crypt_free_ctx(bio);
+ bio->bi_crypt_context = &f_ctx->crypt_ctx;
+
+ return 0;
+}
+
+int __init blk_crypto_fallback_init(void)
+{
+ int i;
+ unsigned int crypto_mode_supported[BLK_ENCRYPTION_MODE_MAX];
+
+ prandom_bytes(blank_key, BLK_CRYPTO_MAX_KEY_SIZE);
+
+ /* All blk-crypto modes have a crypto API fallback. */
+ for (i = 0; i < BLK_ENCRYPTION_MODE_MAX; i++)
+ crypto_mode_supported[i] = 0xFFFFFFFF;
+ crypto_mode_supported[BLK_ENCRYPTION_MODE_INVALID] = 0;
+
+ blk_crypto_ksm = keyslot_manager_create(blk_crypto_num_keyslots,
+ &blk_crypto_ksm_ll_ops,
+ crypto_mode_supported, NULL);
+ if (!blk_crypto_ksm)
+ return -ENOMEM;
+
+ blk_crypto_wq = alloc_workqueue("blk_crypto_wq",
+ WQ_UNBOUND | WQ_HIGHPRI |
+ WQ_MEM_RECLAIM, num_online_cpus());
+ if (!blk_crypto_wq)
+ return -ENOMEM;
+
+ blk_crypto_keyslots = kcalloc(blk_crypto_num_keyslots,
+ sizeof(blk_crypto_keyslots[0]),
+ GFP_KERNEL);
+ if (!blk_crypto_keyslots)
+ return -ENOMEM;
+
+ blk_crypto_bounce_page_pool =
+ mempool_create_page_pool(num_prealloc_bounce_pg, 0);
+ if (!blk_crypto_bounce_page_pool)
+ return -ENOMEM;
+
+ blk_crypto_decrypt_work_cache = KMEM_CACHE(blk_crypto_decrypt_work,
+ SLAB_RECLAIM_ACCOUNT);
+ if (!blk_crypto_decrypt_work_cache)
+ return -ENOMEM;
+
+ bio_fallback_crypt_ctx_cache = KMEM_CACHE(bio_fallback_crypt_ctx, 0);
+ if (!bio_fallback_crypt_ctx_cache)
+ return -ENOMEM;
+
+ bio_fallback_crypt_ctx_pool =
+ mempool_create_slab_pool(num_prealloc_fallback_crypt_ctxs,
+ bio_fallback_crypt_ctx_cache);
+ if (!bio_fallback_crypt_ctx_pool)
+ return -ENOMEM;
+
+ return 0;
+}
diff --git a/block/blk-crypto-internal.h b/block/blk-crypto-internal.h
new file mode 100644
index 0000000..40d826b
--- /dev/null
+++ b/block/blk-crypto-internal.h
@@ -0,0 +1,58 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2019 Google LLC
+ */
+
+#ifndef __LINUX_BLK_CRYPTO_INTERNAL_H
+#define __LINUX_BLK_CRYPTO_INTERNAL_H
+
+#include <linux/bio.h>
+
+/* Represents a crypto mode supported by blk-crypto */
+struct blk_crypto_mode {
+ const char *cipher_str; /* crypto API name (for fallback case) */
+ unsigned int keysize; /* key size in bytes */
+ unsigned int ivsize; /* iv size in bytes */
+};
+
+extern const struct blk_crypto_mode blk_crypto_modes[];
+
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK
+
+int blk_crypto_fallback_submit_bio(struct bio **bio_ptr);
+
+bool blk_crypto_queue_decrypt_bio(struct bio *bio);
+
+int blk_crypto_fallback_evict_key(const struct blk_crypto_key *key);
+
+bool bio_crypt_fallback_crypted(const struct bio_crypt_ctx *bc);
+
+#else /* CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK */
+
+static inline bool bio_crypt_fallback_crypted(const struct bio_crypt_ctx *bc)
+{
+ return false;
+}
+
+static inline int blk_crypto_fallback_submit_bio(struct bio **bio_ptr)
+{
+ pr_warn_once("crypto API fallback disabled; failing request\n");
+ (*bio_ptr)->bi_status = BLK_STS_NOTSUPP;
+ return -EIO;
+}
+
+static inline bool blk_crypto_queue_decrypt_bio(struct bio *bio)
+{
+ WARN_ON(1);
+ return false;
+}
+
+static inline int
+blk_crypto_fallback_evict_key(const struct blk_crypto_key *key)
+{
+ return 0;
+}
+
+#endif /* CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK */
+
+#endif /* __LINUX_BLK_CRYPTO_INTERNAL_H */
diff --git a/block/blk-crypto.c b/block/blk-crypto.c
new file mode 100644
index 0000000..88df1c0
--- /dev/null
+++ b/block/blk-crypto.c
@@ -0,0 +1,260 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2019 Google LLC
+ */
+
+/*
+ * Refer to Documentation/block/inline-encryption.rst for detailed explanation.
+ */
+
+#define pr_fmt(fmt) "blk-crypto: " fmt
+
+#include <linux/blk-crypto.h>
+#include <linux/blkdev.h>
+#include <linux/keyslot-manager.h>
+#include <linux/random.h>
+#include <linux/siphash.h>
+
+#include "blk-crypto-internal.h"
+
+const struct blk_crypto_mode blk_crypto_modes[] = {
+ [BLK_ENCRYPTION_MODE_AES_256_XTS] = {
+ .cipher_str = "xts(aes)",
+ .keysize = 64,
+ .ivsize = 16,
+ },
+ [BLK_ENCRYPTION_MODE_AES_128_CBC_ESSIV] = {
+ .cipher_str = "essiv(cbc(aes),sha256)",
+ .keysize = 16,
+ .ivsize = 16,
+ },
+ [BLK_ENCRYPTION_MODE_ADIANTUM] = {
+ .cipher_str = "adiantum(xchacha12,aes)",
+ .keysize = 32,
+ .ivsize = 32,
+ },
+};
+
+/* Check that all I/O segments are data unit aligned */
+static int bio_crypt_check_alignment(struct bio *bio)
+{
+ const unsigned int data_unit_size =
+ bio->bi_crypt_context->bc_key->data_unit_size;
+ struct bvec_iter iter;
+ struct bio_vec bv;
+
+ bio_for_each_segment(bv, bio, iter) {
+ if (!IS_ALIGNED(bv.bv_len | bv.bv_offset, data_unit_size))
+ return -EIO;
+ }
+ return 0;
+}
+
+/**
+ * blk_crypto_submit_bio - handle submitting bio for inline encryption
+ *
+ * @bio_ptr: pointer to original bio pointer
+ *
+ * If the bio doesn't have inline encryption enabled or the submitter already
+ * specified a keyslot for the target device, do nothing. Otherwise, a raw key
+ * must have been provided, so acquire a device keyslot for it if the hardware
+ * supports the key's crypto mode; failing that, use the crypto API fallback.
+ *
+ * When the crypto API fallback is used for encryption, blk-crypto may choose to
+ * split the bio in two: the first part continues to be processed here, while
+ * the second part is resubmitted via generic_make_request(). A bounce bio is
+ * allocated to hold the encrypted contents of the first part, and *bio_ptr is
+ * updated to point to this bounce bio.
+ *
+ * Return: 0 if bio submission should continue; nonzero if bio_endio() was
+ * already called so bio submission should abort.
+ */
+int blk_crypto_submit_bio(struct bio **bio_ptr)
+{
+ struct bio *bio = *bio_ptr;
+ struct request_queue *q;
+ struct bio_crypt_ctx *bc = bio->bi_crypt_context;
+ int err;
+
+ if (!bc || !bio_has_data(bio))
+ return 0;
+
+ /*
+ * When a read bio is marked for fallback decryption, its bi_iter is
+ * saved so that when we decrypt the bio later, we know what part of it
+ * was marked for fallback decryption. (Once the bio is passed down after
+ * blk_crypto_submit_bio(), it may be split or advanced, so we cannot rely
+ * on the bi_iter while decrypting in blk_crypto_endio().)
+ */
+ if (bio_crypt_fallback_crypted(bc))
+ return 0;
+
+ err = bio_crypt_check_alignment(bio);
+ if (err) {
+ bio->bi_status = BLK_STS_IOERR;
+ goto out;
+ }
+
+ q = bio->bi_disk->queue;
+
+ if (bc->bc_ksm) {
+ /* Key already programmed into device? */
+ if (q->ksm == bc->bc_ksm)
+ return 0;
+
+ /* Nope, release the existing keyslot. */
+ bio_crypt_ctx_release_keyslot(bc);
+ }
+
+ /* Get device keyslot if supported */
+ if (keyslot_manager_crypto_mode_supported(q->ksm,
+ bc->bc_key->crypto_mode,
+ bc->bc_key->data_unit_size)) {
+ err = bio_crypt_ctx_acquire_keyslot(bc, q->ksm);
+ if (!err)
+ return 0;
+
+ pr_warn_once("Failed to acquire keyslot for %s (err=%d). Falling back to crypto API.\n",
+ bio->bi_disk->disk_name, err);
+ }
+
+ /* Fallback to crypto API */
+ err = blk_crypto_fallback_submit_bio(bio_ptr);
+ if (err)
+ goto out;
+
+ return 0;
+out:
+ bio_endio(*bio_ptr);
+ return err;
+}
+
+/**
+ * blk_crypto_endio - clean up bio w.r.t inline encryption during bio_endio
+ *
+ * @bio: the bio to clean up
+ *
+ * If blk_crypto_submit_bio() decided to fall back to the crypto API for this
+ * bio, we queue the bio for decryption into a workqueue and return false,
+ * and call bio_endio(bio) at a later time (after the bio has been decrypted).
+ *
+ * If the bio is not to be decrypted by the crypto API, this function releases
+ * the reference to the keyslot that blk_crypto_submit_bio got.
+ *
+ * Return: true if bio_endio should continue; false otherwise (bio_endio will
+ * be called again when bio has been decrypted).
+ */
+bool blk_crypto_endio(struct bio *bio)
+{
+ struct bio_crypt_ctx *bc = bio->bi_crypt_context;
+
+ if (!bc)
+ return true;
+
+ if (bio_crypt_fallback_crypted(bc)) {
+ /*
+ * The only bios whose crypto is handled by the blk-crypto
+ * fallback when they reach here are those with
+ * bio_data_dir(bio) == READ, since WRITE bios that are
+ * encrypted by the crypto API fallback are handled by
+ * blk_crypto_encrypt_endio().
+ */
+ return !blk_crypto_queue_decrypt_bio(bio);
+ }
+
+ if (bc->bc_keyslot >= 0)
+ bio_crypt_ctx_release_keyslot(bc);
+
+ return true;
+}
+
+/**
+ * blk_crypto_init_key() - Prepare a key for use with blk-crypto
+ * @blk_key: Pointer to the blk_crypto_key to initialize.
+ * @raw_key: Pointer to the raw key.
+ * @raw_key_size: Size of raw key. Must be at least the required size for the
+ * chosen @crypto_mode; see blk_crypto_modes[]. (It's allowed
+ * to be longer than the mode's actual key size, in order to
+ * support inline encryption hardware that accepts wrapped keys.
+ * @is_hw_wrapped has to be set for such keys)
+ * @is_hw_wrapped: Denotes @raw_key is wrapped.
+ * @crypto_mode: identifier for the encryption algorithm to use
+ * @data_unit_size: the data unit size to use for en/decryption
+ *
+ * Return: 0 on success, or a negative errno value on error.
+ */
+int blk_crypto_init_key(struct blk_crypto_key *blk_key,
+ const u8 *raw_key, unsigned int raw_key_size,
+ bool is_hw_wrapped,
+ enum blk_crypto_mode_num crypto_mode,
+ unsigned int data_unit_size)
+{
+ const struct blk_crypto_mode *mode;
+ static siphash_key_t hash_key;
+
+ memset(blk_key, 0, sizeof(*blk_key));
+
+ if (crypto_mode >= ARRAY_SIZE(blk_crypto_modes))
+ return -EINVAL;
+
+ BUILD_BUG_ON(BLK_CRYPTO_MAX_WRAPPED_KEY_SIZE < BLK_CRYPTO_MAX_KEY_SIZE);
+
+ mode = &blk_crypto_modes[crypto_mode];
+ if (is_hw_wrapped) {
+ if (raw_key_size < mode->keysize ||
+ raw_key_size > BLK_CRYPTO_MAX_WRAPPED_KEY_SIZE)
+ return -EINVAL;
+ } else {
+ if (raw_key_size != mode->keysize)
+ return -EINVAL;
+ }
+
+ if (!is_power_of_2(data_unit_size))
+ return -EINVAL;
+
+ blk_key->crypto_mode = crypto_mode;
+ blk_key->data_unit_size = data_unit_size;
+ blk_key->data_unit_size_bits = ilog2(data_unit_size);
+ blk_key->size = raw_key_size;
+ blk_key->is_hw_wrapped = is_hw_wrapped;
+ memcpy(blk_key->raw, raw_key, raw_key_size);
+
+ /*
+ * The keyslot manager uses the SipHash of the key to implement O(1) key
+ * lookups while avoiding leaking information about the keys. It's
+ * precomputed here so that it only needs to be computed once per key.
+ */
+ get_random_once(&hash_key, sizeof(hash_key));
+ blk_key->hash = siphash(raw_key, raw_key_size, &hash_key);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(blk_crypto_init_key);
+
+/**
+ * blk_crypto_evict_key() - Evict a key from any inline encryption hardware
+ * it may have been programmed into
+ * @q: The request queue whose keyslot manager this key might have been
+ * programmed into
+ * @key: The key to evict
+ *
+ * Upper layers (filesystems) should call this function to ensure that a key
+ * is evicted from hardware that it might have been programmed into. This
+ * will call keyslot_manager_evict_key() on the queue's keyslot manager, if
+ * one exists and it supports the crypto algorithm with the specified data
+ * unit size.
+ * Otherwise, it will evict the key from the blk-crypto-fallback's ksm.
+ *
+ * Return: 0 on success, -err on error.
+ */
+int blk_crypto_evict_key(struct request_queue *q,
+ const struct blk_crypto_key *key)
+{
+ if (q->ksm &&
+ keyslot_manager_crypto_mode_supported(q->ksm, key->crypto_mode,
+ key->data_unit_size))
+ return keyslot_manager_evict_key(q->ksm, key);
+
+ return blk_crypto_fallback_evict_key(key);
+}
+EXPORT_SYMBOL_GPL(blk_crypto_evict_key);
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 8c8c285..ac7ff16 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -9,7 +9,7 @@
#include <linux/scatterlist.h>
#include <trace/events/block.h>
-#include <linux/pfk.h>
+
#include "blk.h"
static struct bio *blk_bio_discard_split(struct request_queue *q,
@@ -515,13 +515,13 @@ int ll_back_merge_fn(struct request_queue *q, struct request *req,
if (blk_integrity_rq(req) &&
integrity_req_gap_back_merge(req, bio))
return 0;
- if (blk_try_merge(req, bio) != ELEVATOR_BACK_MERGE)
- return 0;
if (blk_rq_sectors(req) + bio_sectors(bio) >
blk_rq_get_max_sectors(req, blk_rq_pos(req))) {
req_set_nomerge(q, req);
return 0;
}
+ if (!bio_crypt_ctx_mergeable(req->bio, blk_rq_bytes(req), bio))
+ return 0;
if (!bio_flagged(req->biotail, BIO_SEG_VALID))
blk_recount_segments(q, req->biotail);
if (!bio_flagged(bio, BIO_SEG_VALID))
@@ -539,13 +539,13 @@ int ll_front_merge_fn(struct request_queue *q, struct request *req,
if (blk_integrity_rq(req) &&
integrity_req_gap_front_merge(req, bio))
return 0;
- if (blk_try_merge(req, bio) != ELEVATOR_FRONT_MERGE)
- return 0;
if (blk_rq_sectors(req) + bio_sectors(bio) >
blk_rq_get_max_sectors(req, bio->bi_iter.bi_sector)) {
req_set_nomerge(q, req);
return 0;
}
+ if (!bio_crypt_ctx_mergeable(bio, bio->bi_iter.bi_size, req->bio))
+ return 0;
if (!bio_flagged(bio, BIO_SEG_VALID))
blk_recount_segments(q, bio);
if (!bio_flagged(req->bio, BIO_SEG_VALID))
@@ -622,6 +622,9 @@ static int ll_merge_requests_fn(struct request_queue *q, struct request *req,
if (blk_integrity_merge_rq(q, req, next) == false)
return 0;
+ if (!bio_crypt_ctx_mergeable(req->bio, blk_rq_bytes(req), next->bio))
+ return 0;
+
/* Merge is OK... */
req->nr_phys_segments = total_phys_segments;
return 1;
@@ -674,11 +677,6 @@ static void blk_account_io_merge(struct request *req)
}
}
-static bool crypto_not_mergeable(const struct bio *bio, const struct bio *nxt)
-{
- return (!pfk_allow_merge_bio(bio, nxt));
-}
-
/*
* For non-mq, this has to be called with the request spinlock acquired.
* For mq with scheduling, the appropriate queue wide lock should be held.
@@ -717,9 +715,6 @@ static struct request *attempt_merge(struct request_queue *q,
if (req->write_hint != next->write_hint)
return NULL;
- if (crypto_not_mergeable(req->bio, next->bio))
- return 0;
-
/*
* If we are allowed to merge, then append bio list
* from next to rq and release next. merge_requests_fn
@@ -850,6 +845,10 @@ bool blk_rq_merge_ok(struct request *rq, struct bio *bio)
if (rq->write_hint != bio->bi_write_hint)
return false;
+ /* Only merge if the crypt contexts are compatible */
+ if (!bio_crypt_ctx_compatible(bio, rq->bio))
+ return false;
+
return true;
}
@@ -858,16 +857,9 @@ enum elv_merge blk_try_merge(struct request *rq, struct bio *bio)
if (req_op(rq) == REQ_OP_DISCARD &&
queue_max_discard_segments(rq->q) > 1)
return ELEVATOR_DISCARD_MERGE;
- else if (blk_rq_pos(rq) + blk_rq_sectors(rq) ==
- bio->bi_iter.bi_sector) {
- if (crypto_not_mergeable(rq->bio, bio))
- return ELEVATOR_NO_MERGE;
+ else if (blk_rq_pos(rq) + blk_rq_sectors(rq) == bio->bi_iter.bi_sector)
return ELEVATOR_BACK_MERGE;
- } else if (blk_rq_pos(rq) - bio_sectors(bio) ==
- bio->bi_iter.bi_sector) {
- if (crypto_not_mergeable(bio, rq->bio))
- return ELEVATOR_NO_MERGE;
+ else if (blk_rq_pos(rq) - bio_sectors(bio) == bio->bi_iter.bi_sector)
return ELEVATOR_FRONT_MERGE;
- }
return ELEVATOR_NO_MERGE;
}
diff --git a/block/blk.h b/block/blk.h
index 34fcead..1a5b67b 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -55,6 +55,24 @@ static inline void queue_lockdep_assert_held(struct request_queue *q)
lockdep_assert_held(q->queue_lock);
}
+static inline void queue_flag_set_unlocked(unsigned int flag,
+ struct request_queue *q)
+{
+ if (test_bit(QUEUE_FLAG_INIT_DONE, &q->queue_flags) &&
+ kref_read(&q->kobj.kref))
+ lockdep_assert_held(q->queue_lock);
+ __set_bit(flag, &q->queue_flags);
+}
+
+static inline void queue_flag_clear_unlocked(unsigned int flag,
+ struct request_queue *q)
+{
+ if (test_bit(QUEUE_FLAG_INIT_DONE, &q->queue_flags) &&
+ kref_read(&q->kobj.kref))
+ lockdep_assert_held(q->queue_lock);
+ __clear_bit(flag, &q->queue_flags);
+}
+
static inline int queue_flag_test_and_clear(unsigned int flag,
struct request_queue *q)
{
diff --git a/block/bounce.c b/block/bounce.c
index c6a5536..dc37375 100644
--- a/block/bounce.c
+++ b/block/bounce.c
@@ -267,17 +267,14 @@ static struct bio *bounce_clone_bio(struct bio *bio_src, gfp_t gfp_mask,
break;
}
- if (bio_integrity(bio_src)) {
- int ret;
+ bio_crypt_clone(bio, bio_src, gfp_mask);
- ret = bio_integrity_clone(bio, bio_src, gfp_mask);
- if (ret < 0) {
- bio_put(bio);
- return NULL;
- }
+ if (bio_integrity(bio_src) &&
+ bio_integrity_clone(bio, bio_src, gfp_mask) < 0) {
+ bio_put(bio);
+ return NULL;
}
- bio_clone_crypt_key(bio, bio_src);
bio_clone_blkcg_association(bio, bio_src);
return bio;
diff --git a/block/elevator.c b/block/elevator.c
index 3d88ab3..6d940446 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -422,7 +422,7 @@ enum elv_merge elv_merge(struct request_queue *q, struct request **req,
{
struct elevator_queue *e = q->elevator;
struct request *__rq;
- enum elv_merge ret;
+
/*
* Levels of merges:
* nomerges: No merges at all attempted
@@ -435,11 +435,9 @@ enum elv_merge elv_merge(struct request_queue *q, struct request **req,
/*
* First try one-hit cache.
*/
- if (q->last_merge) {
- if (!elv_bio_merge_ok(q->last_merge, bio))
- return ELEVATOR_NO_MERGE;
+ if (q->last_merge && elv_bio_merge_ok(q->last_merge, bio)) {
+ enum elv_merge ret = blk_try_merge(q->last_merge, bio);
- ret = blk_try_merge(q->last_merge, bio);
if (ret != ELEVATOR_NO_MERGE) {
*req = q->last_merge;
return ret;
diff --git a/block/keyslot-manager.c b/block/keyslot-manager.c
new file mode 100644
index 0000000..1436426
--- /dev/null
+++ b/block/keyslot-manager.c
@@ -0,0 +1,559 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2019 Google LLC
+ */
+
+/**
+ * DOC: The Keyslot Manager
+ *
+ * Many devices with inline encryption support have a limited number of "slots"
+ * into which encryption contexts may be programmed, and requests can be tagged
+ * with a slot number to specify the key to use for en/decryption.
+ *
+ * As the number of slots is limited, and programming keys is expensive on
+ * many inline encryption devices, we don't want to program the same key into
+ * multiple slots - if multiple requests are using the same key, we want to
+ * program just one slot with that key and use that slot for all requests.
+ *
+ * The keyslot manager manages these keyslots appropriately, and also acts as
+ * an abstraction between the inline encryption hardware and the upper layers.
+ *
+ * Lower layer devices will set up a keyslot manager in their request queue
+ * and tell it how to perform device-specific operations like programming/
+ * evicting keys from keyslots.
+ *
+ * Upper layers will call keyslot_manager_get_slot_for_key() to program a
+ * key into some slot in the inline encryption hardware.
+ */
+#include <crypto/algapi.h>
+#include <linux/keyslot-manager.h>
+#include <linux/atomic.h>
+#include <linux/mutex.h>
+#include <linux/wait.h>
+#include <linux/blkdev.h>
+
+struct keyslot {
+ atomic_t slot_refs;
+ struct list_head idle_slot_node;
+ struct hlist_node hash_node;
+ struct blk_crypto_key key;
+};
+
+struct keyslot_manager {
+ unsigned int num_slots;
+ struct keyslot_mgmt_ll_ops ksm_ll_ops;
+ unsigned int crypto_mode_supported[BLK_ENCRYPTION_MODE_MAX];
+ void *ll_priv_data;
+
+ /* Protects programming and evicting keys from the device */
+ struct rw_semaphore lock;
+
+ /* List of idle slots, with least recently used slot at front */
+ wait_queue_head_t idle_slots_wait_queue;
+ struct list_head idle_slots;
+ spinlock_t idle_slots_lock;
+
+ /*
+ * Hash table which maps key hashes to keyslots, so that we can find a
+ * key's keyslot in O(1) time rather than O(num_slots). Protected by
+ * 'lock'. A cryptographic hash function is used so that timing attacks
+ * can't leak information about the raw keys.
+ */
+ struct hlist_head *slot_hashtable;
+ unsigned int slot_hashtable_size;
+
+ /* Per-keyslot data */
+ struct keyslot slots[];
+};
+
+static inline bool keyslot_manager_is_passthrough(struct keyslot_manager *ksm)
+{
+ return ksm->num_slots == 0;
+}
+
+/**
+ * keyslot_manager_create() - Create a keyslot manager
+ * @num_slots: The number of key slots to manage.
+ * @ksm_ll_ops: The struct keyslot_mgmt_ll_ops for the device that this keyslot
+ * manager will use to perform operations like programming and
+ * evicting keys.
+ * @crypto_mode_supported: Array of size BLK_ENCRYPTION_MODE_MAX of
+ * bitmasks that represent whether a crypto mode
+ * and data unit size are supported. The i'th bit
+ * of crypto_mode_supported[crypto_mode] is set iff
+ * a data unit size of (1 << i) is supported. We
+ * only support data unit sizes that are powers of
+ * 2.
+ * @ll_priv_data: Private data passed as is to the functions in ksm_ll_ops.
+ *
+ * Allocate memory for and initialize a keyslot manager. Called by e.g.
+ * storage drivers to set up a keyslot manager in their request_queue.
+ *
+ * Context: May sleep
+ * Return: Pointer to constructed keyslot manager or NULL on error.
+ */
+struct keyslot_manager *keyslot_manager_create(unsigned int num_slots,
+ const struct keyslot_mgmt_ll_ops *ksm_ll_ops,
+ const unsigned int crypto_mode_supported[BLK_ENCRYPTION_MODE_MAX],
+ void *ll_priv_data)
+{
+ struct keyslot_manager *ksm;
+ unsigned int slot;
+ unsigned int i;
+
+ if (num_slots == 0)
+ return NULL;
+
+ /* Check that all ops are specified */
+ if (ksm_ll_ops->keyslot_program == NULL ||
+ ksm_ll_ops->keyslot_evict == NULL)
+ return NULL;
+
+ ksm = kvzalloc(struct_size(ksm, slots, num_slots), GFP_KERNEL);
+ if (!ksm)
+ return NULL;
+
+ ksm->num_slots = num_slots;
+ ksm->ksm_ll_ops = *ksm_ll_ops;
+ memcpy(ksm->crypto_mode_supported, crypto_mode_supported,
+ sizeof(ksm->crypto_mode_supported));
+ ksm->ll_priv_data = ll_priv_data;
+
+ init_rwsem(&ksm->lock);
+
+ init_waitqueue_head(&ksm->idle_slots_wait_queue);
+ INIT_LIST_HEAD(&ksm->idle_slots);
+
+ for (slot = 0; slot < num_slots; slot++) {
+ list_add_tail(&ksm->slots[slot].idle_slot_node,
+ &ksm->idle_slots);
+ }
+
+ spin_lock_init(&ksm->idle_slots_lock);
+
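+ /*
+ * Size the hash table to a power of 2 so that hash_bucket_for_key()
+ * can reduce a key's hash to a bucket index with a simple mask.
+ */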
+ ksm->slot_hashtable_size = roundup_pow_of_two(num_slots);
+ ksm->slot_hashtable = kvmalloc_array(ksm->slot_hashtable_size,
+ sizeof(ksm->slot_hashtable[0]),
+ GFP_KERNEL);
+ if (!ksm->slot_hashtable)
+ goto err_free_ksm;
+ for (i = 0; i < ksm->slot_hashtable_size; i++)
+ INIT_HLIST_HEAD(&ksm->slot_hashtable[i]);
+
+ return ksm;
+
+err_free_ksm:
+ keyslot_manager_destroy(ksm);
+ return NULL;
+}
+EXPORT_SYMBOL_GPL(keyslot_manager_create);
+
+static inline struct hlist_head *
+hash_bucket_for_key(struct keyslot_manager *ksm,
+ const struct blk_crypto_key *key)
+{
+ return &ksm->slot_hashtable[key->hash & (ksm->slot_hashtable_size - 1)];
+}
+
+static void remove_slot_from_lru_list(struct keyslot_manager *ksm, int slot)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&ksm->idle_slots_lock, flags);
+ list_del(&ksm->slots[slot].idle_slot_node);
+ spin_unlock_irqrestore(&ksm->idle_slots_lock, flags);
+}
+
+static int find_keyslot(struct keyslot_manager *ksm,
+ const struct blk_crypto_key *key)
+{
+ const struct hlist_head *head = hash_bucket_for_key(ksm, key);
+ const struct keyslot *slotp;
+
+ hlist_for_each_entry(slotp, head, hash_node) {
+ if (slotp->key.hash == key->hash &&
+ slotp->key.crypto_mode == key->crypto_mode &&
+ slotp->key.size == key->size &&
+ slotp->key.data_unit_size == key->data_unit_size &&
+ !crypto_memneq(slotp->key.raw, key->raw, key->size))
+ return slotp - ksm->slots;
+ }
+ return -ENOKEY;
+}
+
+static int find_and_grab_keyslot(struct keyslot_manager *ksm,
+ const struct blk_crypto_key *key)
+{
+ int slot;
+
+ slot = find_keyslot(ksm, key);
+ if (slot < 0)
+ return slot;
+ if (atomic_inc_return(&ksm->slots[slot].slot_refs) == 1) {
+ /* Took first reference to this slot; remove it from LRU list */
+ remove_slot_from_lru_list(ksm, slot);
+ }
+ return slot;
+}
+
+/**
+ * keyslot_manager_get_slot_for_key() - Program a key into a keyslot.
+ * @ksm: The keyslot manager to program the key into.
+ * @key: Pointer to the key object to program, including the raw key, crypto
+ * mode, and data unit size.
+ *
+ * Get a keyslot that's been programmed with the specified key. If one already
+ * exists, return it with incremented refcount. Otherwise, wait for a keyslot
+ * to become idle and program it.
+ *
+ * Context: Process context. Takes and releases ksm->lock.
+ * Return: The keyslot on success, else a -errno value.
+ */
+int keyslot_manager_get_slot_for_key(struct keyslot_manager *ksm,
+ const struct blk_crypto_key *key)
+{
+ int slot;
+ int err;
+ struct keyslot *idle_slot;
+
+ if (keyslot_manager_is_passthrough(ksm))
+ return 0;
+
+ down_read(&ksm->lock);
+ slot = find_and_grab_keyslot(ksm, key);
+ up_read(&ksm->lock);
+ if (slot != -ENOKEY)
+ return slot;
+
+ for (;;) {
+ down_write(&ksm->lock);
+ slot = find_and_grab_keyslot(ksm, key);
+ if (slot != -ENOKEY) {
+ up_write(&ksm->lock);
+ return slot;
+ }
+
+ /*
+ * If we're here, that means there wasn't a slot that was
+ * already programmed with the key. So try to program it.
+ */
+ if (!list_empty(&ksm->idle_slots))
+ break;
+
+ up_write(&ksm->lock);
+ wait_event(ksm->idle_slots_wait_queue,
+ !list_empty(&ksm->idle_slots));
+ }
+
+ idle_slot = list_first_entry(&ksm->idle_slots, struct keyslot,
+ idle_slot_node);
+ slot = idle_slot - ksm->slots;
+
+ err = ksm->ksm_ll_ops.keyslot_program(ksm, key, slot);
+ if (err) {
+ wake_up(&ksm->idle_slots_wait_queue);
+ up_write(&ksm->lock);
+ return err;
+ }
+
+ /* Move this slot to the hash list for the new key. */
+ if (idle_slot->key.crypto_mode != BLK_ENCRYPTION_MODE_INVALID)
+ hlist_del(&idle_slot->hash_node);
+ hlist_add_head(&idle_slot->hash_node, hash_bucket_for_key(ksm, key));
+
+ atomic_set(&idle_slot->slot_refs, 1);
+ idle_slot->key = *key;
+
+ remove_slot_from_lru_list(ksm, slot);
+
+ up_write(&ksm->lock);
+ return slot;
+}
+
+/**
+ * keyslot_manager_get_slot() - Increment the refcount on the specified slot.
+ * @ksm: The keyslot manager that we want to modify.
+ * @slot: The slot to increment the refcount of.
+ *
+ * This function assumes that there is already an active reference to that slot
+ * and simply increments the refcount. This is useful when cloning a bio that
+ * already has a reference to a keyslot, and we want the cloned bio to also have
+ * its own reference.
+ *
+ * Context: Any context.
+ */
+void keyslot_manager_get_slot(struct keyslot_manager *ksm, unsigned int slot)
+{
+ if (keyslot_manager_is_passthrough(ksm))
+ return;
+
+ if (WARN_ON(slot >= ksm->num_slots))
+ return;
+
+ WARN_ON(atomic_inc_return(&ksm->slots[slot].slot_refs) < 2);
+}
+
+/**
+ * keyslot_manager_put_slot() - Release a reference to a slot
+ * @ksm: The keyslot manager to release the reference from.
+ * @slot: The slot to release the reference from.
+ *
+ * Context: Any context.
+ */
+void keyslot_manager_put_slot(struct keyslot_manager *ksm, unsigned int slot)
+{
+ unsigned long flags;
+
+ if (keyslot_manager_is_passthrough(ksm))
+ return;
+
+ if (WARN_ON(slot >= ksm->num_slots))
+ return;
+
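+ /*
+ * If this drops the last reference, move the slot to the tail of the
+ * idle (LRU) list and wake up anyone waiting for an idle slot.
+ */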
+ if (atomic_dec_and_lock_irqsave(&ksm->slots[slot].slot_refs,
+ &ksm->idle_slots_lock, flags)) {
+ list_add_tail(&ksm->slots[slot].idle_slot_node,
+ &ksm->idle_slots);
+ spin_unlock_irqrestore(&ksm->idle_slots_lock, flags);
+ wake_up(&ksm->idle_slots_wait_queue);
+ }
+}
+
+/**
+ * keyslot_manager_crypto_mode_supported() - Find out if a crypto_mode/data
+ * unit size combination is supported
+ * by a ksm.
+ * @ksm: The keyslot manager to check
+ * @crypto_mode: The crypto mode to check for.
+ * @data_unit_size: The data_unit_size for the mode.
+ *
+ * Checks whether the ksm supports the specified crypto_mode and data unit
+ * size combination.
+ *
+ * Context: Process context.
+ * Return: Whether or not this ksm supports the specified crypto_mode/
+ * data_unit_size combo.
+ */
+bool keyslot_manager_crypto_mode_supported(struct keyslot_manager *ksm,
+ enum blk_crypto_mode_num crypto_mode,
+ unsigned int data_unit_size)
+{
+ if (!ksm)
+ return false;
+ if (WARN_ON(crypto_mode >= BLK_ENCRYPTION_MODE_MAX))
+ return false;
+ if (WARN_ON(!is_power_of_2(data_unit_size)))
+ return false;
+ return ksm->crypto_mode_supported[crypto_mode] & data_unit_size;
+}
+
+/**
+ * keyslot_manager_evict_key() - Evict a key from the lower layer device.
+ * @ksm: The keyslot manager to evict from
+ * @key: The key to evict
+ *
+ * Find the keyslot that the specified key was programmed into, and evict that
+ * slot from the lower layer device if that slot is not currently in use.
+ *
+ * Context: Process context. Takes and releases ksm->lock.
+ * Return: 0 on success, -EBUSY if the key is still in use, or another
+ * -errno value on other error.
+ */
+int keyslot_manager_evict_key(struct keyslot_manager *ksm,
+ const struct blk_crypto_key *key)
+{
+ int slot;
+ int err;
+ struct keyslot *slotp;
+
+ if (keyslot_manager_is_passthrough(ksm)) {
+ if (ksm->ksm_ll_ops.keyslot_evict) {
+ down_write(&ksm->lock);
+ err = ksm->ksm_ll_ops.keyslot_evict(ksm, key, -1);
+ up_write(&ksm->lock);
+ return err;
+ }
+ return 0;
+ }
+
+ down_write(&ksm->lock);
+ slot = find_keyslot(ksm, key);
+ if (slot < 0) {
+ err = slot;
+ goto out_unlock;
+ }
+ slotp = &ksm->slots[slot];
+
+ if (atomic_read(&slotp->slot_refs) != 0) {
+ err = -EBUSY;
+ goto out_unlock;
+ }
+ err = ksm->ksm_ll_ops.keyslot_evict(ksm, key, slot);
+ if (err)
+ goto out_unlock;
+
+ hlist_del(&slotp->hash_node);
+ memzero_explicit(&slotp->key, sizeof(slotp->key));
+ err = 0;
+out_unlock:
+ up_write(&ksm->lock);
+ return err;
+}
+
+/**
+ * keyslot_manager_reprogram_all_keys() - Re-program all keyslots.
+ * @ksm: The keyslot manager
+ *
+ * Re-program all keyslots that are supposed to have a key programmed. This is
+ * intended only for use by drivers for hardware that loses its keys on reset.
+ *
+ * Context: Process context. Takes and releases ksm->lock.
+ */
+void keyslot_manager_reprogram_all_keys(struct keyslot_manager *ksm)
+{
+ unsigned int slot;
+
+ if (WARN_ON(keyslot_manager_is_passthrough(ksm)))
+ return;
+
+ down_write(&ksm->lock);
+ for (slot = 0; slot < ksm->num_slots; slot++) {
+ const struct keyslot *slotp = &ksm->slots[slot];
+ int err;
+
+ if (slotp->key.crypto_mode == BLK_ENCRYPTION_MODE_INVALID)
+ continue;
+
+ err = ksm->ksm_ll_ops.keyslot_program(ksm, &slotp->key, slot);
+ WARN_ON(err);
+ }
+ up_write(&ksm->lock);
+}
+EXPORT_SYMBOL_GPL(keyslot_manager_reprogram_all_keys);
+
+/**
+ * keyslot_manager_private() - return the private data stored with ksm
+ * @ksm: The keyslot manager
+ *
+ * Returns the private data passed to the ksm when it was created.
+ */
+void *keyslot_manager_private(struct keyslot_manager *ksm)
+{
+ return ksm->ll_priv_data;
+}
+EXPORT_SYMBOL_GPL(keyslot_manager_private);
+
+void keyslot_manager_destroy(struct keyslot_manager *ksm)
+{
+ if (ksm) {
+ kvfree(ksm->slot_hashtable);
+ memzero_explicit(ksm, struct_size(ksm, slots, ksm->num_slots));
+ kvfree(ksm);
+ }
+}
+EXPORT_SYMBOL_GPL(keyslot_manager_destroy);
+
+/**
+ * keyslot_manager_create_passthrough() - Create a passthrough keyslot manager
+ * @ksm_ll_ops: The struct keyslot_mgmt_ll_ops
+ * @crypto_mode_supported: Bitmasks for supported encryption modes
+ * @ll_priv_data: Private data passed as is to the functions in ksm_ll_ops.
+ *
+ * Allocate memory for and initialize a passthrough keyslot manager.
+ * Called by e.g. storage drivers to set up a keyslot manager in their
+ * request_queue, when the storage driver wants to manage its keys by itself.
+ * This is useful for inline encryption hardware that doesn't have a small
+ * number of keyslots, and for layered devices.
+ *
+ * See keyslot_manager_create() for more details about the parameters.
+ *
+ * Context: This function may sleep
+ * Return: Pointer to constructed keyslot manager or NULL on error.
+ */
+struct keyslot_manager *keyslot_manager_create_passthrough(
+ const struct keyslot_mgmt_ll_ops *ksm_ll_ops,
+ const unsigned int crypto_mode_supported[BLK_ENCRYPTION_MODE_MAX],
+ void *ll_priv_data)
+{
+ struct keyslot_manager *ksm;
+
+ ksm = kzalloc(sizeof(*ksm), GFP_KERNEL);
+ if (!ksm)
+ return NULL;
+
+ ksm->ksm_ll_ops = *ksm_ll_ops;
+ memcpy(ksm->crypto_mode_supported, crypto_mode_supported,
+ sizeof(ksm->crypto_mode_supported));
+ ksm->ll_priv_data = ll_priv_data;
+
+ init_rwsem(&ksm->lock);
+
+ return ksm;
+}
+EXPORT_SYMBOL_GPL(keyslot_manager_create_passthrough);
+
+/**
+ * keyslot_manager_intersect_modes() - restrict supported modes by child device
+ * @parent: The keyslot manager for parent device
+ * @child: The keyslot manager for child device, or NULL
+ *
+ * Clear any crypto mode support bits in @parent that aren't set in @child.
+ * If @child is NULL, then all parent bits are cleared.
+ *
+ * Only use this when setting up the keyslot manager for a layered device,
+ * before it's been exposed yet.
+ */
+void keyslot_manager_intersect_modes(struct keyslot_manager *parent,
+ const struct keyslot_manager *child)
+{
+ if (child) {
+ unsigned int i;
+
+ for (i = 0; i < ARRAY_SIZE(child->crypto_mode_supported); i++) {
+ parent->crypto_mode_supported[i] &=
+ child->crypto_mode_supported[i];
+ }
+ } else {
+ memset(parent->crypto_mode_supported, 0,
+ sizeof(parent->crypto_mode_supported));
+ }
+}
+EXPORT_SYMBOL_GPL(keyslot_manager_intersect_modes);
+
+/**
+ * keyslot_manager_derive_raw_secret() - Derive software secret from wrapped key
+ * @ksm: The keyslot manager
+ * @wrapped_key: The wrapped key
+ * @wrapped_key_size: Size of the wrapped key in bytes
+ * @secret: (output) the software secret
+ * @secret_size: the number of secret bytes to derive
+ *
+ * Given a hardware-wrapped key, ask the hardware to derive a secret which
+ * software can use for cryptographic tasks other than inline encryption. The
+ * derived secret is guaranteed to be cryptographically isolated from the key
+ * with which any inline encryption with this wrapped key would actually be
+ * done. I.e., both will be derived from the unwrapped key.
+ *
+ * Return: 0 on success, -EOPNOTSUPP if hardware-wrapped keys are unsupported,
+ * or another -errno code.
+ */
+int keyslot_manager_derive_raw_secret(struct keyslot_manager *ksm,
+ const u8 *wrapped_key,
+ unsigned int wrapped_key_size,
+ u8 *secret, unsigned int secret_size)
+{
+ int err;
+
+ down_write(&ksm->lock);
+ if (ksm->ksm_ll_ops.derive_raw_secret) {
+ err = ksm->ksm_ll_ops.derive_raw_secret(ksm, wrapped_key,
+ wrapped_key_size,
+ secret, secret_size);
+ } else {
+ err = -EOPNOTSUPP;
+ }
+ up_write(&ksm->lock);
+
+ return err;
+}
+EXPORT_SYMBOL_GPL(keyslot_manager_derive_raw_secret);
diff --git a/drivers/bluetooth/bluetooth-power.c b/drivers/bluetooth/bluetooth-power.c
index e014f61..54b09bf 100644
--- a/drivers/bluetooth/bluetooth-power.c
+++ b/drivers/bluetooth/bluetooth-power.c
@@ -28,6 +28,7 @@
#if defined CONFIG_BT_SLIM_QCA6390 || defined CONFIG_BTFM_SLIM_WCN3990
#include "btfm_slim.h"
+#include "btfm_slim_slave.h"
#endif
#include <linux/fs.h>
@@ -41,6 +42,7 @@ static const struct of_device_id bt_power_match_table[] = {
{ .compatible = "qca,qca6174" },
{ .compatible = "qca,wcn3990" },
{ .compatible = "qca,qca6390" },
+ { .compatible = "qca,wcn6750" },
{}
};
@@ -271,10 +273,14 @@ static int bt_configure_gpios(int on)
return rc;
}
msleep(50);
- BT_PWR_ERR("BTON:Turn Bt Off bt-reset-gpio(%d) value(%d)\n",
- bt_reset_gpio, gpio_get_value(bt_reset_gpio));
- BT_PWR_ERR("BTON:Turn Bt Off bt-sw-ctrl-gpio(%d) value(%d)\n",
- bt_sw_ctrl_gpio, gpio_get_value(bt_sw_ctrl_gpio));
+ BT_PWR_INFO("BTON:Turn Bt Off bt-reset-gpio(%d) value(%d)\n",
+ bt_reset_gpio, gpio_get_value(bt_reset_gpio));
+ if (bt_sw_ctrl_gpio >= 0) {
+ BT_PWR_INFO("BTON:Turn Bt Off");
+ BT_PWR_INFO("bt-sw-ctrl-gpio(%d) value(%d)",
+ bt_sw_ctrl_gpio,
+ gpio_get_value(bt_sw_ctrl_gpio));
+ }
rc = gpio_direction_output(bt_reset_gpio, 1);
if (rc) {
@@ -305,22 +311,30 @@ static int bt_configure_gpios(int on)
BT_PWR_ERR("Prob: Set Debug-Gpio\n");
}
}
- BT_PWR_ERR("BTON:Turn Bt On bt-reset-gpio(%d) value(%d)\n",
- bt_reset_gpio, gpio_get_value(bt_reset_gpio));
- BT_PWR_ERR("BTON:Turn Bt On bt-sw-ctrl-gpio(%d) value(%d)\n",
- bt_sw_ctrl_gpio, gpio_get_value(bt_sw_ctrl_gpio));
+ BT_PWR_INFO("BTON:Turn Bt On bt-reset-gpio(%d) value(%d)\n",
+ bt_reset_gpio, gpio_get_value(bt_reset_gpio));
+ if (bt_sw_ctrl_gpio >= 0) {
+ BT_PWR_INFO("BTON:Turn Bt On");
+ BT_PWR_INFO("bt-sw-ctrl-gpio(%d) value(%d)",
+ bt_sw_ctrl_gpio,
+ gpio_get_value(bt_sw_ctrl_gpio));
+ }
} else {
gpio_set_value(bt_reset_gpio, 0);
if (bt_debug_gpio >= 0)
gpio_set_value(bt_debug_gpio, 0);
msleep(100);
- BT_PWR_ERR("BT-OFF:bt-reset-gpio(%d) value(%d)\n",
- bt_reset_gpio, gpio_get_value(bt_reset_gpio));
- BT_PWR_ERR("BT-OFF:bt-sw-ctrl-gpio(%d) value(%d)\n",
- bt_sw_ctrl_gpio, gpio_get_value(bt_sw_ctrl_gpio));
+ BT_PWR_INFO("BT-OFF:bt-reset-gpio(%d) value(%d)\n",
+ bt_reset_gpio, gpio_get_value(bt_reset_gpio));
+
+ if (bt_sw_ctrl_gpio >= 0) {
+ BT_PWR_INFO("BT-OFF:bt-sw-ctrl-gpio(%d) value(%d)",
+ bt_sw_ctrl_gpio,
+ gpio_get_value(bt_sw_ctrl_gpio));
+ }
}
- BT_PWR_ERR("bt_gpio= %d on: %d is successful", bt_reset_gpio, on);
+ BT_PWR_INFO("bt_gpio= %d on: %d is successful", bt_reset_gpio, on);
return rc;
}
@@ -847,6 +861,18 @@ int get_chipset_version(void)
return soc_id;
}
+int bt_disable_asd(void)
+{
+ int rc = 0;
+ if (bt_power_pdata->bt_vdd_asd) {
+ BT_PWR_INFO("Disabling ASD regulator");
+ rc = bt_vreg_disable(bt_power_pdata->bt_vdd_asd);
+ } else {
+ BT_PWR_INFO("ASD regulator is not configured");
+ }
+ return rc;
+}
+
static long bt_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
int ret = 0, pwr_cntrl = 0;
@@ -880,9 +906,14 @@ static long bt_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
break;
case BT_CMD_CHIPSET_VERS:
chipset_version = (int)arg;
- BT_PWR_ERR("BT_CMD_CHIP_VERS soc_version:%x", chipset_version);
+ BT_PWR_ERR("unified Current SOC Version : %x", chipset_version);
if (chipset_version) {
soc_id = chipset_version;
+ if (soc_id == QCA_HSP_SOC_ID_0100 ||
+ soc_id == QCA_HSP_SOC_ID_0110 ||
+ soc_id == QCA_HSP_SOC_ID_0200) {
+ ret = bt_disable_asd();
+ }
} else {
BT_PWR_ERR("got invalid soc version");
soc_id = 0;
diff --git a/drivers/bluetooth/btfm_slim.c b/drivers/bluetooth/btfm_slim.c
index bf886e6..9ed65d8 100644
--- a/drivers/bluetooth/btfm_slim.c
+++ b/drivers/bluetooth/btfm_slim.c
@@ -371,6 +371,9 @@ static int btfm_slim_alloc_port(struct btfmslim *btfmslim)
int btfm_slim_hw_init(struct btfmslim *btfmslim)
{
int ret;
+ int chipset_ver;
+ struct slim_device *slim = btfmslim->slim_pgd;
+ struct slim_device *slim_ifd = &btfmslim->slim_ifd;
BTFMSLIM_DBG("");
if (!btfmslim)
@@ -381,6 +384,61 @@ int btfm_slim_hw_init(struct btfmslim *btfmslim)
return 0;
}
mutex_lock(&btfmslim->io_lock);
+ BTFMSLIM_INFO(
+ "PGD Enum Addr: %.02x:%.02x:%.02x:%.02x:%.02x: %.02x",
+ slim->e_addr[0], slim->e_addr[1], slim->e_addr[2],
+ slim->e_addr[3], slim->e_addr[4], slim->e_addr[5]);
+ BTFMSLIM_INFO(
+ "IFD Enum Addr: %.02x:%.02x:%.02x:%.02x:%.02x: %.02x",
+ slim_ifd->e_addr[0], slim_ifd->e_addr[1],
+ slim_ifd->e_addr[2], slim_ifd->e_addr[3],
+ slim_ifd->e_addr[4], slim_ifd->e_addr[5]);
+
+ chipset_ver = get_chipset_version();
+ BTFMSLIM_INFO("chipset soc version:%x", chipset_ver);
+
+ if (chipset_ver == QCA_HSP_SOC_ID_0100 ||
+ chipset_ver == QCA_HSP_SOC_ID_0110 ||
+ chipset_ver == QCA_HSP_SOC_ID_0200) {
+ BTFMSLIM_INFO("chipset is hastings prime, overwriting EA");
+ slim->e_addr[0] = 0x00;
+ slim->e_addr[1] = 0x01;
+ slim->e_addr[2] = 0x21;
+ slim->e_addr[3] = 0x02;
+ slim->e_addr[4] = 0x17;
+ slim->e_addr[5] = 0x02;
+
+ slim_ifd->e_addr[0] = 0x00;
+ slim_ifd->e_addr[1] = 0x00;
+ slim_ifd->e_addr[2] = 0x21;
+ slim_ifd->e_addr[3] = 0x02;
+ slim_ifd->e_addr[4] = 0x17;
+ slim_ifd->e_addr[5] = 0x02;
+ } else if (chipset_ver == QCA_HASTINGS_SOC_ID_0200) {
+ BTFMSLIM_INFO("chipset is hastings 2.0, overwriting EA");
+ slim->e_addr[0] = 0x00;
+ slim->e_addr[1] = 0x01;
+ slim->e_addr[2] = 0x20;
+ slim->e_addr[3] = 0x02;
+ slim->e_addr[4] = 0x17;
+ slim->e_addr[5] = 0x02;
+
+ slim_ifd->e_addr[0] = 0x00;
+ slim_ifd->e_addr[1] = 0x00;
+ slim_ifd->e_addr[2] = 0x20;
+ slim_ifd->e_addr[3] = 0x02;
+ slim_ifd->e_addr[4] = 0x17;
+ slim_ifd->e_addr[5] = 0x02;
+ }
+ BTFMSLIM_INFO(
+ "PGD Enum Addr: %.02x:%.02x:%.02x:%.02x:%.02x: %.02x",
+ slim->e_addr[0], slim->e_addr[1], slim->e_addr[2],
+ slim->e_addr[3], slim->e_addr[4], slim->e_addr[5]);
+ BTFMSLIM_INFO(
+ "IFD Enum Addr: %.02x:%.02x:%.02x:%.02x:%.02x: %.02x",
+ slim_ifd->e_addr[0], slim_ifd->e_addr[1],
+ slim_ifd->e_addr[2], slim_ifd->e_addr[3],
+ slim_ifd->e_addr[4], slim_ifd->e_addr[5]);
/* Assign Logical Address for PGD (Ported Generic Device)
* enumeration address
diff --git a/drivers/bluetooth/btfm_slim_slave.h b/drivers/bluetooth/btfm_slim_slave.h
index 88d1484..67e08a6 100644
--- a/drivers/bluetooth/btfm_slim_slave.h
+++ b/drivers/bluetooth/btfm_slim_slave.h
@@ -103,6 +103,12 @@ enum {
QCA_HASTINGS_SOC_ID_0200 = 0x400A0200,
};
+enum {
+ QCA_HSP_SOC_ID_0100 = 0x400C0100,
+ QCA_HSP_SOC_ID_0110 = 0x400C0110,
+ QCA_HSP_SOC_ID_0200 = 0x400C0200,
+};
+
/* Function Prototype */
/*
diff --git a/drivers/bus/mhi/controllers/mhi_arch_qcom.c b/drivers/bus/mhi/controllers/mhi_arch_qcom.c
index 1678f4c..ce4a33b 100644
--- a/drivers/bus/mhi/controllers/mhi_arch_qcom.c
+++ b/drivers/bus/mhi/controllers/mhi_arch_qcom.c
@@ -39,16 +39,18 @@ struct arch_info {
#define DLOG "Dev->Host: "
#define HLOG "Host: "
-#define MHI_TSYNC_LOG_PAGES (10)
+#define MHI_TSYNC_LOG_PAGES (2)
#ifdef CONFIG_MHI_DEBUG
#define MHI_IPC_LOG_PAGES (100)
+#define MHI_CNTRL_LOG_PAGES (25)
enum MHI_DEBUG_LEVEL mhi_ipc_log_lvl = MHI_MSG_LVL_VERBOSE;
#else
#define MHI_IPC_LOG_PAGES (10)
+#define MHI_CNTRL_LOG_PAGES (5)
enum MHI_DEBUG_LEVEL mhi_ipc_log_lvl = MHI_MSG_LVL_ERROR;
#endif
@@ -143,7 +145,7 @@ static void mhi_arch_pci_link_state_cb(struct msm_pcie_notify *notify)
switch (notify->event) {
case MSM_PCIE_EVENT_WAKEUP:
- MHI_LOG("Received MSM_PCIE_EVENT_WAKE signal\n");
+ MHI_CNTRL_LOG("Received PCIE_WAKE signal\n");
/* bring link out of d3cold */
if (mhi_dev->powered_on) {
@@ -152,14 +154,14 @@ static void mhi_arch_pci_link_state_cb(struct msm_pcie_notify *notify)
}
break;
case MSM_PCIE_EVENT_L1SS_TIMEOUT:
- MHI_VERB("Received MSM_PCIE_EVENT_L1SS_TIMEOUT signal\n");
+ MHI_VERB("Received PCIE_L1SS_TIMEOUT signal\n");
pm_runtime_mark_last_busy(&pci_dev->dev);
pm_request_autosuspend(&pci_dev->dev);
break;
case MSM_PCIE_EVENT_DRV_CONNECT:
/* drv is connected we can suspend now */
- MHI_LOG("Received MSM_PCIE_EVENT_DRV_CONNECT signal\n");
+ MHI_CNTRL_LOG("Received DRV_CONNECT signal\n");
arch_info->drv_connected = true;
@@ -174,7 +176,7 @@ static void mhi_arch_pci_link_state_cb(struct msm_pcie_notify *notify)
mutex_unlock(&mhi_cntrl->pm_mutex);
break;
case MSM_PCIE_EVENT_DRV_DISCONNECT:
- MHI_LOG("Received MSM_PCIE_EVENT_DRV_DISCONNECT signal\n");
+ MHI_CNTRL_LOG("Received DRV_DISCONNECT signal\n");
/*
* if link suspended bring it out of suspend and disable runtime
@@ -184,7 +186,7 @@ static void mhi_arch_pci_link_state_cb(struct msm_pcie_notify *notify)
pm_runtime_forbid(&pci_dev->dev);
break;
default:
- MHI_ERR("Unhandled event 0x%x\n", notify->event);
+ MHI_CNTRL_LOG("Unhandled event 0x%x\n", notify->event);
}
}
@@ -197,12 +199,12 @@ static int mhi_arch_esoc_ops_power_on(void *priv, unsigned int flags)
mutex_lock(&mhi_cntrl->pm_mutex);
if (mhi_dev->powered_on) {
- MHI_LOG("MHI still in active state\n");
+ MHI_CNTRL_LOG("MHI still in active state\n");
mutex_unlock(&mhi_cntrl->pm_mutex);
return 0;
}
- MHI_LOG("Enter: mdm_crashed:%d\n", flags & ESOC_HOOK_MDM_CRASH);
+ MHI_CNTRL_LOG("Enter: mdm_crashed:%d\n", flags & ESOC_HOOK_MDM_CRASH);
/* reset rpm state */
pm_runtime_set_active(&pci_dev->dev);
@@ -211,7 +213,7 @@ static int mhi_arch_esoc_ops_power_on(void *priv, unsigned int flags)
pm_runtime_forbid(&pci_dev->dev);
ret = pm_runtime_get_sync(&pci_dev->dev);
if (ret < 0) {
- MHI_ERR("Error with rpm resume, ret:%d\n", ret);
+ MHI_CNTRL_ERR("Error with rpm resume, ret:%d\n", ret);
return ret;
}
@@ -219,7 +221,7 @@ static int mhi_arch_esoc_ops_power_on(void *priv, unsigned int flags)
ret = msm_pcie_pm_control(MSM_PCIE_RESUME, pci_dev->bus->number,
pci_dev, NULL, 0);
if (ret) {
- MHI_ERR("Failed to resume pcie bus ret %d\n", ret);
+ MHI_CNTRL_ERR("Failed to resume pcie bus ret %d\n", ret);
return ret;
}
@@ -231,7 +233,7 @@ static void mhi_arch_link_off(struct mhi_controller *mhi_cntrl)
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
struct pci_dev *pci_dev = mhi_dev->pci_dev;
- MHI_LOG("Entered\n");
+ MHI_CNTRL_LOG("Entered\n");
pci_set_power_state(pci_dev, PCI_D3hot);
@@ -239,7 +241,7 @@ static void mhi_arch_link_off(struct mhi_controller *mhi_cntrl)
msm_pcie_pm_control(MSM_PCIE_SUSPEND, mhi_cntrl->bus, pci_dev, NULL, 0);
mhi_arch_set_bus_request(mhi_cntrl, 0);
- MHI_LOG("Exited\n");
+ MHI_CNTRL_LOG("Exited\n");
}
static void mhi_arch_esoc_ops_power_off(void *priv, unsigned int flags)
@@ -250,7 +252,7 @@ static void mhi_arch_esoc_ops_power_off(void *priv, unsigned int flags)
struct pci_dev *pci_dev = mhi_dev->pci_dev;
bool mdm_state = (flags & ESOC_HOOK_MDM_CRASH);
- MHI_LOG("Enter: mdm_crashed:%d\n", mdm_state);
+ MHI_CNTRL_LOG("Enter: mdm_crashed:%d\n", mdm_state);
/*
* Abort system suspend if system is preparing to go to suspend
@@ -266,7 +268,7 @@ static void mhi_arch_esoc_ops_power_off(void *priv, unsigned int flags)
mutex_lock(&mhi_cntrl->pm_mutex);
if (!mhi_dev->powered_on) {
- MHI_LOG("Not in active state\n");
+ MHI_CNTRL_LOG("Not in active state\n");
mutex_unlock(&mhi_cntrl->pm_mutex);
pm_runtime_put_noidle(&pci_dev->dev);
return;
@@ -276,7 +278,7 @@ static void mhi_arch_esoc_ops_power_off(void *priv, unsigned int flags)
pm_runtime_put_noidle(&pci_dev->dev);
- MHI_LOG("Triggering shutdown process\n");
+ MHI_CNTRL_LOG("Triggering shutdown process\n");
mhi_power_down(mhi_cntrl, !mdm_state);
/* turn the link off */
@@ -293,12 +295,10 @@ static void mhi_arch_esoc_ops_mdm_error(void *priv)
{
struct mhi_controller *mhi_cntrl = priv;
- MHI_LOG("Enter: mdm asserted\n");
+ MHI_CNTRL_LOG("Enter: mdm asserted\n");
/* transition MHI state into error state */
mhi_control_error(mhi_cntrl);
-
- MHI_LOG("Exit\n");
}
static void mhi_bl_dl_cb(struct mhi_device *mhi_device,
@@ -372,8 +372,9 @@ static int mhi_arch_pcie_scale_bw(struct mhi_controller *mhi_cntrl,
/* do a bus scale vote based on gen speeds */
mhi_arch_set_bus_request(mhi_cntrl, link_info->target_link_speed);
- MHI_VERB("bw changed to speed:0x%x width:0x%x\n",
- link_info->target_link_speed, link_info->target_link_width);
+ MHI_LOG("BW changed to speed:0x%x width:0x%x\n",
+ link_info->target_link_speed,
+ link_info->target_link_width);
return 0;
}
@@ -400,7 +401,7 @@ static int mhi_bl_probe(struct mhi_device *mhi_device,
mhi_device->slot);
arch_info->boot_dev = mhi_device;
- arch_info->boot_ipc_log = ipc_log_context_create(MHI_IPC_LOG_PAGES,
+ arch_info->boot_ipc_log = ipc_log_context_create(MHI_CNTRL_LOG_PAGES,
node_name, 0);
ipc_log_string(arch_info->boot_ipc_log, HLOG
"Entered SBL, Session ID:0x%x\n", mhi_cntrl->session_id);
@@ -454,6 +455,12 @@ int mhi_arch_pcie_init(struct mhi_controller *mhi_cntrl)
node, 0);
mhi_cntrl->log_lvl = mhi_ipc_log_lvl;
+ snprintf(node, sizeof(node), "mhi_cntrl_%04x_%02u.%02u.%02u",
+ mhi_cntrl->dev_id, mhi_cntrl->domain, mhi_cntrl->bus,
+ mhi_cntrl->slot);
+ mhi_cntrl->cntrl_log_buf = ipc_log_context_create(
+ MHI_CNTRL_LOG_PAGES, node, 0);
+
snprintf(node, sizeof(node), "mhi_tsync_%04x_%02u.%02u.%02u",
mhi_cntrl->dev_id, mhi_cntrl->domain, mhi_cntrl->bus,
mhi_cntrl->slot);
@@ -495,7 +502,8 @@ int mhi_arch_pcie_init(struct mhi_controller *mhi_cntrl)
reg_event->notify.data = mhi_cntrl;
ret = msm_pcie_register_event(reg_event);
if (ret)
- MHI_LOG("Failed to reg. for link up notification\n");
+ MHI_CNTRL_ERR(
+ "Failed to reg. for link up notification\n");
init_completion(&arch_info->pm_completion);
@@ -512,7 +520,7 @@ int mhi_arch_pcie_init(struct mhi_controller *mhi_cntrl)
arch_info->esoc_client = devm_register_esoc_client(
&mhi_dev->pci_dev->dev, "mdm");
if (IS_ERR_OR_NULL(arch_info->esoc_client)) {
- MHI_ERR("Failed to register esoc client\n");
+ MHI_CNTRL_ERR("Failed to register esoc client\n");
} else {
/* register for power on/off hooks */
struct esoc_client_hook *esoc_ops =
@@ -530,7 +538,7 @@ int mhi_arch_pcie_init(struct mhi_controller *mhi_cntrl)
ret = esoc_register_client_hook(arch_info->esoc_client,
esoc_ops);
if (ret)
- MHI_ERR("Failed to register esoc ops\n");
+ MHI_CNTRL_ERR("Failed to register esoc ops\n");
}
/*
@@ -579,12 +587,17 @@ static int mhi_arch_drv_suspend(struct mhi_controller *mhi_cntrl)
if (cur_link_info->target_link_speed != PCI_EXP_LNKSTA_CLS_2_5GB) {
link_info.target_link_speed = PCI_EXP_LNKSTA_CLS_2_5GB;
link_info.target_link_width = cur_link_info->target_link_width;
- ret = mhi_arch_pcie_scale_bw(mhi_cntrl, pci_dev, &link_info);
+
+ ret = msm_pcie_set_link_bandwidth(pci_dev,
+ link_info.target_link_speed,
+ link_info.target_link_width);
if (ret) {
- MHI_ERR("Failed to switch Gen1 speed\n");
+ MHI_CNTRL_ERR("Failed to switch Gen1 speed\n");
return -EBUSY;
}
+ /* no DDR votes when doing a drv suspend */
+ mhi_arch_set_bus_request(mhi_cntrl, 0);
bw_switched = true;
}
@@ -593,9 +606,7 @@ static int mhi_arch_drv_suspend(struct mhi_controller *mhi_cntrl)
pci_dev, NULL, mhi_cntrl->wake_set ?
MSM_PCIE_CONFIG_NO_L1SS_TO : 0);
- /*
- * we failed to suspend and scaled down pcie bw.. need to scale up again
- */
+ /* failed to suspend and scaled down pcie bw, need to scale up again */
if (ret && bw_switched) {
mhi_arch_pcie_scale_bw(mhi_cntrl, pci_dev, cur_link_info);
return ret;
@@ -611,7 +622,8 @@ int mhi_arch_link_suspend(struct mhi_controller *mhi_cntrl)
struct pci_dev *pci_dev = mhi_dev->pci_dev;
int ret = 0;
- MHI_LOG("Entered\n");
+ MHI_LOG("Entered with suspend_mode:%s\n",
+ TO_MHI_SUSPEND_MODE_STR(mhi_dev->suspend_mode));
/* disable inactivity timer */
msm_pcie_l1ss_timeout_disable(pci_dev);
@@ -621,7 +633,8 @@ int mhi_arch_link_suspend(struct mhi_controller *mhi_cntrl)
pci_clear_master(pci_dev);
ret = pci_save_state(mhi_dev->pci_dev);
if (ret) {
- MHI_ERR("Failed with pci_save_state, ret:%d\n", ret);
+ MHI_CNTRL_ERR("Failed with pci_save_state, ret:%d\n",
+ ret);
goto exit_suspend;
}
@@ -640,6 +653,7 @@ int mhi_arch_link_suspend(struct mhi_controller *mhi_cntrl)
break;
case MHI_ACTIVE_STATE:
case MHI_FAST_LINK_ON:/* keeping link on do nothing */
+ default:
break;
}
@@ -660,8 +674,6 @@ static int __mhi_arch_link_resume(struct mhi_controller *mhi_cntrl)
struct mhi_link_info *cur_info = &mhi_cntrl->mhi_link_info;
int ret;
- MHI_LOG("Entered\n");
-
/* request bus scale voting based on higher gen speed */
ret = mhi_arch_set_bus_request(mhi_cntrl,
cur_info->target_link_speed);
@@ -704,7 +716,8 @@ int mhi_arch_link_resume(struct mhi_controller *mhi_cntrl)
struct mhi_link_info *cur_info = &mhi_cntrl->mhi_link_info;
int ret = 0;
- MHI_LOG("Entered\n");
+ MHI_LOG("Entered with suspend_mode:%s\n",
+ TO_MHI_SUSPEND_MODE_STR(mhi_dev->suspend_mode));
switch (mhi_dev->suspend_mode) {
case MHI_DEFAULT_SUSPEND:
@@ -713,35 +726,37 @@ int mhi_arch_link_resume(struct mhi_controller *mhi_cntrl)
case MHI_FAST_LINK_OFF:
ret = msm_pcie_pm_control(MSM_PCIE_RESUME, mhi_cntrl->bus,
pci_dev, NULL, 0);
- if (ret ||
- cur_info->target_link_speed == PCI_EXP_LNKSTA_CLS_2_5GB)
+ if (ret)
break;
+ if (cur_info->target_link_speed == PCI_EXP_LNKSTA_CLS_2_5GB) {
+ mhi_arch_set_bus_request(mhi_cntrl,
+ cur_info->target_link_speed);
+ break;
+ }
+
/*
* BW request from device isn't for gen 1 link speed, we can
* only print an error here.
*/
if (mhi_arch_pcie_scale_bw(mhi_cntrl, pci_dev, cur_info))
- MHI_ERR(
+ MHI_CNTRL_ERR(
"Failed to honor bw request: speed:0x%x width:0x%x\n",
cur_info->target_link_speed,
cur_info->target_link_width);
break;
case MHI_ACTIVE_STATE:
case MHI_FAST_LINK_ON:
+ default:
break;
}
- if (ret) {
- MHI_ERR("Link training failed, ret:%d\n", ret);
- return ret;
- }
+ if (!ret)
+ msm_pcie_l1ss_timeout_enable(pci_dev);
- msm_pcie_l1ss_timeout_enable(pci_dev);
+ MHI_LOG("Exited with ret:%d\n", ret);
- MHI_LOG("Exited\n");
-
- return 0;
+ return ret;
}
int mhi_arch_link_lpm_disable(struct mhi_controller *mhi_cntrl)
diff --git a/drivers/bus/mhi/controllers/mhi_qcom.c b/drivers/bus/mhi/controllers/mhi_qcom.c
index 1257338..61de2b1 100644
--- a/drivers/bus/mhi/controllers/mhi_qcom.c
+++ b/drivers/bus/mhi/controllers/mhi_qcom.c
@@ -34,12 +34,19 @@ static const struct firmware_info firmware_table[] = {
static int debug_mode;
module_param_named(debug_mode, debug_mode, int, 0644);
+const char * const mhi_suspend_mode_str[MHI_SUSPEND_MODE_MAX] = {
+ [MHI_ACTIVE_STATE] = "Active",
+ [MHI_DEFAULT_SUSPEND] = "Default",
+ [MHI_FAST_LINK_OFF] = "Fast Link Off",
+ [MHI_FAST_LINK_ON] = "Fast Link On",
+};
+
int mhi_debugfs_trigger_m0(void *data, u64 val)
{
struct mhi_controller *mhi_cntrl = data;
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
- MHI_LOG("Trigger M3 Exit\n");
+ MHI_CNTRL_LOG("Trigger M3 Exit\n");
pm_runtime_get(&mhi_dev->pci_dev->dev);
pm_runtime_put(&mhi_dev->pci_dev->dev);
@@ -53,7 +60,7 @@ int mhi_debugfs_trigger_m3(void *data, u64 val)
struct mhi_controller *mhi_cntrl = data;
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
- MHI_LOG("Trigger M3 Entry\n");
+ MHI_CNTRL_LOG("Trigger M3 Entry\n");
pm_runtime_mark_last_busy(&mhi_dev->pci_dev->dev);
pm_request_autosuspend(&mhi_dev->pci_dev->dev);
@@ -92,19 +99,19 @@ static int mhi_init_pci_dev(struct mhi_controller *mhi_cntrl)
mhi_dev->resn = MHI_PCI_BAR_NUM;
ret = pci_assign_resource(pci_dev, mhi_dev->resn);
if (ret) {
- MHI_ERR("Error assign pci resources, ret:%d\n", ret);
+ MHI_CNTRL_ERR("Error assign pci resources, ret:%d\n", ret);
return ret;
}
ret = pci_enable_device(pci_dev);
if (ret) {
- MHI_ERR("Error enabling device, ret:%d\n", ret);
+ MHI_CNTRL_ERR("Error enabling device, ret:%d\n", ret);
goto error_enable_device;
}
ret = pci_request_region(pci_dev, mhi_dev->resn, "mhi");
if (ret) {
- MHI_ERR("Error pci_request_region, ret:%d\n", ret);
+ MHI_CNTRL_ERR("Error pci_request_region, ret:%d\n", ret);
goto error_request_region;
}
@@ -114,14 +121,14 @@ static int mhi_init_pci_dev(struct mhi_controller *mhi_cntrl)
len = pci_resource_len(pci_dev, mhi_dev->resn);
mhi_cntrl->regs = ioremap_nocache(mhi_cntrl->base_addr, len);
if (!mhi_cntrl->regs) {
- MHI_ERR("Error ioremap region\n");
+ MHI_CNTRL_ERR("Error ioremap region\n");
goto error_ioremap;
}
ret = pci_alloc_irq_vectors(pci_dev, mhi_cntrl->msi_required,
mhi_cntrl->msi_required, PCI_IRQ_MSI);
if (IS_ERR_VALUE((ulong)ret) || ret < mhi_cntrl->msi_required) {
- MHI_ERR("Failed to enable MSI, ret:%d\n", ret);
+ MHI_CNTRL_ERR("Failed to enable MSI, ret:%d\n", ret);
goto error_req_msi;
}
@@ -395,7 +402,12 @@ static int mhi_force_suspend(struct mhi_controller *mhi_cntrl)
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
int itr = DIV_ROUND_UP(mhi_cntrl->timeout_ms, delayms);
- MHI_LOG("Entered\n");
+ MHI_CNTRL_LOG("Entered\n");
+
+ if (debug_mode == MHI_DEBUG_NO_D3 || debug_mode == MHI_FWIMAGE_NO_D3) {
+ MHI_CNTRL_LOG("Exited due to debug mode:%d\n", debug_mode);
+ return ret;
+ }
mutex_lock(&mhi_cntrl->pm_mutex);
@@ -411,12 +423,12 @@ static int mhi_force_suspend(struct mhi_controller *mhi_cntrl)
if (!ret || ret != -EBUSY)
break;
- MHI_LOG("MHI busy, sleeping and retry\n");
+ MHI_CNTRL_LOG("MHI busy, sleeping and retry\n");
msleep(delayms);
}
if (ret) {
- MHI_ERR("Force suspend ret with %d\n", ret);
+ MHI_CNTRL_ERR("Force suspend ret:%d\n", ret);
goto exit_force_suspend;
}
@@ -552,14 +564,14 @@ static void mhi_status_cb(struct mhi_controller *mhi_cntrl,
pm_runtime_get(dev);
ret = mhi_force_suspend(mhi_cntrl);
if (!ret) {
- MHI_LOG("Attempt resume after forced suspend\n");
+ MHI_CNTRL_LOG("Attempt resume after forced suspend\n");
mhi_runtime_resume(dev);
}
pm_runtime_put(dev);
mhi_arch_mission_mode_enter(mhi_cntrl);
break;
default:
- MHI_ERR("Unhandled cb:0x%x\n", reason);
+ MHI_CNTRL_LOG("Unhandled cb:0x%x\n", reason);
}
}
@@ -670,7 +682,7 @@ static struct mhi_controller *mhi_register_controller(struct pci_dev *pci_dev)
bool use_s1;
u32 addr_win[2];
const char *iommu_dma_type;
- int ret, i;
+ int ret, i, len;
if (!of_node)
return ERR_PTR(-ENODEV);
@@ -746,14 +758,22 @@ static struct mhi_controller *mhi_register_controller(struct pci_dev *pci_dev)
if (ret)
goto error_register;
- for (i = 0; i < ARRAY_SIZE(firmware_table); i++) {
+ len = ARRAY_SIZE(firmware_table);
+ for (i = 0; i < len; i++) {
firmware_info = firmware_table + i;
- /* debug mode always use default */
- if (!debug_mode && mhi_cntrl->dev_id == firmware_info->dev_id)
+ if (mhi_cntrl->dev_id == firmware_info->dev_id)
break;
}
+ if (debug_mode) {
+ if (debug_mode <= MHI_DEBUG_D3)
+ firmware_info = firmware_table + (len - 1);
+ MHI_CNTRL_LOG("fw info: debug_mode:%d dev_id:%d image:%s\n",
+ debug_mode, firmware_info->dev_id,
+ firmware_info->fw_image);
+ }
+
mhi_cntrl->fw_image = firmware_info->fw_image;
mhi_cntrl->edl_image = firmware_info->edl_image;
@@ -773,7 +793,7 @@ static struct mhi_controller *mhi_register_controller(struct pci_dev *pci_dev)
atomic_set(&mhi_cntrl->write_idx, -1);
if (sysfs_create_group(&mhi_cntrl->mhi_dev->dev.kobj, &mhi_qcom_group))
- MHI_ERR("Error while creating the sysfs group\n");
+ MHI_CNTRL_ERR("Error while creating the sysfs group\n");
return mhi_cntrl;
@@ -830,7 +850,7 @@ int mhi_pci_probe(struct pci_dev *pci_dev,
pm_runtime_mark_last_busy(&pci_dev->dev);
- MHI_LOG("Return successful\n");
+ MHI_CNTRL_LOG("Return successful\n");
return 0;
diff --git a/drivers/bus/mhi/controllers/mhi_qcom.h b/drivers/bus/mhi/controllers/mhi_qcom.h
index e1d58a8..2a5a82b 100644
--- a/drivers/bus/mhi/controllers/mhi_qcom.h
+++ b/drivers/bus/mhi/controllers/mhi_qcom.h
@@ -35,14 +35,26 @@
extern const char * const mhi_ee_str[MHI_EE_MAX];
#define TO_MHI_EXEC_STR(ee) (ee >= MHI_EE_MAX ? "INVALID_EE" : mhi_ee_str[ee])
+enum mhi_debug_mode {
+ MHI_DEBUG_MODE_OFF,
+ MHI_DEBUG_NO_D3, /* use debug.mbn as fw image and skip first M3/D3 */
+ MHI_DEBUG_D3, /* use debug.mbn as fw image and allow first M3/D3 */
+ MHI_FWIMAGE_NO_D3, /* use fw image if found and skip first M3/D3 */
+ MHI_FWIMAGE_D3, /* use fw image if found and allow first M3/D3 */
+ MHI_DEBUG_MODE_MAX = MHI_FWIMAGE_D3,
+};
+
enum mhi_suspend_mode {
MHI_ACTIVE_STATE,
MHI_DEFAULT_SUSPEND,
MHI_FAST_LINK_OFF,
MHI_FAST_LINK_ON,
+ MHI_SUSPEND_MODE_MAX,
};
-#define MHI_IS_SUSPENDED(mode) (mode)
+extern const char * const mhi_suspend_mode_str[MHI_SUSPEND_MODE_MAX];
+#define TO_MHI_SUSPEND_MODE_STR(mode) \
+ (mode >= MHI_SUSPEND_MODE_MAX ? "Invalid" : mhi_suspend_mode_str[mode])
struct mhi_dev {
struct pci_dev *pci_dev;
diff --git a/drivers/bus/mhi/core/mhi_boot.c b/drivers/bus/mhi/core/mhi_boot.c
index 3a6b745..e720028 100644
--- a/drivers/bus/mhi/core/mhi_boot.c
+++ b/drivers/bus/mhi/core/mhi_boot.c
@@ -51,7 +51,7 @@ static void mhi_process_sfr(struct mhi_controller *mhi_cntrl,
rem_seg_len = 0;
seg_idx++;
if (seg_idx == mhi_cntrl->rddm_image->entries) {
- MHI_ERR("invalid size for SFR file\n");
+ MHI_CNTRL_ERR("invalid size for SFR file\n");
goto err;
}
}
@@ -80,7 +80,7 @@ static int mhi_find_next_file_offset(struct mhi_controller *mhi_cntrl,
while (info->file_size) {
info->seg_idx++;
if (info->seg_idx == mhi_cntrl->rddm_image->entries) {
- MHI_ERR("invalid size for file %s\n",
+ MHI_CNTRL_ERR("invalid size for file %s\n",
table_info->file_name);
return -EINVAL;
}
@@ -109,14 +109,14 @@ void mhi_dump_sfr(struct mhi_controller *mhi_cntrl)
if (rddm_header->header_size > sizeof(*rddm_header) ||
rddm_header->header_size < 8) {
- MHI_ERR("invalid reported header size %u\n",
+ MHI_CNTRL_ERR("invalid reported header size %u\n",
rddm_header->header_size);
return;
}
table_size = (rddm_header->header_size - 8) / sizeof(*table_info);
if (!table_size) {
- MHI_ERR("invalid rddm table size %u\n", table_size);
+ MHI_CNTRL_ERR("invalid rddm table size %u\n", table_size);
return;
}
@@ -148,13 +148,13 @@ void mhi_rddm_prepare(struct mhi_controller *mhi_cntrl,
int i = 0;
for (i = 0; i < img_info->entries - 1; i++, mhi_buf++, bhi_vec++) {
- MHI_VERB("Setting vector:%pad size:%zu\n",
- &mhi_buf->dma_addr, mhi_buf->len);
+ MHI_CNTRL_LOG("Setting vector:%pad size:%zu\n",
+ &mhi_buf->dma_addr, mhi_buf->len);
bhi_vec->dma_addr = mhi_buf->dma_addr;
bhi_vec->size = mhi_buf->len;
}
- MHI_LOG("BHIe programming for RDDM\n");
+ MHI_CNTRL_LOG("BHIe programming for RDDM\n");
mhi_cntrl->write_reg(mhi_cntrl, base, BHIE_RXVECADDR_HIGH_OFFS,
upper_32_bits(mhi_buf->dma_addr));
@@ -173,8 +173,8 @@ void mhi_rddm_prepare(struct mhi_controller *mhi_cntrl,
BHIE_RXVECDB_SEQNUM_BMSK, BHIE_RXVECDB_SEQNUM_SHFT,
sequence_id);
- MHI_LOG("address:%pad len:0x%lx sequence:%u\n",
- &mhi_buf->dma_addr, mhi_buf->len, sequence_id);
+ MHI_CNTRL_LOG("address:%pad len:0x%lx sequence:%u\n",
+ &mhi_buf->dma_addr, mhi_buf->len, sequence_id);
}
/* collect rddm during kernel panic */
@@ -189,10 +189,10 @@ static int __mhi_download_rddm_in_panic(struct mhi_controller *mhi_cntrl)
int rddm_retry = rddm_timeout_us / delayus; /* time to enter rddm */
void __iomem *base = mhi_cntrl->bhie;
- MHI_LOG("Entered with pm_state:%s dev_state:%s ee:%s\n",
- to_mhi_pm_state_str(mhi_cntrl->pm_state),
- TO_MHI_STATE_STR(mhi_cntrl->dev_state),
- TO_MHI_EXEC_STR(mhi_cntrl->ee));
+ MHI_CNTRL_LOG("Entered with pm_state:%s dev_state:%s ee:%s\n",
+ to_mhi_pm_state_str(mhi_cntrl->pm_state),
+ TO_MHI_STATE_STR(mhi_cntrl->dev_state),
+ TO_MHI_EXEC_STR(mhi_cntrl->ee));
/*
* This should only be executing during a kernel panic, we expect all
@@ -217,10 +217,10 @@ static int __mhi_download_rddm_in_panic(struct mhi_controller *mhi_cntrl)
ee = mhi_get_exec_env(mhi_cntrl);
if (ee != MHI_EE_RDDM) {
- MHI_LOG("Trigger device into RDDM mode using SYSERR\n");
+ MHI_CNTRL_LOG("Trigger device into RDDM mode using SYSERR\n");
mhi_set_mhi_state(mhi_cntrl, MHI_STATE_SYS_ERR);
- MHI_LOG("Waiting for device to enter RDDM\n");
+ MHI_CNTRL_LOG("Waiting for device to enter RDDM\n");
while (rddm_retry--) {
ee = mhi_get_exec_env(mhi_cntrl);
if (ee == MHI_EE_RDDM)
@@ -231,7 +231,7 @@ static int __mhi_download_rddm_in_panic(struct mhi_controller *mhi_cntrl)
if (rddm_retry <= 0) {
/* Hardware reset; force device to enter rddm */
- MHI_LOG(
+ MHI_CNTRL_LOG(
"Did not enter RDDM, do a host req. reset\n");
mhi_cntrl->write_reg(mhi_cntrl, mhi_cntrl->regs,
MHI_SOC_RESET_REQ_OFFSET,
@@ -242,8 +242,8 @@ static int __mhi_download_rddm_in_panic(struct mhi_controller *mhi_cntrl)
ee = mhi_get_exec_env(mhi_cntrl);
}
- MHI_LOG("Waiting for image download completion, current EE:%s\n",
- TO_MHI_EXEC_STR(ee));
+ MHI_CNTRL_LOG("Waiting for image download completion, current EE:%s\n",
+ TO_MHI_EXEC_STR(ee));
while (retry--) {
ret = mhi_read_reg_field(mhi_cntrl, base, BHIE_RXVECSTATUS_OFFS,
BHIE_RXVECSTATUS_STATUS_BMSK,
@@ -253,7 +253,7 @@ static int __mhi_download_rddm_in_panic(struct mhi_controller *mhi_cntrl)
return -EIO;
if (rx_status == BHIE_RXVECSTATUS_STATUS_XFER_COMPL) {
- MHI_LOG("RDDM successfully collected\n");
+ MHI_CNTRL_LOG("RDDM successfully collected\n");
return 0;
}
@@ -263,9 +263,9 @@ static int __mhi_download_rddm_in_panic(struct mhi_controller *mhi_cntrl)
ee = mhi_get_exec_env(mhi_cntrl);
ret = mhi_read_reg(mhi_cntrl, base, BHIE_RXVECSTATUS_OFFS, &rx_status);
- MHI_ERR("Did not complete RDDM transfer\n");
- MHI_ERR("Current EE:%s\n", TO_MHI_EXEC_STR(ee));
- MHI_ERR("RXVEC_STATUS:0x%x, ret:%d\n", rx_status, ret);
+ MHI_CNTRL_ERR("Did not complete RDDM transfer\n");
+ MHI_CNTRL_ERR("Current EE:%s\n", TO_MHI_EXEC_STR(ee));
+ MHI_CNTRL_ERR("RXVEC_STATUS:0x%x, ret:%d\n", rx_status, ret);
return -EIO;
}
@@ -279,7 +279,7 @@ int mhi_download_rddm_img(struct mhi_controller *mhi_cntrl, bool in_panic)
if (in_panic)
return __mhi_download_rddm_in_panic(mhi_cntrl);
- MHI_LOG("Waiting for image download completion\n");
+ MHI_CNTRL_LOG("Waiting for image download completion\n");
/* waiting for image download completion */
wait_event_timeout(mhi_cntrl->state_event,
@@ -307,7 +307,7 @@ static int mhi_fw_load_amss(struct mhi_controller *mhi_cntrl,
return -EIO;
}
- MHI_LOG("Starting BHIe Programming\n");
+ MHI_CNTRL_LOG("Starting BHIe Programming\n");
mhi_cntrl->write_reg(mhi_cntrl, base, BHIE_TXVECADDR_HIGH_OFFS,
upper_32_bits(mhi_buf->dma_addr));
@@ -327,11 +327,11 @@ static int mhi_fw_load_amss(struct mhi_controller *mhi_cntrl,
mhi_cntrl->sequence_id);
read_unlock_bh(pm_lock);
- MHI_LOG("Upper:0x%x Lower:0x%x len:0x%lx sequence:%u\n",
- upper_32_bits(mhi_buf->dma_addr),
- lower_32_bits(mhi_buf->dma_addr),
- mhi_buf->len, mhi_cntrl->sequence_id);
- MHI_LOG("Waiting for image transfer completion\n");
+ MHI_CNTRL_LOG("Upper:0x%x Lower:0x%x len:0x%lx sequence:%u\n",
+ upper_32_bits(mhi_buf->dma_addr),
+ lower_32_bits(mhi_buf->dma_addr),
+ mhi_buf->len, mhi_cntrl->sequence_id);
+ MHI_CNTRL_LOG("Waiting for image transfer completion\n");
/* waiting for image download completion */
wait_event_timeout(mhi_cntrl->state_event,
@@ -368,7 +368,7 @@ static int mhi_fw_load_sbl(struct mhi_controller *mhi_cntrl,
{ NULL },
};
- MHI_LOG("Starting BHI programming\n");
+ MHI_CNTRL_LOG("Starting BHI programming\n");
/* program start sbl download via bhi protocol */
read_lock_bh(pm_lock);
@@ -391,7 +391,7 @@ static int mhi_fw_load_sbl(struct mhi_controller *mhi_cntrl,
mhi_cntrl->session_id);
read_unlock_bh(pm_lock);
- MHI_LOG("Waiting for image transfer completion\n");
+ MHI_CNTRL_LOG("Waiting for image transfer completion\n");
/* waiting for image download completion */
wait_event_timeout(mhi_cntrl->state_event,
@@ -404,7 +404,7 @@ static int mhi_fw_load_sbl(struct mhi_controller *mhi_cntrl,
goto invalid_pm_state;
if (tx_status == BHI_STATUS_ERROR) {
- MHI_ERR("Image transfer failed\n");
+ MHI_CNTRL_ERR("Image transfer failed\n");
read_lock_bh(pm_lock);
if (MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
for (i = 0; error_reg[i].name; i++) {
@@ -412,8 +412,8 @@ static int mhi_fw_load_sbl(struct mhi_controller *mhi_cntrl,
error_reg[i].offset, &val);
if (ret)
break;
- MHI_ERR("reg:%s value:0x%x\n",
- error_reg[i].name, val);
+ MHI_CNTRL_ERR("reg:%s value:0x%x\n",
+ error_reg[i].name, val);
}
}
read_unlock_bh(pm_lock);
@@ -452,8 +452,8 @@ int mhi_alloc_bhie_table(struct mhi_controller *mhi_cntrl,
struct image_info *img_info;
struct mhi_buf *mhi_buf;
- MHI_LOG("Allocating bytes:%zu seg_size:%zu total_seg:%u\n",
- alloc_size, seg_size, segments);
+ MHI_CNTRL_LOG("Allocating bytes:%zu seg_size:%zu total_seg:%u\n",
+ alloc_size, seg_size, segments);
img_info = kzalloc(sizeof(*img_info), GFP_KERNEL);
if (!img_info)
@@ -480,7 +480,7 @@ int mhi_alloc_bhie_table(struct mhi_controller *mhi_cntrl,
if (!mhi_buf->buf)
goto error_alloc_segment;
- MHI_LOG("Entry:%d Address:0x%llx size:%lu\n", i,
+ MHI_CNTRL_LOG("Entry:%d Address:0x%llx size:%lu\n", i,
mhi_buf->dma_addr, mhi_buf->len);
}
@@ -488,7 +488,7 @@ int mhi_alloc_bhie_table(struct mhi_controller *mhi_cntrl,
img_info->entries = segments;
*image_info = img_info;
- MHI_LOG("Successfully allocated bhi vec table\n");
+ MHI_CNTRL_LOG("Successfully allocated bhi vec table\n");
return 0;
@@ -543,11 +543,11 @@ void mhi_fw_load_handler(struct mhi_controller *mhi_cntrl)
size_t size;
if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
- MHI_ERR("MHI is not in valid state\n");
+ MHI_CNTRL_ERR("MHI is not in valid state\n");
return;
}
- MHI_LOG("Device current EE:%s\n", TO_MHI_EXEC_STR(mhi_cntrl->ee));
+ MHI_CNTRL_LOG("Device current EE:%s\n", TO_MHI_EXEC_STR(mhi_cntrl->ee));
/* if device in pthru, do reset to ready state transition */
if (mhi_cntrl->ee == MHI_EE_PTHRU)
@@ -558,14 +558,28 @@ void mhi_fw_load_handler(struct mhi_controller *mhi_cntrl)
if (!fw_name || (mhi_cntrl->fbc_download && (!mhi_cntrl->sbl_size ||
!mhi_cntrl->seg_len))) {
- MHI_ERR("No firmware image defined or !sbl_size || !seg_len\n");
+ MHI_CNTRL_ERR(
+ "No firmware image defined or !sbl_size || !seg_len\n");
return;
}
ret = request_firmware(&firmware, fw_name, mhi_cntrl->dev);
if (ret) {
- MHI_ERR("Error loading firmware, ret:%d\n", ret);
- return;
+ if (!mhi_cntrl->fw_image_fallback) {
+ MHI_ERR("Error loading fw, ret:%d\n", ret);
+ return;
+ }
+
+ /* retry with the fallback fw image */
+ ret = request_firmware(&firmware, mhi_cntrl->fw_image_fallback,
+ mhi_cntrl->dev);
+ if (ret) {
+ MHI_ERR("Error loading fw_fb, ret:%d\n", ret);
+ return;
+ }
+
+ mhi_cntrl->status_cb(mhi_cntrl, mhi_cntrl->priv_data,
+ MHI_CB_FW_FALLBACK_IMG);
}
size = (mhi_cntrl->fbc_download) ? mhi_cntrl->sbl_size : firmware->size;
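The hunk above adds a second request_firmware() attempt using mhi_cntrl->fw_image_fallback before giving up, and reports MHI_CB_FW_FALLBACK_IMG when the fallback is used. A minimal, hedged sketch of that request-with-fallback pattern (the helper name and the used_fallback flag are illustrative, not part of the driver):

static int request_fw_with_fallback(const struct firmware **fw,
				    const char *primary, const char *fallback,
				    struct device *dev, bool *used_fallback)
{
	int ret;

	*used_fallback = false;
	ret = request_firmware(fw, primary, dev);
	if (!ret || !fallback)
		return ret;

	/* primary image missing or unreadable; retry with the fallback */
	ret = request_firmware(fw, fallback, dev);
	if (!ret)
		*used_fallback = true;
	return ret;
}

On success the caller still owns the firmware and must call release_firmware(*fw) when done, exactly as the driver does.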
@@ -576,7 +590,7 @@ void mhi_fw_load_handler(struct mhi_controller *mhi_cntrl)
buf = mhi_alloc_coherent(mhi_cntrl, size, &dma_addr, GFP_KERNEL);
if (!buf) {
- MHI_ERR("Could not allocate memory for image\n");
+ MHI_CNTRL_ERR("Could not allocate memory for image\n");
release_firmware(firmware);
return;
}
@@ -605,11 +619,11 @@ void mhi_fw_load_handler(struct mhi_controller *mhi_cntrl)
ret = mhi_alloc_bhie_table(mhi_cntrl, &mhi_cntrl->fbc_image,
firmware->size);
if (ret) {
- MHI_ERR("Error alloc size of %zu\n", firmware->size);
+ MHI_CNTRL_ERR("Error alloc size:%zu\n", firmware->size);
goto error_alloc_fw_table;
}
- MHI_LOG("Copying firmware image into vector table\n");
+ MHI_CNTRL_LOG("Copying firmware image into vector table\n");
/* load the firmware into BHIE vec table */
mhi_firmware_copy(mhi_cntrl, firmware, mhi_cntrl->fbc_image);
@@ -619,16 +633,16 @@ void mhi_fw_load_handler(struct mhi_controller *mhi_cntrl)
/* transitioning into MHI RESET->READY state */
ret = mhi_ready_state_transition(mhi_cntrl);
- MHI_LOG("To Reset->Ready PM_STATE:%s MHI_STATE:%s EE:%s, ret:%d\n",
- to_mhi_pm_state_str(mhi_cntrl->pm_state),
- TO_MHI_STATE_STR(mhi_cntrl->dev_state),
- TO_MHI_EXEC_STR(mhi_cntrl->ee), ret);
+ MHI_CNTRL_LOG("To Reset->Ready PM_STATE:%s MHI_STATE:%s EE:%s ret:%d\n",
+ to_mhi_pm_state_str(mhi_cntrl->pm_state),
+ TO_MHI_STATE_STR(mhi_cntrl->dev_state),
+ TO_MHI_EXEC_STR(mhi_cntrl->ee), ret);
if (!mhi_cntrl->fbc_download)
return;
if (ret) {
- MHI_ERR("Did not transition to READY state\n");
+ MHI_CNTRL_ERR("Did not transition to READY state\n");
goto error_read;
}
@@ -639,7 +653,7 @@ void mhi_fw_load_handler(struct mhi_controller *mhi_cntrl)
msecs_to_jiffies(mhi_cntrl->timeout_ms));
if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
- MHI_ERR("MHI did not enter BHIE\n");
+ MHI_CNTRL_ERR("MHI did not enter BHIE\n");
goto error_read;
}
@@ -649,7 +663,7 @@ void mhi_fw_load_handler(struct mhi_controller *mhi_cntrl)
/* last entry is vec table */
&image_info->mhi_buf[image_info->entries - 1]);
- MHI_LOG("amss fw_load, ret:%d\n", ret);
+ MHI_CNTRL_LOG("amss fw_load ret:%d\n", ret);
release_firmware(firmware);
diff --git a/drivers/bus/mhi/core/mhi_init.c b/drivers/bus/mhi/core/mhi_init.c
index 96a734d..32d6285 100644
--- a/drivers/bus/mhi/core/mhi_init.c
+++ b/drivers/bus/mhi/core/mhi_init.c
@@ -430,8 +430,8 @@ int mhi_init_irq_setup(struct mhi_controller *mhi_cntrl)
mhi_msi_handlr, IRQF_SHARED | IRQF_NO_SUSPEND,
"mhi", mhi_event);
if (ret) {
- MHI_ERR("Error requesting irq:%d for ev:%d\n",
- mhi_cntrl->irq[mhi_event->msi], i);
+ MHI_CNTRL_ERR("Error requesting irq:%d for ev:%d\n",
+ mhi_cntrl->irq[mhi_event->msi], i);
goto error_request;
}
}
@@ -767,7 +767,7 @@ static int mhi_init_timesync(struct mhi_controller *mhi_cntrl)
ret = mhi_get_capability_offset(mhi_cntrl, TIMESYNC_CAP_ID,
&time_offset);
if (ret) {
- MHI_LOG("No timesync capability found\n");
+ MHI_CNTRL_LOG("No timesync capability found\n");
return ret;
}
@@ -782,7 +782,7 @@ static int mhi_init_timesync(struct mhi_controller *mhi_cntrl)
INIT_LIST_HEAD(&mhi_tsync->head);
/* save time_offset for obtaining time */
- MHI_LOG("TIME OFFS:0x%x\n", time_offset);
+ MHI_CNTRL_LOG("TIME OFFS:0x%x\n", time_offset);
mhi_tsync->time_reg = mhi_cntrl->regs + time_offset
+ TIMESYNC_TIME_LOW_OFFSET;
@@ -791,7 +791,7 @@ static int mhi_init_timesync(struct mhi_controller *mhi_cntrl)
/* get timesync event ring configuration */
er_index = mhi_get_er_index(mhi_cntrl, MHI_ER_TSYNC_ELEMENT_TYPE);
if (er_index < 0) {
- MHI_LOG("Could not find timesync event ring\n");
+ MHI_CNTRL_LOG("Could not find timesync event ring\n");
return er_index;
}
@@ -820,7 +820,7 @@ int mhi_init_sfr(struct mhi_controller *mhi_cntrl)
sfr_info->buf_addr = mhi_alloc_coherent(mhi_cntrl, sfr_info->len,
&sfr_info->dma_addr, GFP_KERNEL);
if (!sfr_info->buf_addr) {
- MHI_ERR("Failed to allocate memory for sfr\n");
+ MHI_CNTRL_ERR("Failed to allocate memory for sfr\n");
return -ENOMEM;
}
@@ -828,14 +828,14 @@ int mhi_init_sfr(struct mhi_controller *mhi_cntrl)
ret = mhi_send_cmd(mhi_cntrl, NULL, MHI_CMD_SFR_CFG);
if (ret) {
- MHI_ERR("Failed to send sfr cfg cmd\n");
+ MHI_CNTRL_ERR("Failed to send sfr cfg cmd\n");
return ret;
}
ret = wait_for_completion_timeout(&sfr_info->completion,
msecs_to_jiffies(mhi_cntrl->timeout_ms));
if (!ret || sfr_info->ccs != MHI_EV_CC_SUCCESS) {
- MHI_ERR("Failed to get sfr cfg cmd completion\n");
+ MHI_CNTRL_ERR("Failed to get sfr cfg cmd completion\n");
return -EIO;
}
@@ -863,7 +863,7 @@ static int mhi_init_bw_scale(struct mhi_controller *mhi_cntrl)
bw_cfg_offset += BW_SCALE_CFG_OFFSET;
- MHI_LOG("BW_CFG OFFSET:0x%x\n", bw_cfg_offset);
+ MHI_CNTRL_LOG("BW_CFG OFFSET:0x%x\n", bw_cfg_offset);
/* advertise host support */
mhi_cntrl->write_reg(mhi_cntrl, mhi_cntrl->regs, bw_cfg_offset,
@@ -952,7 +952,7 @@ int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
{ 0, 0, 0 }
};
- MHI_LOG("Initializing MMIO\n");
+ MHI_CNTRL_LOG("Initializing MMIO\n");
/* set up DB register for all the chan rings */
ret = mhi_read_reg_field(mhi_cntrl, base, CHDBOFF, CHDBOFF_CHDBOFF_MASK,
@@ -960,7 +960,7 @@ int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
if (ret)
return -EIO;
- MHI_LOG("CHDBOFF:0x%x\n", val);
+ MHI_CNTRL_LOG("CHDBOFF:0x%x\n", val);
/* setup wake db */
mhi_cntrl->wake_db = base + val + (8 * MHI_DEV_WAKE_DB);
@@ -983,7 +983,7 @@ int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
if (ret)
return -EIO;
- MHI_LOG("ERDBOFF:0x%x\n", val);
+ MHI_CNTRL_LOG("ERDBOFF:0x%x\n", val);
mhi_event = mhi_cntrl->mhi_event;
for (i = 0; i < mhi_cntrl->total_ev_rings; i++, val += 8, mhi_event++) {
@@ -996,7 +996,7 @@ int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
/* set up DB register for primary CMD rings */
mhi_cntrl->mhi_cmd[PRIMARY_CMD_RING].ring.db_addr = base + CRDB_LOWER;
- MHI_LOG("Programming all MMIO values.\n");
+ MHI_CNTRL_LOG("Programming all MMIO values.\n");
for (i = 0; reg_info[i].offset; i++)
mhi_write_reg_field(mhi_cntrl, base, reg_info[i].offset,
reg_info[i].mask, reg_info[i].shift,
@@ -1225,7 +1225,7 @@ static int of_parse_ev_cfg(struct mhi_controller *mhi_cntrl,
mhi_event->process_event = mhi_process_ctrl_ev_ring;
break;
case MHI_ER_TSYNC_ELEMENT_TYPE:
- mhi_event->process_event = mhi_process_tsync_event_ring;
+ mhi_event->process_event = mhi_process_tsync_ev_ring;
break;
case MHI_ER_BW_SCALE_ELEMENT_TYPE:
mhi_event->process_event = mhi_process_bw_scale_ev_ring;
@@ -1729,7 +1729,7 @@ int mhi_prepare_for_power_up(struct mhi_controller *mhi_cntrl)
ret = mhi_init_dev_ctxt(mhi_cntrl);
if (ret) {
- MHI_ERR("Error with init dev_ctxt\n");
+ MHI_CNTRL_ERR("Error with init dev_ctxt\n");
goto error_dev_ctxt;
}
@@ -1749,7 +1749,7 @@ int mhi_prepare_for_power_up(struct mhi_controller *mhi_cntrl)
ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->regs, BHIEOFF,
&bhie_off);
if (ret) {
- MHI_ERR("Error getting bhie offset\n");
+ MHI_CNTRL_ERR("Error getting bhie offset\n");
goto bhie_error;
}
diff --git a/drivers/bus/mhi/core/mhi_internal.h b/drivers/bus/mhi/core/mhi_internal.h
index c885d63..c2b99e8 100644
--- a/drivers/bus/mhi/core/mhi_internal.h
+++ b/drivers/bus/mhi/core/mhi_internal.h
@@ -717,8 +717,6 @@ struct mhi_chan {
struct tsync_node {
struct list_head node;
u32 sequence;
- u32 int_sequence;
- u64 local_time;
u64 remote_time;
struct mhi_device *mhi_dev;
void (*cb_func)(struct mhi_device *mhi_dev, u32 sequence,
@@ -728,7 +726,9 @@ struct tsync_node {
struct mhi_timesync {
void __iomem *time_reg;
u32 int_sequence;
+ u64 local_time;
bool db_support;
+ bool db_response_pending;
spinlock_t lock; /* list protection */
struct list_head head;
};
@@ -787,8 +787,8 @@ int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
struct mhi_event *mhi_event, u32 event_quota);
int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
struct mhi_event *mhi_event, u32 event_quota);
-int mhi_process_tsync_event_ring(struct mhi_controller *mhi_cntrl,
- struct mhi_event *mhi_event, u32 event_quota);
+int mhi_process_tsync_ev_ring(struct mhi_controller *mhi_cntrl,
+ struct mhi_event *mhi_event, u32 event_quota);
int mhi_process_bw_scale_ev_ring(struct mhi_controller *mhi_cntrl,
struct mhi_event *mhi_event, u32 event_quota);
int mhi_send_cmd(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
diff --git a/drivers/bus/mhi/core/mhi_main.c b/drivers/bus/mhi/core/mhi_main.c
index 3deea95..9237651 100644
--- a/drivers/bus/mhi/core/mhi_main.c
+++ b/drivers/bus/mhi/core/mhi_main.c
@@ -118,7 +118,21 @@ static void mhi_reg_write_enqueue(struct mhi_controller *mhi_cntrl,
mhi_cntrl->reg_write_q[q_index].reg_addr = reg_addr;
mhi_cntrl->reg_write_q[q_index].val = val;
+
+ /*
+ * prevent reordering to make sure val is set before valid is set to
+ * true. This prevents an offload worker running on another core from
+ * writing a stale value to the register after seeing valid set to true.
+ */
+ smp_wmb();
+
mhi_cntrl->reg_write_q[q_index].valid = true;
+
+ /*
+ * make sure the valid flag is visible to other cores so the offload
+ * worker does not skip the reg write.
+ */
+ smp_wmb();
}
void mhi_write_reg_offload(struct mhi_controller *mhi_cntrl,
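The two smp_wmb() calls added above order the queue-slot payload against its valid flag so the offload worker never consumes a half-written entry. A minimal sketch of the producer/consumer pairing, with illustrative names; the consumer-side smp_rmb() is assumed to live in the offload worker and is not shown in this hunk:

struct reg_write_slot {
	void __iomem *reg_addr;
	u32 val;
	bool valid;
};

/* producer: publish the payload, then raise the flag */
static void slot_publish(struct reg_write_slot *slot, void __iomem *addr, u32 val)
{
	slot->reg_addr = addr;
	slot->val = val;
	smp_wmb();		/* payload visible before valid is set */
	slot->valid = true;
	smp_wmb();		/* flag visible so the worker does not skip the write */
}

/* consumer (offload worker): check the flag, then read the payload */
static bool slot_flush(struct reg_write_slot *slot)
{
	if (!slot->valid)
		return false;
	smp_rmb();		/* pairs with the producer's smp_wmb() */
	writel_relaxed(slot->val, slot->reg_addr);
	slot->valid = false;
	return true;
}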
@@ -1056,7 +1070,9 @@ static int parse_rsc_event(struct mhi_controller *mhi_cntrl,
result.transaction_status = (ev_code == MHI_EV_CC_OVERFLOW) ?
-EOVERFLOW : 0;
- result.bytes_xferd = xfer_len;
+
+ /* truncate to buf len if xfer_len is larger */
+ result.bytes_xferd = min_t(u16, xfer_len, buf_info->len);
result.buf_addr = buf_info->cb_buf;
result.dir = mhi_chan->dir;
@@ -1333,82 +1349,89 @@ int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
return count;
}
-int mhi_process_tsync_event_ring(struct mhi_controller *mhi_cntrl,
- struct mhi_event *mhi_event,
- u32 event_quota)
+int mhi_process_tsync_ev_ring(struct mhi_controller *mhi_cntrl,
+ struct mhi_event *mhi_event,
+ u32 event_quota)
{
- struct mhi_tre *dev_rp, *local_rp;
+ struct mhi_tre *dev_rp;
struct mhi_ring *ev_ring = &mhi_event->ring;
struct mhi_event_ctxt *er_ctxt =
&mhi_cntrl->mhi_ctxt->er_ctxt[mhi_event->er_index];
struct mhi_timesync *mhi_tsync = mhi_cntrl->mhi_tsync;
- int count = 0;
- u32 int_sequence, unit;
+ u32 sequence;
u64 remote_time;
+ int ret = 0;
- if (unlikely(MHI_EVENT_ACCESS_INVALID(mhi_cntrl->pm_state))) {
- MHI_LOG("No EV access, PM_STATE:%s\n",
- to_mhi_pm_state_str(mhi_cntrl->pm_state));
- return -EIO;
- }
-
+ spin_lock_bh(&mhi_event->lock);
dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
- local_rp = ev_ring->rp;
-
- while (dev_rp != local_rp) {
- enum MHI_PKT_TYPE type = MHI_TRE_GET_EV_TYPE(local_rp);
- struct tsync_node *tsync_node;
-
- MHI_VERB("Processing Event:0x%llx 0x%08x 0x%08x\n",
- local_rp->ptr, local_rp->dword[0], local_rp->dword[1]);
-
- MHI_ASSERT(type != MHI_PKT_TYPE_TSYNC_EVENT, "!TSYNC event");
-
- int_sequence = MHI_TRE_GET_EV_TSYNC_SEQ(local_rp);
- unit = MHI_TRE_GET_EV_TSYNC_UNIT(local_rp);
- remote_time = MHI_TRE_GET_EV_TIME(local_rp);
-
- do {
- spin_lock(&mhi_tsync->lock);
- tsync_node = list_first_entry_or_null(&mhi_tsync->head,
- struct tsync_node, node);
- if (!tsync_node) {
- spin_unlock(&mhi_tsync->lock);
- break;
- }
-
- list_del(&tsync_node->node);
- spin_unlock(&mhi_tsync->lock);
-
- /*
- * device may not able to process all time sync commands
- * host issue and only process last command it receive
- */
- if (tsync_node->int_sequence == int_sequence) {
- tsync_node->cb_func(tsync_node->mhi_dev,
- tsync_node->sequence,
- tsync_node->local_time,
- remote_time);
- kfree(tsync_node);
- } else {
- kfree(tsync_node);
- }
- } while (true);
-
- mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring);
- local_rp = ev_ring->rp;
- dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
- count++;
+ if (ev_ring->rp == dev_rp) {
+ spin_unlock_bh(&mhi_event->lock);
+ goto exit_tsync_process;
}
+ /* if rp points to base, we need to wrap it around */
+ if (dev_rp == ev_ring->base)
+ dev_rp = ev_ring->base + ev_ring->len;
+ dev_rp--;
+
+ /* fast forward to currently processed element and recycle er */
+ ev_ring->rp = dev_rp;
+ ev_ring->wp = dev_rp - 1;
+ if (ev_ring->wp < ev_ring->base)
+ ev_ring->wp = ev_ring->base + ev_ring->len - ev_ring->el_size;
+ mhi_recycle_fwd_ev_ring_element(mhi_cntrl, ev_ring);
+
+ MHI_ASSERT(MHI_TRE_GET_EV_TYPE(dev_rp) != MHI_PKT_TYPE_TSYNC_EVENT,
+ "!TSYNC event");
+
+ sequence = MHI_TRE_GET_EV_TSYNC_SEQ(dev_rp);
+ remote_time = MHI_TRE_GET_EV_TIME(dev_rp);
+
+ MHI_VERB("Received TSYNC event with seq:0x%llx time:0x%llx\n",
+ sequence, remote_time);
+
read_lock_bh(&mhi_cntrl->pm_lock);
if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl)))
mhi_ring_er_db(mhi_event);
read_unlock_bh(&mhi_cntrl->pm_lock);
+ spin_unlock_bh(&mhi_event->lock);
+ mutex_lock(&mhi_cntrl->tsync_mutex);
+
+ if (unlikely(mhi_tsync->int_sequence != sequence)) {
+ MHI_ASSERT(1, "Unexpected response:0x%llx Expected:0x%llx\n",
+ sequence, mhi_tsync->int_sequence);
+ mutex_unlock(&mhi_cntrl->tsync_mutex);
+ goto exit_tsync_process;
+ }
+
+ do {
+ struct tsync_node *tsync_node;
+
+ spin_lock(&mhi_tsync->lock);
+ tsync_node = list_first_entry_or_null(&mhi_tsync->head,
+ struct tsync_node, node);
+ if (!tsync_node) {
+ spin_unlock(&mhi_tsync->lock);
+ break;
+ }
+
+ list_del(&tsync_node->node);
+ spin_unlock(&mhi_tsync->lock);
+
+ tsync_node->cb_func(tsync_node->mhi_dev,
+ tsync_node->sequence,
+ mhi_tsync->local_time, remote_time);
+ kfree(tsync_node);
+ } while (true);
+
+ mhi_tsync->db_response_pending = false;
+ mutex_unlock(&mhi_cntrl->tsync_mutex);
+
+exit_tsync_process:
MHI_VERB("exit er_index:%u\n", mhi_event->er_index);
- return count;
+ return ret;
}
int mhi_process_bw_scale_ev_ring(struct mhi_controller *mhi_cntrl,
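Instead of walking every element, the rewritten tsync handler fast-forwards to the most recent event: it steps dev_rp back by one element, wrapping to the end of the ring when dev_rp sits at the base, and recycles everything older. A small sketch of that wrap-around step, using an element count rather than the driver's byte length (names are illustrative):

static struct mhi_tre *ring_last_element(struct mhi_tre *base, size_t n_els,
					 struct mhi_tre *dev_rp)
{
	/*
	 * dev_rp points at the next element the device will write; the most
	 * recently completed element is the one before it, wrapping at base.
	 */
	if (dev_rp == base)
		dev_rp = base + n_els;
	return dev_rp - 1;
}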
@@ -2561,7 +2584,7 @@ int mhi_get_remote_time(struct mhi_device *mhi_dev,
struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
struct mhi_timesync *mhi_tsync = mhi_cntrl->mhi_tsync;
struct tsync_node *tsync_node;
- int ret;
+ int ret = 0;
/* not all devices support all time features */
mutex_lock(&mhi_cntrl->tsync_mutex);
@@ -2585,6 +2608,10 @@ int mhi_get_remote_time(struct mhi_device *mhi_dev,
}
read_unlock_bh(&mhi_cntrl->pm_lock);
+ MHI_LOG("Enter with pm_state:%s MHI_STATE:%s\n",
+ to_mhi_pm_state_str(mhi_cntrl->pm_state),
+ TO_MHI_STATE_STR(mhi_cntrl->dev_state));
+
/*
* technically we can use GFP_KERNEL, but wants to avoid
* # of times scheduling out
*/
@@ -2595,15 +2622,17 @@ int mhi_get_remote_time(struct mhi_device *mhi_dev,
goto error_no_mem;
}
+ tsync_node->sequence = sequence;
+ tsync_node->cb_func = cb_func;
+ tsync_node->mhi_dev = mhi_dev;
+
+ if (mhi_tsync->db_response_pending)
+ goto skip_tsync_db;
+
mhi_tsync->int_sequence++;
if (mhi_tsync->int_sequence == 0xFFFFFFFF)
mhi_tsync->int_sequence = 0;
- tsync_node->sequence = sequence;
- tsync_node->int_sequence = mhi_tsync->int_sequence;
- tsync_node->cb_func = cb_func;
- tsync_node->mhi_dev = mhi_dev;
-
/* disable link level low power modes */
ret = mhi_cntrl->lpm_disable(mhi_cntrl, mhi_cntrl->priv_data);
if (ret) {
@@ -2612,10 +2641,6 @@ int mhi_get_remote_time(struct mhi_device *mhi_dev,
goto error_invalid_state;
}
- spin_lock(&mhi_tsync->lock);
- list_add_tail(&tsync_node->node, &mhi_tsync->head);
- spin_unlock(&mhi_tsync->lock);
-
/*
* time critical code, delay between these two steps should be
* deterministic as possible.
@@ -2623,9 +2648,9 @@ int mhi_get_remote_time(struct mhi_device *mhi_dev,
preempt_disable();
local_irq_disable();
- tsync_node->local_time =
+ mhi_tsync->local_time =
mhi_cntrl->time_get(mhi_cntrl, mhi_cntrl->priv_data);
- writel_relaxed_no_log(tsync_node->int_sequence, mhi_cntrl->tsync_db);
+ writel_relaxed_no_log(mhi_tsync->int_sequence, mhi_cntrl->tsync_db);
/* write must go thru immediately */
wmb();
@@ -2634,6 +2659,15 @@ int mhi_get_remote_time(struct mhi_device *mhi_dev,
mhi_cntrl->lpm_enable(mhi_cntrl, mhi_cntrl->priv_data);
+ MHI_VERB("time DB request with seq:0x%llx\n", mhi_tsync->int_sequence);
+
+ mhi_tsync->db_response_pending = true;
+
+skip_tsync_db:
+ spin_lock(&mhi_tsync->lock);
+ list_add_tail(&tsync_node->node, &mhi_tsync->head);
+ spin_unlock(&mhi_tsync->lock);
+
ret = 0;
error_invalid_state:
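Together with the mhi_timesync changes, this makes doorbell requests coalesce: the first caller captures local_time, rings the doorbell and sets db_response_pending; later callers only queue their tsync_node and share the eventual response. A rough sketch of the coalescing idea, with illustrative names and a hypothetical ring_time_sync_doorbell() helper (the real code also bumps int_sequence and disables link low-power modes around the doorbell write):

static int request_remote_time_coalesced(struct mhi_timesync *ts,
					 struct tsync_node *node)
{
	bool ring_db = false;

	spin_lock(&ts->lock);
	list_add_tail(&node->node, &ts->head);
	if (!ts->db_response_pending) {
		ts->db_response_pending = true;
		ring_db = true;		/* first waiter rings the doorbell */
	}
	spin_unlock(&ts->lock);

	if (ring_db)
		ring_time_sync_doorbell(ts);	/* hypothetical helper */
	return 0;
}

When the response arrives, the event handler completes every queued node with the same remote_time and clears db_response_pending.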
diff --git a/drivers/bus/mhi/core/mhi_pm.c b/drivers/bus/mhi/core/mhi_pm.c
index d28a749..a4e63bc9 100644
--- a/drivers/bus/mhi/core/mhi_pm.c
+++ b/drivers/bus/mhi/core/mhi_pm.c
@@ -253,7 +253,7 @@ int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl)
enum MHI_PM_STATE cur_state;
int ret, i;
- MHI_LOG("Waiting to enter READY state\n");
+ MHI_CNTRL_LOG("Waiting to enter READY state\n");
/* wait for RESET to be cleared and READY bit to be set */
wait_event_timeout(mhi_cntrl->state_event,
@@ -275,16 +275,16 @@ int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl)
if (reset || !ready)
return -ETIMEDOUT;
- MHI_LOG("Device in READY State\n");
+ MHI_CNTRL_LOG("Device in READY State\n");
write_lock_irq(&mhi_cntrl->pm_lock);
cur_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_POR);
mhi_cntrl->dev_state = MHI_STATE_READY;
write_unlock_irq(&mhi_cntrl->pm_lock);
if (cur_state != MHI_PM_POR) {
- MHI_ERR("Error moving to state %s from %s\n",
- to_mhi_pm_state_str(MHI_PM_POR),
- to_mhi_pm_state_str(cur_state));
+ MHI_CNTRL_ERR("Error moving to state %s from %s\n",
+ to_mhi_pm_state_str(MHI_PM_POR),
+ to_mhi_pm_state_str(cur_state));
return -EIO;
}
read_lock_bh(&mhi_cntrl->pm_lock);
@@ -293,7 +293,7 @@ int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl)
ret = mhi_init_mmio(mhi_cntrl);
if (ret) {
- MHI_ERR("Error programming mmio registers\n");
+ MHI_CNTRL_ERR("Error programming mmio registers\n");
goto error_mmio;
}
@@ -465,7 +465,7 @@ static int mhi_pm_mission_mode_transition(struct mhi_controller *mhi_cntrl)
enum mhi_ee ee = 0;
struct mhi_event *mhi_event;
- MHI_LOG("Processing Mission Mode Transition\n");
+ MHI_CNTRL_LOG("Processing Mission Mode Transition\n");
write_lock_irq(&mhi_cntrl->pm_lock);
if (MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state))
@@ -473,7 +473,7 @@ static int mhi_pm_mission_mode_transition(struct mhi_controller *mhi_cntrl)
write_unlock_irq(&mhi_cntrl->pm_lock);
if (!MHI_IN_MISSION_MODE(ee)) {
- MHI_ERR("Invalid EE:%s\n", TO_MHI_EXEC_STR(ee));
+ MHI_CNTRL_ERR("Invalid EE:%s\n", TO_MHI_EXEC_STR(ee));
return -EIO;
}
@@ -536,7 +536,7 @@ static int mhi_pm_mission_mode_transition(struct mhi_controller *mhi_cntrl)
/* setup sysfs nodes for userspace votes */
mhi_create_sysfs(mhi_cntrl);
- MHI_LOG("Adding new devices\n");
+ MHI_CNTRL_LOG("Adding new devices\n");
/* add supported devices */
mhi_create_devices(mhi_cntrl);
@@ -547,7 +547,7 @@ static int mhi_pm_mission_mode_transition(struct mhi_controller *mhi_cntrl)
mhi_cntrl->wake_put(mhi_cntrl, false);
read_unlock_bh(&mhi_cntrl->pm_lock);
- MHI_LOG("Exit with ret:%d\n", ret);
+ MHI_CNTRL_LOG("Exit with ret:%d\n", ret);
return ret;
}
@@ -564,7 +564,8 @@ static void mhi_pm_disable_transition(struct mhi_controller *mhi_cntrl,
struct mhi_sfr_info *sfr_info = mhi_cntrl->mhi_sfr;
int ret, i;
- MHI_LOG("Enter with from pm_state:%s MHI_STATE:%s to pm_state:%s\n",
+ MHI_CNTRL_LOG(
+ "Enter with from pm_state:%s MHI_STATE:%s to pm_state:%s\n",
to_mhi_pm_state_str(mhi_cntrl->pm_state),
TO_MHI_STATE_STR(mhi_cntrl->dev_state),
to_mhi_pm_state_str(transition_state));
@@ -595,8 +596,8 @@ static void mhi_pm_disable_transition(struct mhi_controller *mhi_cntrl,
/* not handling sys_err, could be middle of shut down */
if (cur_state != transition_state) {
- MHI_LOG("Failed to transition to state:0x%x from:0x%x\n",
- transition_state, cur_state);
+ MHI_CNTRL_LOG("Failed to transition to state:0x%x from:0x%x\n",
+ transition_state, cur_state);
mutex_unlock(&mhi_cntrl->pm_mutex);
return;
}
@@ -605,7 +606,7 @@ static void mhi_pm_disable_transition(struct mhi_controller *mhi_cntrl,
if (MHI_REG_ACCESS_VALID(prev_state)) {
unsigned long timeout = msecs_to_jiffies(mhi_cntrl->timeout_ms);
- MHI_LOG("Trigger device into MHI_RESET\n");
+ MHI_CNTRL_LOG("Trigger device into MHI_RESET\n");
write_lock_irq(&mhi_cntrl->pm_lock);
mhi_set_mhi_state(mhi_cntrl, MHI_STATE_RESET);
@@ -631,7 +632,8 @@ static void mhi_pm_disable_transition(struct mhi_controller *mhi_cntrl,
mhi_cntrl->initiate_mhi_reset = false;
}
- MHI_LOG("Waiting for all pending event ring processing to complete\n");
+ MHI_CNTRL_LOG(
+ "Waiting for all pending event ring processing to complete\n");
mhi_event = mhi_cntrl->mhi_event;
for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
if (!mhi_event->request_irq)
@@ -641,7 +643,7 @@ static void mhi_pm_disable_transition(struct mhi_controller *mhi_cntrl,
mutex_unlock(&mhi_cntrl->pm_mutex);
- MHI_LOG("Reset all active channels and remove mhi devices\n");
+ MHI_CNTRL_LOG("Reset all active channels and remove mhi devices\n");
device_for_each_child(mhi_cntrl->dev, NULL, mhi_destroy_device);
MHI_LOG("Finish resetting channels\n");
@@ -649,7 +651,7 @@ static void mhi_pm_disable_transition(struct mhi_controller *mhi_cntrl,
/* remove support for userspace votes */
mhi_destroy_sysfs(mhi_cntrl);
- MHI_LOG("Waiting for all pending threads to complete\n");
+ MHI_CNTRL_LOG("Waiting for all pending threads to complete\n");
wake_up_all(&mhi_cntrl->state_event);
flush_work(&mhi_cntrl->special_work);
@@ -665,7 +667,7 @@ static void mhi_pm_disable_transition(struct mhi_controller *mhi_cntrl,
MHI_ASSERT(atomic_read(&mhi_cntrl->pending_pkts), "pending_pkts != 0");
/* reset the ev rings and cmd rings */
- MHI_LOG("Resetting EV CTXT and CMD CTXT\n");
+ MHI_CNTRL_LOG("Resetting EV CTXT and CMD CTXT\n");
mhi_cmd = mhi_cntrl->mhi_cmd;
cmd_ctxt = mhi_cntrl->mhi_ctxt->cmd_ctxt;
for (i = 0; i < NR_OF_CMD_RINGS; i++, mhi_cmd++, cmd_ctxt++) {
@@ -701,14 +703,15 @@ static void mhi_pm_disable_transition(struct mhi_controller *mhi_cntrl,
cur_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_DISABLE);
write_unlock_irq(&mhi_cntrl->pm_lock);
if (unlikely(cur_state != MHI_PM_DISABLE))
- MHI_ERR("Error moving from pm state:%s to state:%s\n",
+ MHI_CNTRL_ERR(
+ "Error moving from pm state:%s to state:%s\n",
to_mhi_pm_state_str(cur_state),
to_mhi_pm_state_str(MHI_PM_DISABLE));
}
- MHI_LOG("Exit with pm_state:%s mhi_state:%s\n",
- to_mhi_pm_state_str(mhi_cntrl->pm_state),
- TO_MHI_STATE_STR(mhi_cntrl->dev_state));
+ MHI_CNTRL_LOG("Exit with pm_state:%s mhi_state:%s\n",
+ to_mhi_pm_state_str(mhi_cntrl->pm_state),
+ TO_MHI_STATE_STR(mhi_cntrl->dev_state));
mutex_unlock(&mhi_cntrl->pm_mutex);
}
@@ -719,7 +722,7 @@ int mhi_debugfs_trigger_reset(void *data, u64 val)
enum MHI_PM_STATE cur_state;
int ret;
- MHI_LOG("Trigger MHI Reset\n");
+ MHI_CNTRL_LOG("Trigger MHI Reset\n");
/* exit lpm first */
mhi_cntrl->runtime_get(mhi_cntrl, mhi_cntrl->priv_data);
@@ -731,7 +734,8 @@ int mhi_debugfs_trigger_reset(void *data, u64 val)
msecs_to_jiffies(mhi_cntrl->timeout_ms));
if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
- MHI_ERR("Did not enter M0 state, cur_state:%s pm_state:%s\n",
+ MHI_CNTRL_ERR(
+ "Did not enter M0 state, cur_state:%s pm_state:%s\n",
TO_MHI_STATE_STR(mhi_cntrl->dev_state),
to_mhi_pm_state_str(mhi_cntrl->pm_state));
return -EIO;
@@ -820,6 +824,9 @@ void mhi_special_purpose_work(struct work_struct *work)
TO_MHI_STATE_STR(mhi_cntrl->dev_state),
TO_MHI_EXEC_STR(mhi_cntrl->ee));
+ if (unlikely(MHI_EVENT_ACCESS_INVALID(mhi_cntrl->pm_state)))
+ return;
+
/* check special purpose event rings and process events */
list_for_each_entry(mhi_event, &mhi_cntrl->sp_ev_rings, node)
mhi_event->process_event(mhi_cntrl, mhi_event, U32_MAX);
@@ -832,7 +839,8 @@ void mhi_process_sys_err(struct mhi_controller *mhi_cntrl)
* instead we will jump directly to rddm state
*/
if (mhi_cntrl->rddm_image) {
- MHI_LOG("Controller supports RDDM, skipping SYS_ERR_PROCESS\n");
+ MHI_CNTRL_LOG(
+ "Controller supports RDDM, skipping SYS_ERR_PROCESS\n");
return;
}
@@ -856,8 +864,8 @@ void mhi_pm_st_worker(struct work_struct *work)
list_for_each_entry_safe(itr, tmp, &head, node) {
list_del(&itr->node);
- MHI_LOG("Transition to state:%s\n",
- TO_MHI_STATE_TRANS_STR(itr->state));
+ MHI_CNTRL_LOG("Transition to state:%s\n",
+ TO_MHI_STATE_TRANS_STR(itr->state));
switch (itr->state) {
case MHI_ST_TRANSITION_PBL:
@@ -894,7 +902,7 @@ int mhi_async_power_up(struct mhi_controller *mhi_cntrl)
enum MHI_ST_TRANSITION next_state;
struct mhi_device *mhi_dev = mhi_cntrl->mhi_dev;
- MHI_LOG("Requested to power on\n");
+ MHI_CNTRL_LOG("Requested to power on\n");
if (mhi_cntrl->msi_allocated < mhi_cntrl->total_ev_rings)
return -EINVAL;
@@ -919,14 +927,14 @@ int mhi_async_power_up(struct mhi_controller *mhi_cntrl)
/* setup device context */
ret = mhi_init_dev_ctxt(mhi_cntrl);
if (ret) {
- MHI_ERR("Error setting dev_context\n");
+ MHI_CNTRL_ERR("Error setting dev_context\n");
goto error_dev_ctxt;
}
}
ret = mhi_init_irq_setup(mhi_cntrl);
if (ret) {
- MHI_ERR("Error setting up irq\n");
+ MHI_CNTRL_ERR("Error setting up irq\n");
goto error_setup_irq;
}
@@ -946,7 +954,7 @@ int mhi_async_power_up(struct mhi_controller *mhi_cntrl)
ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->regs, BHIEOFF, &val);
if (ret) {
write_unlock_irq(&mhi_cntrl->pm_lock);
- MHI_ERR("Error getting bhie offset\n");
+ MHI_CNTRL_ERR("Error getting bhie offset\n");
goto error_bhi_offset;
}
@@ -962,7 +970,8 @@ int mhi_async_power_up(struct mhi_controller *mhi_cntrl)
/* confirm device is in valid exec env */
if (!MHI_IN_PBL(current_ee) && current_ee != MHI_EE_AMSS) {
- MHI_ERR("Not a valid ee for power on\n");
+ MHI_CNTRL_ERR("Not a valid EE for power on:%s\n",
+ TO_MHI_EXEC_STR(current_ee));
ret = -EIO;
goto error_bhi_offset;
}
@@ -977,7 +986,7 @@ int mhi_async_power_up(struct mhi_controller *mhi_cntrl)
mutex_unlock(&mhi_cntrl->pm_mutex);
- MHI_LOG("Power on setup success\n");
+ MHI_CNTRL_LOG("Power on setup success\n");
return 0;
@@ -1002,15 +1011,15 @@ void mhi_control_error(struct mhi_controller *mhi_cntrl)
enum MHI_PM_STATE cur_state, transition_state;
struct mhi_sfr_info *sfr_info = mhi_cntrl->mhi_sfr;
- MHI_LOG("Enter with pm_state:%s MHI_STATE:%s\n",
- to_mhi_pm_state_str(mhi_cntrl->pm_state),
- TO_MHI_STATE_STR(mhi_cntrl->dev_state));
+ MHI_CNTRL_LOG("Enter with pm_state:%s MHI_STATE:%s\n",
+ to_mhi_pm_state_str(mhi_cntrl->pm_state),
+ TO_MHI_STATE_STR(mhi_cntrl->dev_state));
/* copy subsystem failure reason string if supported */
if (sfr_info && sfr_info->buf_addr) {
memcpy(sfr_info->str, sfr_info->buf_addr, sfr_info->len);
- pr_err("mhi: %s sfr: %s\n", mhi_cntrl->name,
- sfr_info->buf_addr);
+ MHI_CNTRL_ERR("mhi:%s sfr: %s\n", mhi_cntrl->name,
+ sfr_info->buf_addr);
}
/* link is not down if device is in RDDM */
@@ -1023,9 +1032,9 @@ void mhi_control_error(struct mhi_controller *mhi_cntrl)
/* proceed if we move to device error or are already in error state */
if (!MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
- MHI_ERR("Failed to transition to state:%s from:%s\n",
- to_mhi_pm_state_str(transition_state),
- to_mhi_pm_state_str(cur_state));
+ MHI_CNTRL_ERR("Failed to transition to state:%s from:%s\n",
+ to_mhi_pm_state_str(transition_state),
+ to_mhi_pm_state_str(cur_state));
goto exit_control_error;
}
@@ -1038,9 +1047,9 @@ void mhi_control_error(struct mhi_controller *mhi_cntrl)
device_for_each_child(mhi_cntrl->dev, NULL, mhi_early_notify_device);
exit_control_error:
- MHI_LOG("Exit with pm_state:%s MHI_STATE:%s\n",
- to_mhi_pm_state_str(mhi_cntrl->pm_state),
- TO_MHI_STATE_STR(mhi_cntrl->dev_state));
+ MHI_CNTRL_LOG("Exit with pm_state:%s MHI_STATE:%s\n",
+ to_mhi_pm_state_str(mhi_cntrl->pm_state),
+ TO_MHI_STATE_STR(mhi_cntrl->dev_state));
}
EXPORT_SYMBOL(mhi_control_error);
@@ -1058,7 +1067,7 @@ void mhi_power_down(struct mhi_controller *mhi_cntrl, bool graceful)
cur_state = mhi_tryset_pm_state(mhi_cntrl,
MHI_PM_LD_ERR_FATAL_DETECT);
if (cur_state != MHI_PM_LD_ERR_FATAL_DETECT)
- MHI_ERR("Failed to move to state:%s from:%s\n",
+ MHI_CNTRL_ERR("Failed to move to state:%s from:%s\n",
to_mhi_pm_state_str(MHI_PM_LD_ERR_FATAL_DETECT),
to_mhi_pm_state_str(mhi_cntrl->pm_state));
transition_state = MHI_PM_SHUTDOWN_NO_ACCESS;
@@ -1069,7 +1078,7 @@ void mhi_power_down(struct mhi_controller *mhi_cntrl, bool graceful)
mhi_queue_disable_transition(mhi_cntrl, transition_state);
- MHI_LOG("Wait for shutdown to complete\n");
+ MHI_CNTRL_LOG("Wait for shutdown to complete\n");
flush_work(&mhi_cntrl->st_worker);
mhi_deinit_debugfs(mhi_cntrl);
@@ -1405,6 +1414,9 @@ int mhi_pm_resume(struct mhi_controller *mhi_cntrl)
*/
mhi_special_events_pending(mhi_cntrl);
+ if (MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state))
+ mhi_timesync_log(mhi_cntrl);
+
return 0;
}
@@ -1496,6 +1508,9 @@ int mhi_pm_fast_resume(struct mhi_controller *mhi_cntrl, bool notify_client)
/* schedules worker if any special purpose events need to be handled */
mhi_special_events_pending(mhi_cntrl);
+ if (MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state))
+ mhi_timesync_log(mhi_cntrl);
+
MHI_LOG("Exit with pm_state:%s dev_state:%s\n",
to_mhi_pm_state_str(mhi_cntrl->pm_state),
TO_MHI_STATE_STR(mhi_cntrl->dev_state));
@@ -1623,27 +1638,27 @@ int mhi_force_rddm_mode(struct mhi_controller *mhi_cntrl)
{
int ret;
- MHI_LOG("Enter with pm_state:%s ee:%s\n",
- to_mhi_pm_state_str(mhi_cntrl->pm_state),
- TO_MHI_EXEC_STR(mhi_cntrl->ee));
+ MHI_CNTRL_LOG("Enter with pm_state:%s ee:%s\n",
+ to_mhi_pm_state_str(mhi_cntrl->pm_state),
+ TO_MHI_EXEC_STR(mhi_cntrl->ee));
/* device already in rddm */
if (mhi_cntrl->ee == MHI_EE_RDDM)
return 0;
- MHI_LOG("Triggering SYS_ERR to force rddm state\n");
+ MHI_CNTRL_LOG("Triggering SYS_ERR to force rddm state\n");
mhi_set_mhi_state(mhi_cntrl, MHI_STATE_SYS_ERR);
/* wait for rddm event */
- MHI_LOG("Waiting for device to enter RDDM state\n");
+ MHI_CNTRL_LOG("Waiting for device to enter RDDM state\n");
ret = wait_event_timeout(mhi_cntrl->state_event,
mhi_cntrl->ee == MHI_EE_RDDM,
msecs_to_jiffies(mhi_cntrl->timeout_ms));
ret = ret ? 0 : -EIO;
- MHI_LOG("Exiting with pm_state:%s ee:%s ret:%d\n",
- to_mhi_pm_state_str(mhi_cntrl->pm_state),
- TO_MHI_EXEC_STR(mhi_cntrl->ee), ret);
+ MHI_CNTRL_LOG("Exiting with pm_state:%s ee:%s ret:%d\n",
+ to_mhi_pm_state_str(mhi_cntrl->pm_state),
+ TO_MHI_EXEC_STR(mhi_cntrl->ee), ret);
return ret;
}
diff --git a/drivers/bus/mhi/devices/mhi_netdev.c b/drivers/bus/mhi/devices/mhi_netdev.c
index cd26e55..e99724f 100644
--- a/drivers/bus/mhi/devices/mhi_netdev.c
+++ b/drivers/bus/mhi/devices/mhi_netdev.c
@@ -917,7 +917,7 @@ static void mhi_netdev_create_debugfs_dir(void)
#else
-static void mhi_netdev_create_debugfs(struct mhi_netdev_private *mhi_netdev)
+static void mhi_netdev_create_debugfs(struct mhi_netdev *mhi_netdev)
{
}
diff --git a/drivers/bus/mhi/devices/mhi_uci.c b/drivers/bus/mhi/devices/mhi_uci.c
index d16ba5c..2a7fbfa 100644
--- a/drivers/bus/mhi/devices/mhi_uci.c
+++ b/drivers/bus/mhi/devices/mhi_uci.c
@@ -408,21 +408,29 @@ static ssize_t mhi_uci_read(struct file *file,
}
uci_buf = uci_chan->cur_buf;
- spin_unlock_bh(&uci_chan->lock);
/* Copy the buffer to user space */
to_copy = min_t(size_t, count, uci_chan->rx_size);
ptr = uci_buf->data + (uci_buf->len - uci_chan->rx_size);
+ spin_unlock_bh(&uci_chan->lock);
+
ret = copy_to_user(buf, ptr, to_copy);
if (ret)
return ret;
+ spin_lock_bh(&uci_chan->lock);
+ /* Did another thread queue the buffer while we dropped the lock? */
+ if (to_copy && !uci_chan->rx_size) {
+ MSG_VERB("Bailout as buffer already queued (%lu %lu)\n",
+ to_copy, uci_chan->rx_size);
+ goto read_error;
+ }
+
MSG_VERB("Copied %lu of %lu bytes\n", to_copy, uci_chan->rx_size);
uci_chan->rx_size -= to_copy;
/* we finished with this buffer, queue it back to hardware */
if (!uci_chan->rx_size) {
- spin_lock_bh(&uci_chan->lock);
uci_chan->cur_buf = NULL;
if (uci_dev->enabled)
@@ -437,9 +445,8 @@ static ssize_t mhi_uci_read(struct file *file,
kfree(uci_buf->data);
goto read_error;
}
-
- spin_unlock_bh(&uci_chan->lock);
}
+ spin_unlock_bh(&uci_chan->lock);
MSG_VERB("Returning %lu bytes\n", to_copy);
diff --git a/drivers/char/adsprpc.c b/drivers/char/adsprpc.c
index 817a274..6dd25c0 100644
--- a/drivers/char/adsprpc.c
+++ b/drivers/char/adsprpc.c
@@ -384,6 +384,7 @@ struct fastrpc_apps {
uint64_t jobid[NUM_CHANNELS];
struct wakeup_source *wake_source;
struct qos_cores silvercores;
+ uint32_t max_size_limit;
};
struct fastrpc_mmap {
@@ -1070,6 +1071,18 @@ static int fastrpc_mmap_create(struct fastrpc_file *fl, int fd,
}
trace_fastrpc_dma_map(fl->cid, fd, map->phys, map->size,
len, mflags, map->attach->dma_map_attrs);
+ if (map->size < len) {
+ err = -EFAULT;
+ goto bail;
+ }
+
+ VERIFY(err, map->size >= len && map->size < me->max_size_limit);
+ if (err) {
+ err = -EFAULT;
+ pr_err("adsprpc: %s: invalid map size 0x%zx len 0x%zx\n",
+ __func__, map->size, len);
+ goto bail;
+ }
vmid = fl->apps->channel[fl->cid].vmid;
if (!sess->smmu.enabled && !vmid) {
@@ -1112,12 +1125,17 @@ static int fastrpc_buf_alloc(struct fastrpc_file *fl, size_t size,
int remote, struct fastrpc_buf **obuf)
{
int err = 0, vmid;
+ struct fastrpc_apps *me = &gfa;
struct fastrpc_buf *buf = NULL, *fr = NULL;
struct hlist_node *n;
- VERIFY(err, size > 0);
- if (err)
+ VERIFY(err, size > 0 && size < me->max_size_limit);
+ if (err) {
+ err = -EFAULT;
+ pr_err("adsprpc: %s: invalid allocation size 0x%zx\n",
+ __func__, size);
goto bail;
+ }
if (!remote) {
/* find the smallest buffer that fits in the cache */
@@ -1924,7 +1942,8 @@ static int get_args(uint32_t kernel, struct smq_invoke_ctx *ctx)
}
PERF_END);
for (i = bufs; rpra && i < bufs + handles; i++) {
- rpra[i].dma.fd = ctx->fds[i];
+ if (ctx->fds)
+ rpra[i].dma.fd = ctx->fds[i];
rpra[i].dma.len = (uint32_t)lpra[i].buf.len;
rpra[i].dma.offset = (uint32_t)(uintptr_t)lpra[i].buf.pv;
}
@@ -3938,11 +3957,8 @@ static int fastrpc_channel_open(struct fastrpc_file *fl)
static int fastrpc_device_open(struct inode *inode, struct file *filp)
{
int err = 0;
- struct dentry *debugfs_file;
struct fastrpc_file *fl = NULL;
struct fastrpc_apps *me = &gfa;
- char strpid[PID_SIZE];
- int buf_size = 0;
/*
* Indicates the device node opened
@@ -3961,18 +3977,6 @@ static int fastrpc_device_open(struct inode *inode, struct file *filp)
if (err)
return err;
- snprintf(strpid, PID_SIZE, "%d", current->pid);
- buf_size = strlen(current->comm) + strlen("_") + strlen(strpid) + 1;
- VERIFY(err, NULL != (fl->debug_buf = kzalloc(buf_size, GFP_KERNEL)));
- if (err) {
- kfree(fl);
- return err;
- }
- snprintf(fl->debug_buf, UL_SIZE, "%.10s%s%d",
- current->comm, "_", current->pid);
- debugfs_file = debugfs_create_file(fl->debug_buf, 0644, debugfs_root,
- fl, &debugfs_fops);
-
fl->wake_source = wakeup_source_register(fl->debug_buf);
if (IS_ERR_OR_NULL(fl->wake_source)) {
pr_err("adsprpc: Error: %s: %s: wakeup_source_register failed with err %ld\n",
@@ -3986,14 +3990,11 @@ static int fastrpc_device_open(struct inode *inode, struct file *filp)
INIT_HLIST_HEAD(&fl->remote_bufs);
INIT_HLIST_NODE(&fl->hn);
fl->sessionid = 0;
- fl->tgid = current->tgid;
fl->apps = me;
fl->mode = FASTRPC_MODE_SERIAL;
fl->cid = -1;
fl->dev_minor = dev_minor;
fl->init_mem = NULL;
- if (debugfs_file != NULL)
- fl->debugfs_file = debugfs_file;
memset(&fl->perf, 0, sizeof(fl->perf));
fl->qos_request = 0;
fl->dsp_proc_init = 0;
@@ -4007,6 +4008,29 @@ static int fastrpc_device_open(struct inode *inode, struct file *filp)
return 0;
}
+static int fastrpc_set_process_info(struct fastrpc_file *fl)
+{
+ int err = 0, buf_size = 0;
+ char strpid[PID_SIZE];
+
+ fl->tgid = current->tgid;
+ snprintf(strpid, PID_SIZE, "%d", current->pid);
+ buf_size = strlen(current->comm) + strlen("_") + strlen(strpid) + 1;
+ fl->debug_buf = kzalloc(buf_size, GFP_KERNEL);
+ if (!fl->debug_buf) {
+ err = -ENOMEM;
+ return err;
+ }
+ snprintf(fl->debug_buf, UL_SIZE, "%.10s%s%d",
+ current->comm, "_", current->pid);
+ fl->debugfs_file = debugfs_create_file(fl->debug_buf, 0644,
+ debugfs_root, fl, &debugfs_fops);
+ if (!fl->debugfs_file)
+ pr_warn("Error: %s: %s: failed to create debugfs file %s\n",
+ current->comm, __func__, fl->debug_buf);
+ return err;
+}
+
static int fastrpc_get_info(struct fastrpc_file *fl, uint32_t *info)
{
int err = 0;
@@ -4015,6 +4039,9 @@ static int fastrpc_get_info(struct fastrpc_file *fl, uint32_t *info)
VERIFY(err, fl != NULL);
if (err)
goto bail;
+ err = fastrpc_set_process_info(fl);
+ if (err)
+ goto bail;
if (fl->cid == -1) {
cid = *info;
VERIFY(err, cid < NUM_CHANNELS);
@@ -4114,6 +4141,9 @@ static int fastrpc_internal_control(struct fastrpc_file *fl,
fl->ws_timeout = cp->pm.timeout;
fastrpc_pm_awake(fl);
break;
+ case FASTRPC_CONTROL_DSPPROCESS_CLEAN:
+ (void)fastrpc_release_current_dsp_process(fl);
+ break;
default:
err = -EBADRQC;
break;
@@ -4595,9 +4625,11 @@ static int fastrpc_cb_probe(struct device *dev)
struct fastrpc_channel_ctx *chan;
struct fastrpc_session_ctx *sess;
struct of_phandle_args iommuspec;
+ struct fastrpc_apps *me = &gfa;
const char *name;
int err = 0, cid = -1, i = 0;
u32 sharedcb_count = 0, j = 0;
+ uint32_t dma_addr_pool[2] = {0, 0};
VERIFY(err, NULL != (name = of_get_property(dev->of_node,
"label", NULL)));
@@ -4644,6 +4676,11 @@ static int fastrpc_cb_probe(struct device *dev)
dma_set_max_seg_size(sess->smmu.dev, DMA_BIT_MASK(32));
dma_set_seg_boundary(sess->smmu.dev, (unsigned long)DMA_BIT_MASK(64));
+ of_property_read_u32_array(dev->of_node, "qcom,iommu-dma-addr-pool",
+ dma_addr_pool, 2);
+ me->max_size_limit = (dma_addr_pool[1] == 0 ? 0x78000000 :
+ dma_addr_pool[1]);
+
if (of_get_property(dev->of_node, "shared-cb", NULL) != NULL) {
err = of_property_read_u32(dev->of_node, "shared-cb",
&sharedcb_count);
@@ -4783,7 +4820,6 @@ static int fastrpc_probe(struct platform_device *pdev)
init_qos_cores_list(dev, "qcom,qos-cores",
&me->silvercores);
-
of_property_read_u32(dev->of_node, "qcom,rpc-latency-us",
&me->latency);
if (of_get_property(dev->of_node,
diff --git a/drivers/char/adsprpc_shared.h b/drivers/char/adsprpc_shared.h
index bcc63c8..7501b1c 100644
--- a/drivers/char/adsprpc_shared.h
+++ b/drivers/char/adsprpc_shared.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
- * Copyright (c) 2012-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2012-2020, The Linux Foundation. All rights reserved.
*/
#ifndef ADSPRPC_SHARED_H
#define ADSPRPC_SHARED_H
@@ -250,6 +250,8 @@ enum fastrpc_control_type {
FASTRPC_CONTROL_KALLOC = 3,
FASTRPC_CONTROL_WAKELOCK = 4,
FASTRPC_CONTROL_PM = 5,
+/* Clean process on DSP */
+ FASTRPC_CONTROL_DSPPROCESS_CLEAN = 6,
};
struct fastrpc_ctrl_latency {
diff --git a/drivers/char/diag/diag_dci.c b/drivers/char/diag/diag_dci.c
index e2c3344..5108bca 100644
--- a/drivers/char/diag/diag_dci.c
+++ b/drivers/char/diag/diag_dci.c
@@ -2996,6 +2996,8 @@ int diag_dci_register_client(struct diag_dci_reg_tbl_t *reg_entry)
int i, err = 0;
struct diag_dci_client_tbl *new_entry = NULL;
struct diag_dci_buf_peripheral_t *proc_buf = NULL;
+ struct pid *pid_struct = NULL;
+ struct task_struct *task_s = NULL;
if (!reg_entry)
return DIAG_DCI_NO_REG;
@@ -3011,14 +3013,25 @@ int diag_dci_register_client(struct diag_dci_reg_tbl_t *reg_entry)
if (driver->num_dci_client >= MAX_DCI_CLIENTS)
return DIAG_DCI_NO_REG;
- new_entry = kzalloc(sizeof(struct diag_dci_client_tbl), GFP_KERNEL);
- if (!new_entry)
+ pid_struct = find_get_pid(current->tgid);
+ if (!pid_struct)
return DIAG_DCI_NO_REG;
+ task_s = get_pid_task(pid_struct, PIDTYPE_PID);
+ if (!task_s) {
+ put_pid(pid_struct);
+ return DIAG_DCI_NO_REG;
+ }
+ new_entry = kzalloc(sizeof(struct diag_dci_client_tbl), GFP_KERNEL);
+ if (!new_entry) {
+ put_pid(pid_struct);
+ put_task_struct(task_s);
+ return DIAG_DCI_NO_REG;
+ }
+
+ get_task_struct(task_s);
mutex_lock(&driver->dci_mutex);
-
- get_task_struct(current);
- new_entry->client = current;
+ new_entry->client = task_s;
new_entry->tgid = current->tgid;
new_entry->client_info.notification_list =
reg_entry->notification_list;
@@ -3108,7 +3121,8 @@ int diag_dci_register_client(struct diag_dci_reg_tbl_t *reg_entry)
diag_update_proc_vote(DIAG_PROC_DCI, VOTE_UP, reg_entry->token);
queue_work(driver->diag_real_time_wq, &driver->diag_real_time_work);
mutex_unlock(&driver->dci_mutex);
-
+ put_pid(pid_struct);
+ put_task_struct(task_s);
return reg_entry->client_id;
fail_alloc:
@@ -3145,8 +3159,10 @@ int diag_dci_register_client(struct diag_dci_reg_tbl_t *reg_entry)
kfree(new_entry);
new_entry = NULL;
}
- put_task_struct(current);
mutex_unlock(&driver->dci_mutex);
+ put_task_struct(task_s);
+ put_task_struct(task_s);
+ put_pid(pid_struct);
return DIAG_DCI_NO_REG;
}
diff --git a/drivers/char/diag/diag_masks.c b/drivers/char/diag/diag_masks.c
index 088e449..d40f2f2 100644
--- a/drivers/char/diag/diag_masks.c
+++ b/drivers/char/diag/diag_masks.c
@@ -809,7 +809,7 @@ static int diag_cmd_get_ssid_range(unsigned char *src_buf, int src_len,
write_len += sizeof(rsp_ms);
if (rsp_ms.id_valid) {
sub_index = diag_check_subid_mask_index(rsp_ms.sub_id,
- pid);
+ 0);
ms_ptr = diag_get_ms_ptr_index(mask_info->ms_ptr,
sub_index);
if (!ms_ptr)
@@ -893,7 +893,7 @@ static int diag_cmd_get_build_mask(unsigned char *src_buf, int src_len,
if (src_len < sizeof(struct diag_build_mask_req_sub_t))
goto fail;
req_sub = (struct diag_build_mask_req_sub_t *)src_buf;
- rsp_sub.header.cmd_code = DIAG_CMD_MSG_CONFIG;
+ rsp_sub.header.cmd_code = req_sub->header.cmd_code;
rsp_sub.sub_cmd = DIAG_CMD_OP_GET_BUILD_MASK;
rsp_sub.ssid_first = req_sub->ssid_first;
rsp_sub.ssid_last = req_sub->ssid_last;
@@ -1004,11 +1004,17 @@ static int diag_cmd_get_msg_mask(unsigned char *src_buf, int src_len,
req_sub = (struct diag_msg_build_mask_sub_t *)src_buf;
rsp_sub = *req_sub;
rsp_sub.status = MSG_STATUS_FAIL;
- sub_index = diag_check_subid_mask_index(req_sub->sub_id, pid);
- ms_ptr = diag_get_ms_ptr_index(mask_info->ms_ptr, sub_index);
- if (!ms_ptr)
- goto err;
- mask = (struct diag_msg_mask_t *)ms_ptr->sub_ptr;
+ if (req_sub->id_valid) {
+ sub_index = diag_check_subid_mask_index(req_sub->sub_id,
+ 0);
+ ms_ptr = diag_get_ms_ptr_index(mask_info->ms_ptr,
+ sub_index);
+ if (!ms_ptr)
+ goto err;
+ mask = (struct diag_msg_mask_t *)ms_ptr->sub_ptr;
+ } else {
+ mask = (struct diag_msg_mask_t *)mask_info->ptr;
+ }
ssid_range.ssid_first = req_sub->ssid_first;
ssid_range.ssid_last = req_sub->ssid_last;
header_len = sizeof(rsp_sub);
@@ -1103,7 +1109,7 @@ static int diag_cmd_set_msg_mask(unsigned char *src_buf, int src_len,
header_len = sizeof(struct diag_msg_build_mask_sub_t);
if (req_sub->id_valid) {
sub_index = diag_check_subid_mask_index(req_sub->sub_id,
- pid);
+ 0);
ms_ptr = diag_get_ms_ptr_index(mask_info->ms_ptr,
sub_index);
if (!ms_ptr)
@@ -1304,7 +1310,7 @@ static int diag_cmd_set_all_msg_mask(unsigned char *src_buf, int src_len,
header_len = sizeof(struct diag_msg_config_rsp_sub_t);
if (req_sub->id_valid) {
sub_index = diag_check_subid_mask_index(req_sub->sub_id,
- pid);
+ 0);
ms_ptr = diag_get_ms_ptr_index(mask_info->ms_ptr,
sub_index);
if (!ms_ptr)
@@ -1454,7 +1460,7 @@ static int diag_cmd_get_event_mask(unsigned char *src_buf, int src_len,
if (!cmd_ver || !req->id_valid)
memcpy(dest_buf + write_len, event_mask.ptr, mask_size);
else {
- sub_index = diag_check_subid_mask_index(req->sub_id, pid);
+ sub_index = diag_check_subid_mask_index(req->sub_id, 0);
ms_ptr = diag_get_ms_ptr_index(event_mask.ms_ptr, sub_index);
if (!ms_ptr || !ms_ptr->sub_ptr)
return 0;
@@ -1516,7 +1522,7 @@ static int diag_cmd_update_event_mask(unsigned char *src_buf, int src_len,
goto err;
}
if (cmd_ver && req_sub->id_valid) {
- sub_index = diag_check_subid_mask_index(req_sub->sub_id, pid);
+ sub_index = diag_check_subid_mask_index(req_sub->sub_id, 0);
if (sub_index < 0) {
ret = sub_index;
goto err;
@@ -1631,7 +1637,7 @@ static int diag_cmd_toggle_events(unsigned char *src_buf, int src_len,
preset = req->preset_id;
}
if (cmd_ver && req->id_valid) {
- sub_index = diag_check_subid_mask_index(req->sub_id, pid);
+ sub_index = diag_check_subid_mask_index(req->sub_id, 0);
if (sub_index < 0) {
ret = sub_index;
goto err;
@@ -1751,7 +1757,7 @@ static int diag_cmd_get_log_mask(unsigned char *src_buf, int src_len,
req_sub = (struct diag_log_config_rsp_sub_t *)src_buf;
if (req_sub->id_valid) {
sub_index = diag_check_subid_mask_index(req_sub->sub_id,
- pid);
+ 0);
ms_ptr = diag_get_ms_ptr_index(mask_info->ms_ptr,
sub_index);
if (!ms_ptr) {
@@ -1875,7 +1881,7 @@ static int diag_cmd_get_log_range(unsigned char *src_buf, int src_len,
req = (struct diag_log_config_req_sub_t *)src_buf;
if (req->id_valid) {
sub_index = diag_check_subid_mask_index(req->sub_id,
- pid);
+ 0);
ms_ptr = diag_get_ms_ptr_index(log_mask.ms_ptr,
sub_index);
if (!ms_ptr)
@@ -1963,7 +1969,7 @@ static int diag_cmd_set_log_mask(unsigned char *src_buf, int src_len,
read_len += sizeof(struct diag_log_config_rsp_sub_t);
if (req_sub->id_valid) {
sub_index = diag_check_subid_mask_index(req_sub->sub_id,
- pid);
+ 0);
ms_ptr = diag_get_ms_ptr_index(mask_info->ms_ptr,
sub_index);
if (!ms_ptr) {
@@ -2170,7 +2176,7 @@ static int diag_cmd_disable_log_mask(unsigned char *src_buf, int src_len,
req = (struct diag_log_config_rsp_sub_t *)src_buf;
if (req->id_valid) {
sub_index = diag_check_subid_mask_index(req->sub_id,
- pid);
+ 0);
ms_ptr = diag_get_ms_ptr_index(mask_info->ms_ptr,
sub_index);
if (!ms_ptr) {
@@ -3425,7 +3431,9 @@ int diag_process_apps_masks(unsigned char *buf, int len, int pid)
subid = *(uint32_t *)(buf +
sizeof(struct diag_pkt_header_t) +
2*sizeof(uint8_t));
+ mutex_lock(&driver->md_session_lock);
subid_index = diag_check_subid_mask_index(subid, pid);
+ mutex_unlock(&driver->md_session_lock);
}
if (subid_valid && (subid_index < 0))
return 0;
@@ -3608,8 +3616,8 @@ int diag_check_subid_mask_index(uint32_t subid, int pid)
diag_subid_info[i] = subid;
- mutex_lock(&driver->md_session_lock);
- info = diag_md_session_get_pid(pid);
+ if (pid)
+ info = diag_md_session_get_pid(pid);
err = diag_multisim_msg_mask_init(i, info);
if (err)
@@ -3621,10 +3629,8 @@ int diag_check_subid_mask_index(uint32_t subid, int pid)
if (err)
goto fail;
- mutex_unlock(&driver->md_session_lock);
return i;
fail:
- mutex_unlock(&driver->md_session_lock);
pr_err("diag: Could not initialize diag mask for subid: %d buffers\n",
subid);
return -ENOMEM;
diff --git a/drivers/char/diag/diagfwd.h b/drivers/char/diag/diagfwd.h
index fd79491..8960b72 100644
--- a/drivers/char/diag/diagfwd.h
+++ b/drivers/char/diag/diagfwd.h
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright (c) 2008-2019, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2008-2020, The Linux Foundation. All rights reserved.
*/
#ifndef DIAGFWD_H
@@ -44,4 +44,6 @@ void diag_update_pkt_buffer(unsigned char *buf, uint32_t len, int type);
int diag_process_stm_cmd(unsigned char *buf, unsigned char *dest_buf);
void diag_md_hdlc_reset_timer_func(struct timer_list *tlist);
void diag_update_md_clients(unsigned int type);
+void diag_process_stm_mask(uint8_t cmd, uint8_t data_mask,
+ int data_type);
#endif
diff --git a/drivers/char/diag/diagfwd_cntl.c b/drivers/char/diag/diagfwd_cntl.c
index 5acb25f..c41839f 100644
--- a/drivers/char/diag/diagfwd_cntl.c
+++ b/drivers/char/diag/diagfwd_cntl.c
@@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0-only
-/* Copyright (c) 2011-2019, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2011-2020, The Linux Foundation. All rights reserved.
*/
#include <linux/slab.h>
@@ -1894,12 +1894,18 @@ int diag_send_passthru_ctrl_pkt(struct diag_hw_accel_cmd_req_t *req_params)
pr_err("diag: Unable to send PASSTHRU ctrl packet to peripheral %d, err: %d\n",
i, err);
}
+ if ((diagid_mask & DIAG_ID_APPS) &&
+ (hw_accel_type == DIAG_HW_ACCEL_TYPE_STM)) {
+ diag_process_stm_mask(req_params->operation,
+ DIAG_STM_APPS, APPS_DATA);
+ }
return 0;
}
int diagfwd_cntl_init(void)
{
uint8_t peripheral = 0;
+ uint32_t diagid_mask = 0;
driver->polling_reg_flag = 0;
driver->log_on_demand_support = 1;
@@ -1920,6 +1926,9 @@ int diagfwd_cntl_init(void)
if (!driver->cntl_wq)
return -ENOMEM;
+ diagid_mask = (BITMASK_DIAGID_FMASK | BITMASK_HW_ACCEL_STM_V1);
+ process_diagid_v2_feature_mask(DIAG_ID_APPS, diagid_mask);
+
return 0;
}
diff --git a/drivers/char/diag/diagfwd_cntl.h b/drivers/char/diag/diagfwd_cntl.h
index b714d5e..b78f7e3 100644
--- a/drivers/char/diag/diagfwd_cntl.h
+++ b/drivers/char/diag/diagfwd_cntl.h
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright (c) 2011-2019, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2011-2020, The Linux Foundation. All rights reserved.
*/
#ifndef DIAGFWD_CNTL_H
@@ -91,6 +91,10 @@
#define MAX_DIAGID_STR_LEN 30
#define MIN_DIAGID_STR_LEN 5
+#define BITMASK_DIAGID_FMASK 0x0001
+#define BITMASK_HW_ACCEL_STM_V1 0x0002
+#define BITMASK_HW_ACCEL_ATB_V1 0x0004
+
struct diag_ctrl_pkt_header_t {
uint32_t pkt_id;
uint32_t len;
diff --git a/drivers/char/hw_random/msm_rng.c b/drivers/char/hw_random/msm_rng.c
index 4479b1d..541fa71 100644
--- a/drivers/char/hw_random/msm_rng.c
+++ b/drivers/char/hw_random/msm_rng.c
@@ -285,6 +285,10 @@ static int msm_rng_probe(struct platform_device *pdev)
"qcom,msm-rng-iface-clk")) {
msm_rng_dev->prng_clk = clk_get(&pdev->dev,
"iface_clk");
+ } else if (of_property_read_bool(pdev->dev.of_node,
+ "qcom,msm-rng-hwkm-clk")) {
+ msm_rng_dev->prng_clk = clk_get(&pdev->dev,
+ "km_clk_src");
} else {
msm_rng_dev->prng_clk = clk_get(&pdev->dev,
"core_clk");
diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
index 9e62be3..c372a24 100644
--- a/drivers/clk/clk.c
+++ b/drivers/clk/clk.c
@@ -4018,6 +4018,14 @@ static inline void clk_debug_reparent(struct clk_core *core,
static inline void clk_debug_unregister(struct clk_core *core)
{
}
+
+void clk_debug_print_hw(struct clk_core *clk, struct seq_file *f)
+{
+}
+
+void clock_debug_print_enabled(void)
+{
+}
#endif
/**
diff --git a/drivers/clk/qcom/clk-rpmh.c b/drivers/clk/qcom/clk-rpmh.c
index bb9e9ab..09ed532 100644
--- a/drivers/clk/qcom/clk-rpmh.c
+++ b/drivers/clk/qcom/clk-rpmh.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0
/*
- * Copyright (c) 2018-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
*/
#include <linux/clk-provider.h>
@@ -312,6 +312,7 @@ static const struct clk_rpmh_desc clk_rpmh_lito = {
};
DEFINE_CLK_RPMH_ARC(lagoon, bi_tcxo, bi_tcxo_ao, "xo.lvl", 0x3, 4);
+DEFINE_CLK_RPMH_ARC(lagoon, qlink, qlink_ao, "qphy.lvl", 0x1, 4);
DEFINE_CLK_RPMH_VRM(lagoon, ln_bb_clk2, ln_bb_clk2_ao, "lnbclkg2", 4);
DEFINE_CLK_RPMH_VRM(lagoon, ln_bb_clk3, ln_bb_clk3_ao, "lnbclkg3", 4);
@@ -322,6 +323,8 @@ static struct clk_hw *lagoon_rpmh_clocks[] = {
[RPMH_LN_BB_CLK2_A] = &lagoon_ln_bb_clk2_ao.hw,
[RPMH_LN_BB_CLK3] = &lagoon_ln_bb_clk3.hw,
[RPMH_LN_BB_CLK3_A] = &lagoon_ln_bb_clk3_ao.hw,
+ [RPMH_QLINK_CLK] = &lagoon_qlink.hw,
+ [RPMH_QLINK_CLK_A] = &lagoon_qlink_ao.hw,
};
static const struct clk_rpmh_desc clk_rpmh_lagoon = {
diff --git a/drivers/clk/qcom/clk-smd-rpm.c b/drivers/clk/qcom/clk-smd-rpm.c
index 9e72243..2ce13cd 100644
--- a/drivers/clk/qcom/clk-smd-rpm.c
+++ b/drivers/clk/qcom/clk-smd-rpm.c
@@ -860,7 +860,9 @@ static const struct rpm_smd_clk_desc rpm_clk_bengal = {
DEFINE_CLK_SMD_RPM_XO_BUFFER(scuba, ln_bb_clk2, ln_bb_clk2_a, 0x2);
DEFINE_CLK_SMD_RPM_XO_BUFFER(scuba, rf_clk3, rf_clk3_a, 6);
-DEFINE_CLK_SMD_RPM(scuba, qpic_clk, qpic_a_clk, RPM_SMD_QPIC_CLK, 0);
+DEFINE_CLK_SMD_RPM(scuba, qpic_clk, qpic_a_clk, QCOM_SMD_RPM_QPIC_CLK, 0);
+DEFINE_CLK_SMD_RPM(scuba, hwkm_clk, hwkm_a_clk, QCOM_SMD_RPM_HWKM_CLK, 0);
+DEFINE_CLK_SMD_RPM(scuba, pka_clk, pka_a_clk, QCOM_SMD_RPM_PKA_CLK, 0);
/* Scuba */
static struct clk_hw *scuba_clks[] = {
@@ -946,11 +948,15 @@ static struct clk_hw *scuba_clks[] = {
[CXO_SMD_WLAN_CLK] = &bi_tcxo_wlan_clk.hw,
[CXO_SMD_PIL_LPASS_CLK] = &bi_tcxo_pil_lpass_clk.hw,
[CXO_SMD_PIL_CDSP_CLK] = &bi_tcxo_pil_cdsp_clk.hw,
+ [RPM_SMD_HWKM_CLK] = &scuba_hwkm_clk.hw,
+ [RPM_SMD_HWKM_A_CLK] = &scuba_hwkm_a_clk.hw,
+ [RPM_SMD_PKA_CLK] = &scuba_pka_clk.hw,
+ [RPM_SMD_PKA_A_CLK] = &scuba_pka_a_clk.hw,
};
static const struct rpm_smd_clk_desc rpm_clk_scuba = {
.clks = scuba_clks,
- .num_rpm_clks = RPM_SMD_QPIC_A_CLK,
+ .num_rpm_clks = RPM_SMD_PKA_A_CLK,
.num_clks = ARRAY_SIZE(scuba_clks),
};
diff --git a/drivers/clk/qcom/debugcc-bengal.c b/drivers/clk/qcom/debugcc-bengal.c
index bf3a92f..2bb282b 100644
--- a/drivers/clk/qcom/debugcc-bengal.c
+++ b/drivers/clk/qcom/debugcc-bengal.c
@@ -165,7 +165,6 @@ static const char *const gcc_debug_mux_parent_names[] = {
"gcc_gpu_memnoc_gfx_clk",
"gcc_gpu_snoc_dvm_gfx_clk",
"gcc_gpu_throttle_core_clk",
- "gcc_gpu_throttle_xo_clk",
"gcc_pdm2_clk",
"gcc_pdm_ahb_clk",
"gcc_pdm_xo4_clk",
@@ -270,7 +269,6 @@ static int gcc_debug_mux_sels[] = {
0xE8, /* gcc_gpu_memnoc_gfx_clk */
0xEA, /* gcc_gpu_snoc_dvm_gfx_clk */
0xEF, /* gcc_gpu_throttle_core_clk */
- 0xEE, /* gcc_gpu_throttle_xo_clk */
0x73, /* gcc_pdm2_clk */
0x71, /* gcc_pdm_ahb_clk */
0x72, /* gcc_pdm_xo4_clk */
diff --git a/drivers/clk/qcom/debugcc-scuba.c b/drivers/clk/qcom/debugcc-scuba.c
index 7c365ce..9d4820b 100644
--- a/drivers/clk/qcom/debugcc-scuba.c
+++ b/drivers/clk/qcom/debugcc-scuba.c
@@ -26,24 +26,21 @@ static struct measure_clk_data debug_mux_priv = {
};
static const char *const apcs_debug_mux_parent_names[] = {
- "perfcl_clk",
"pwrcl_clk",
};
static int apcs_debug_mux_sels[] = {
- 0x1, /* perfcl_clk */
0x0, /* pwrcl_clk */
};
static int apcs_debug_mux_pre_divs[] = {
- 0x8, /* perfcl_clk */
0x8, /* pwrcl_clk */
};
static struct clk_debug_mux apcs_debug_mux = {
.priv = &debug_mux_priv,
- .debug_offset = 0x1C,
- .post_div_offset = 0x1C,
+ .debug_offset = 0x0,
+ .post_div_offset = 0x0,
.cbcr_offset = 0x0,
.src_sel_mask = 0x3FF00,
.src_sel_shift = 8,
@@ -114,7 +111,6 @@ static const char *const gcc_debug_mux_parent_names[] = {
"disp_cc_debug_mux",
"gcc_ahb2phy_csi_clk",
"gcc_ahb2phy_usb_clk",
- "gcc_apc_vs_clk",
"gcc_bimc_gpu_axi_clk",
"gcc_boot_rom_ahb_clk",
"gcc_cam_throttle_nrt_clk",
@@ -159,8 +155,6 @@ static const char *const gcc_debug_mux_parent_names[] = {
"gcc_gpu_memnoc_gfx_clk",
"gcc_gpu_snoc_dvm_gfx_clk",
"gcc_gpu_throttle_core_clk",
- "gcc_gpu_throttle_xo_clk",
- "gcc_mss_vs_clk",
"gcc_pdm2_clk",
"gcc_pdm_ahb_clk",
"gcc_pdm_xo4_clk",
@@ -193,9 +187,6 @@ static const char *const gcc_debug_mux_parent_names[] = {
"gcc_usb3_prim_phy_com_aux_clk",
"gcc_usb3_prim_phy_pipe_clk",
"gcc_vcodec0_axi_clk",
- "gcc_vdda_vs_clk",
- "gcc_vddcx_vs_clk",
- "gcc_vddmx_vs_clk",
"gcc_venus_ahb_clk",
"gcc_venus_ctl_axi_clk",
"gcc_video_ahb_clk",
@@ -204,13 +195,17 @@ static const char *const gcc_debug_mux_parent_names[] = {
"gcc_video_vcodec0_sys_clk",
"gcc_video_venus_ctl_clk",
"gcc_video_xo_clk",
- "gcc_vs_ctrl_ahb_clk",
- "gcc_vs_ctrl_clk",
- "gcc_wcss_vs_clk",
"gpu_cc_debug_mux",
+ "mc_cc_debug_mux",
"measure_only_cnoc_clk",
"measure_only_ipa_2x_clk",
"measure_only_snoc_clk",
+ "measure_only_qpic_clk",
+ "measure_only_qpic_ahb_clk",
+ "measure_only_hwkm_km_core_clk",
+ "measure_only_hwkm_ahb_clk",
+ "measure_only_pka_core_clk",
+ "measure_only_pka_ahb_clk",
};
static int gcc_debug_mux_sels[] = {
@@ -218,7 +213,6 @@ static int gcc_debug_mux_sels[] = {
0x41, /* disp_cc_debug_mux */
0x62, /* gcc_ahb2phy_csi_clk */
0x63, /* gcc_ahb2phy_usb_clk */
- 0xBF, /* gcc_apc_vs_clk */
0x8D, /* gcc_bimc_gpu_axi_clk */
0x75, /* gcc_boot_rom_ahb_clk */
0x4B, /* gcc_cam_throttle_nrt_clk */
@@ -263,8 +257,6 @@ static int gcc_debug_mux_sels[] = {
0xE4, /* gcc_gpu_memnoc_gfx_clk */
0xE6, /* gcc_gpu_snoc_dvm_gfx_clk */
0xEB, /* gcc_gpu_throttle_core_clk */
- 0xEA, /* gcc_gpu_throttle_xo_clk */
- 0xBE, /* gcc_mss_vs_clk */
0x72, /* gcc_pdm2_clk */
0x70, /* gcc_pdm_ahb_clk */
0x71, /* gcc_pdm_xo4_clk */
@@ -297,9 +289,6 @@ static int gcc_debug_mux_sels[] = {
0x5E, /* gcc_usb3_prim_phy_com_aux_clk */
0x5F, /* gcc_usb3_prim_phy_pipe_clk */
0x12C, /* gcc_vcodec0_axi_clk */
- 0xBB, /* gcc_vdda_vs_clk */
- 0xB9, /* gcc_vddcx_vs_clk */
- 0xBA, /* gcc_vddmx_vs_clk */
0x12D, /* gcc_venus_ahb_clk */
0x12B, /* gcc_venus_ctl_axi_clk */
0x35, /* gcc_video_ahb_clk */
@@ -308,13 +297,17 @@ static int gcc_debug_mux_sels[] = {
0x129, /* gcc_video_vcodec0_sys_clk */
0x127, /* gcc_video_venus_ctl_clk */
0x3D, /* gcc_video_xo_clk */
- 0xBD, /* gcc_vs_ctrl_ahb_clk */
- 0xBC, /* gcc_vs_ctrl_clk */
- 0xC0, /* gcc_wcss_vs_clk */
0xE3, /* gpu_cc_debug_mux */
+ 0x9B, /* mc_cc_debug_mux */
0x19, /* measure_only_cnoc_clk */
0xC2, /* measure_only_ipa_2x_clk */
0x7, /* measure_only_snoc_clk */
+ 0x9C, /* measure_only_qpic_clk */
+ 0x9E, /* measure_only_qpic_ahb_clk */
+ 0xA0, /* measure_only_hwkm_km_core_clk */
+ 0xA2, /* measure_only_hwkm_ahb_clk */
+ 0xA3, /* measure_only_pka_core_clk */
+ 0xA4, /* measure_only_pka_ahb_clk */
};
static struct clk_debug_mux gcc_debug_mux = {
@@ -340,9 +333,7 @@ static struct clk_debug_mux gcc_debug_mux = {
static const char *const gpu_cc_debug_mux_parent_names[] = {
"gpu_cc_ahb_clk",
"gpu_cc_crc_ahb_clk",
- "gpu_cc_cx_apb_clk",
"gpu_cc_cx_gfx3d_clk",
- "gpu_cc_cx_gfx3d_slv_clk",
"gpu_cc_cx_gmu_clk",
"gpu_cc_cx_snoc_dvm_clk",
"gpu_cc_cxo_aon_clk",
@@ -355,9 +346,7 @@ static const char *const gpu_cc_debug_mux_parent_names[] = {
static int gpu_cc_debug_mux_sels[] = {
0x10, /* gpu_cc_ahb_clk */
0x11, /* gpu_cc_crc_ahb_clk */
- 0x14, /* gpu_cc_cx_apb_clk */
0x1A, /* gpu_cc_cx_gfx3d_clk */
- 0x1B, /* gpu_cc_cx_gfx3d_slv_clk */
0x18, /* gpu_cc_cx_gmu_clk */
0x15, /* gpu_cc_cx_snoc_dvm_clk */
0xA, /* gpu_cc_cxo_aon_clk */
@@ -403,7 +392,7 @@ static struct clk_debug_mux mc_cc_debug_mux = {
};
static struct mux_regmap_names mux_list[] = {
- { .mux = &apcs_debug_mux, .regmap_name = "qcom,apcs" },
+ { .mux = &apcs_debug_mux, .regmap_name = "qcom,cpucc" },
{ .mux = &disp_cc_debug_mux, .regmap_name = "qcom,dispcc" },
{ .mux = &gcc_debug_mux, .regmap_name = "qcom,gcc" },
{ .mux = &gpu_cc_debug_mux, .regmap_name = "qcom,gpucc" },
@@ -418,18 +407,10 @@ static struct clk_dummy measure_only_mccc_clk = {
},
};
-static struct clk_dummy measure_only_apcs_gold_post_acd_clk = {
+static struct clk_dummy pwrcl_clk = {
.rrate = 1000,
.hw.init = &(struct clk_init_data){
- .name = "measure_only_apcs_gold_post_acd_clk",
- .ops = &clk_dummy_ops,
- },
-};
-
-static struct clk_dummy measure_only_apcs_silver_post_acd_clk = {
- .rrate = 1000,
- .hw.init = &(struct clk_init_data){
- .name = "measure_only_apcs_silver_post_acd_clk",
+ .name = "pwrcl_clk",
.ops = &clk_dummy_ops,
},
};
@@ -458,13 +439,66 @@ static struct clk_dummy measure_only_snoc_clk = {
},
};
+static struct clk_dummy measure_only_qpic_clk = {
+ .rrate = 1000,
+ .hw.init = &(struct clk_init_data){
+ .name = "measure_only_qpic_clk",
+ .ops = &clk_dummy_ops,
+ },
+};
+
+static struct clk_dummy measure_only_qpic_ahb_clk = {
+ .rrate = 1000,
+ .hw.init = &(struct clk_init_data){
+ .name = "measure_only_qpic_ahb_clk",
+ .ops = &clk_dummy_ops,
+ },
+};
+
+static struct clk_dummy measure_only_hwkm_km_core_clk = {
+ .rrate = 1000,
+ .hw.init = &(struct clk_init_data){
+ .name = "measure_only_hwkm_km_core_clk",
+ .ops = &clk_dummy_ops,
+ },
+};
+
+static struct clk_dummy measure_only_hwkm_ahb_clk = {
+ .rrate = 1000,
+ .hw.init = &(struct clk_init_data){
+ .name = "measure_only_hwkm_ahb_clk",
+ .ops = &clk_dummy_ops,
+ },
+};
+
+static struct clk_dummy measure_only_pka_core_clk = {
+ .rrate = 1000,
+ .hw.init = &(struct clk_init_data){
+ .name = "measure_only_pka_core_clk",
+ .ops = &clk_dummy_ops,
+ },
+};
+
+static struct clk_dummy measure_only_pka_ahb_clk = {
+ .rrate = 1000,
+ .hw.init = &(struct clk_init_data){
+ .name = "measure_only_pka_ahb_clk",
+ .ops = &clk_dummy_ops,
+ },
+};
+
static struct clk_hw *debugcc_scuba_hws[] = {
- &measure_only_apcs_gold_post_acd_clk.hw,
- &measure_only_apcs_silver_post_acd_clk.hw,
+ &pwrcl_clk.hw,
&measure_only_cnoc_clk.hw,
&measure_only_ipa_2x_clk.hw,
&measure_only_mccc_clk.hw,
&measure_only_snoc_clk.hw,
+ &measure_only_qpic_clk.hw,
+ &measure_only_qpic_ahb_clk.hw,
+ &measure_only_hwkm_km_core_clk.hw,
+ &measure_only_hwkm_ahb_clk.hw,
+ &measure_only_pka_core_clk.hw,
+ &measure_only_pka_ahb_clk.hw,
};
static const struct of_device_id clk_debug_match_table[] = {
diff --git a/drivers/clk/qcom/dispcc-scuba.c b/drivers/clk/qcom/dispcc-scuba.c
index 12e4db7..4fcd457 100644
--- a/drivers/clk/qcom/dispcc-scuba.c
+++ b/drivers/clk/qcom/dispcc-scuba.c
@@ -320,7 +320,6 @@ static struct clk_rcg2 disp_cc_sleep_clk_src = {
.hid_width = 5,
.parent_map = disp_cc_parent_map_5,
.freq_tbl = ftbl_disp_cc_sleep_clk_src,
- .enable_safe_config = true,
.clkr.hw.init = &(struct clk_init_data){
.name = "disp_cc_sleep_clk_src",
.parent_names = disp_cc_parent_names_5,
diff --git a/drivers/clk/qcom/gcc-scuba.c b/drivers/clk/qcom/gcc-scuba.c
index 55c7147..b8bf0d6e 100644
--- a/drivers/clk/qcom/gcc-scuba.c
+++ b/drivers/clk/qcom/gcc-scuba.c
@@ -1488,7 +1488,7 @@ static const struct freq_tbl ftbl_gcc_sdcc1_apps_clk_src[] = {
F(50000000, P_GPLL0_OUT_AUX2, 6, 0, 0),
F(100000000, P_GPLL0_OUT_AUX2, 3, 0, 0),
F(192000000, P_GPLL6_OUT_MAIN, 2, 0, 0),
- F(200000000, P_GPLL0_OUT_EARLY, 3, 0, 0),
+ F(384000000, P_GPLL6_OUT_MAIN, 1, 0, 0),
{ }
};
@@ -1508,7 +1508,7 @@ static struct clk_rcg2 gcc_sdcc1_apps_clk_src = {
.num_rate_max = VDD_NUM,
.rate_max = (unsigned long[VDD_NUM]) {
[VDD_LOWER] = 100000000,
- [VDD_LOW_L1] = 200000000},
+ [VDD_LOW_L1] = 384000000},
},
};
@@ -2423,7 +2423,7 @@ static struct clk_branch gcc_gpu_iref_clk = {
static struct clk_branch gcc_gpu_memnoc_gfx_clk = {
.halt_reg = 0x3600c,
- .halt_check = BRANCH_HALT,
+ .halt_check = BRANCH_VOTED,
.hwcg_reg = 0x3600c,
.hwcg_bit = 1,
.clkr = {
@@ -2465,19 +2465,6 @@ static struct clk_branch gcc_gpu_throttle_core_clk = {
},
};
-static struct clk_branch gcc_gpu_throttle_xo_clk = {
- .halt_reg = 0x36044,
- .halt_check = BRANCH_HALT,
- .clkr = {
- .enable_reg = 0x36044,
- .enable_mask = BIT(0),
- .hw.init = &(struct clk_init_data){
- .name = "gcc_gpu_throttle_xo_clk",
- .ops = &clk_branch2_ops,
- },
- },
-};
-
static struct clk_branch gcc_pdm2_clk = {
.halt_reg = 0x2000c,
.halt_check = BRANCH_HALT,
@@ -3192,7 +3179,6 @@ static struct clk_regmap *gcc_scuba_clocks[] = {
[GCC_GPU_MEMNOC_GFX_CLK] = &gcc_gpu_memnoc_gfx_clk.clkr,
[GCC_GPU_SNOC_DVM_GFX_CLK] = &gcc_gpu_snoc_dvm_gfx_clk.clkr,
[GCC_GPU_THROTTLE_CORE_CLK] = &gcc_gpu_throttle_core_clk.clkr,
- [GCC_GPU_THROTTLE_XO_CLK] = &gcc_gpu_throttle_xo_clk.clkr,
[GCC_PDM2_CLK] = &gcc_pdm2_clk.clkr,
[GCC_PDM2_CLK_SRC] = &gcc_pdm2_clk_src.clkr,
[GCC_PDM_AHB_CLK] = &gcc_pdm_ahb_clk.clkr,
diff --git a/drivers/clk/qcom/gdsc-regulator.c b/drivers/clk/qcom/gdsc-regulator.c
index 1b95e82..e16a9e5 100644
--- a/drivers/clk/qcom/gdsc-regulator.c
+++ b/drivers/clk/qcom/gdsc-regulator.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
- * Copyright (c) 2017-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2017-2020, The Linux Foundation. All rights reserved.
*/
#include <linux/kernel.h>
@@ -74,6 +74,7 @@ struct gdsc {
int reset_count;
int root_clk_idx;
u32 gds_timeout;
+ bool skip_disable_before_enable;
};
enum gdscr_status {
@@ -366,6 +367,7 @@ static int gdsc_enable(struct regulator_dev *rdev)
clk_disable_unprepare(sc->clocks[sc->root_clk_idx]);
sc->is_gdsc_enabled = true;
+ sc->skip_disable_before_enable = false;
end:
if (ret && sc->bus_handle) {
msm_bus_scale_client_update_request(sc->bus_handle, 0);
@@ -384,6 +386,16 @@ static int gdsc_disable(struct regulator_dev *rdev)
uint32_t regval;
int i, ret = 0;
+ /*
+ * Protect GDSC against late_init disabling when the GDSC is enabled
+ * by an entity external to HLOS.
+ */
+ if (sc->skip_disable_before_enable) {
+ dev_dbg(&rdev->dev, "Skip Disabling: %s\n", sc->rdesc.name);
+ sc->skip_disable_before_enable = false;
+ return 0;
+ }
+
if (sc->force_root_en)
clk_prepare_enable(sc->clocks[sc->root_clk_idx]);
@@ -670,6 +682,8 @@ static int gdsc_parse_dt_data(struct gdsc *sc, struct device *dev,
"qcom,no-status-check-on-disable");
sc->retain_ff_enable = of_property_read_bool(dev->of_node,
"qcom,retain-regs");
+ sc->skip_disable_before_enable = of_property_read_bool(dev->of_node,
+ "qcom,skip-disable-before-sw-enable");
sc->toggle_logic = !of_property_read_bool(dev->of_node,
"qcom,skip-logic-collapse");
diff --git a/drivers/clk/qcom/gpucc-scuba.c b/drivers/clk/qcom/gpucc-scuba.c
index 1f6836b..87e11f8 100644
--- a/drivers/clk/qcom/gpucc-scuba.c
+++ b/drivers/clk/qcom/gpucc-scuba.c
@@ -110,9 +110,9 @@ static struct clk_alpha_pll gpu_cc_pll0 = {
.num_rate_max = VDD_NUM,
.rate_max = (unsigned long[VDD_NUM]) {
[VDD_MIN] = 1200000000,
- [VDD_LOWER] = 2400000000,
- [VDD_LOW] = 3000000000,
- [VDD_NOMINAL] = 3300000000},
+ [VDD_LOWER] = 2400000000UL,
+ [VDD_LOW] = 3000000000UL,
+ [VDD_NOMINAL] = 3300000000UL},
},
},
};
@@ -184,6 +184,7 @@ static struct clk_branch gpu_cc_ahb_clk = {
.enable_mask = BIT(0),
.hw.init = &(struct clk_init_data){
.name = "gpu_cc_ahb_clk",
+ .flags = CLK_IS_CRITICAL,
.ops = &clk_branch2_ops,
},
},
@@ -290,6 +291,7 @@ static struct clk_branch gpu_cc_gx_cxo_clk = {
.enable_mask = BIT(0),
.hw.init = &(struct clk_init_data){
.name = "gpu_cc_gx_cxo_clk",
+ .flags = CLK_IS_CRITICAL,
.ops = &clk_branch2_ops,
},
},
diff --git a/drivers/cpufreq/qcom-cpufreq-hw-debug.c b/drivers/cpufreq/qcom-cpufreq-hw-debug.c
index d51acca..5e247ae 100644
--- a/drivers/cpufreq/qcom-cpufreq-hw-debug.c
+++ b/drivers/cpufreq/qcom-cpufreq-hw-debug.c
@@ -489,6 +489,9 @@ static int cpufreq_get_hwregs(struct platform_device *pdev)
return -ENOMEM;
prop = of_find_property(pdev->dev.of_node, "qcom,freq-hw-domain", NULL);
+ if (!prop)
+ return -EINVAL;
+
hw_regs->domain_cnt = prop->length / (2 * sizeof(prop->length));
for (i = 0; i < hw_regs->domain_cnt; i++) {
@@ -520,7 +523,7 @@ static int enable_cpufreq_hw_trace_debug(struct platform_device *pdev,
{
struct resource *res;
void *base;
- int ret;
+ int ret, debug_only, epss_debug_only;
ret = cpufreq_get_hwregs(pdev);
if (ret < 0) {
@@ -538,8 +541,12 @@ static int enable_cpufreq_hw_trace_debug(struct platform_device *pdev,
hw_regs->debugfs_base, NULL, &cpufreq_debug_register_fops))
goto debugfs_fail;
- if (!is_secure || of_device_is_compatible(pdev->dev.of_node,
- "qcom,cpufreq-hw-epss-debug"))
+ debug_only = of_device_is_compatible(pdev->dev.of_node,
+ "qcom,cpufreq-hw-debug");
+ epss_debug_only = of_device_is_compatible(pdev->dev.of_node,
+ "qcom,cpufreq-hw-epss-debug");
+
+ if (!is_secure || epss_debug_only || debug_only)
return 0;
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "domain-top");
@@ -594,6 +601,8 @@ static int qcom_cpufreq_hw_debug_remove(struct platform_device *pdev)
static const struct of_device_id qcom_cpufreq_hw_debug_trace_match[] = {
{ .compatible = "qcom,cpufreq-hw-debug-trace",
.data = &cpufreq_qcom_std_data },
+ { .compatible = "qcom,cpufreq-hw-debug",
+ .data = &cpufreq_qcom_std_data },
{ .compatible = "qcom,cpufreq-hw-epss-debug",
.data = &cpufreq_qcom_std_epss_data },
{}
diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig
index c65f2a8..20b245b 100644
--- a/drivers/crypto/Kconfig
+++ b/drivers/crypto/Kconfig
@@ -804,8 +804,4 @@
source "drivers/crypto/hisilicon/Kconfig"
-if ARCH_QCOM
-source drivers/crypto/msm/Kconfig
-endif
-
endif # CRYPTO_HW
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index e2ca339..c23396f 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -21,7 +21,6 @@
obj-$(CONFIG_CRYPTO_DEV_MXC_SCC) += mxc-scc.o
obj-$(CONFIG_CRYPTO_DEV_NIAGARA2) += n2_crypto.o
n2_crypto-y := n2_core.o n2_asm.o
-obj-$(CONFIG_CRYPTO_DEV_QCOM_ICE) += msm/
obj-$(CONFIG_CRYPTO_DEV_NX) += nx/
obj-$(CONFIG_CRYPTO_DEV_OMAP) += omap-crypto.o
obj-$(CONFIG_CRYPTO_DEV_OMAP_AES) += omap-aes-driver.o
diff --git a/drivers/crypto/msm/Kconfig b/drivers/crypto/msm/Kconfig
deleted file mode 100644
index cd4e519..0000000
--- a/drivers/crypto/msm/Kconfig
+++ /dev/null
@@ -1,10 +0,0 @@
-# SPDX-License-Identifier: GPL-2.0-only
-config CRYPTO_DEV_QCOM_ICE
- tristate "Inline Crypto Module"
- default n
- depends on BLK_DEV_DM
- help
- This driver supports Inline Crypto Engine for QTI chipsets, MSM8994
- and later, to accelerate crypto operations for storage needs.
- To compile this driver as a module, choose M here: the
- module will be called ice.
diff --git a/drivers/crypto/msm/Makefile b/drivers/crypto/msm/Makefile
index 48a92b6..ba6763c 100644
--- a/drivers/crypto/msm/Makefile
+++ b/drivers/crypto/msm/Makefile
@@ -4,4 +4,3 @@
obj-$(CONFIG_CRYPTO_DEV_QCEDEV) += qcedev_smmu.o
obj-$(CONFIG_CRYPTO_DEV_QCRYPTO) += qcrypto.o
obj-$(CONFIG_CRYPTO_DEV_OTA_CRYPTO) += ota_crypto.o
-obj-$(CONFIG_CRYPTO_DEV_QCOM_ICE) += ice.o
diff --git a/drivers/crypto/msm/ice.c b/drivers/crypto/msm/ice.c
deleted file mode 100644
index 097e871..0000000
--- a/drivers/crypto/msm/ice.c
+++ /dev/null
@@ -1,1784 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * QTI Inline Crypto Engine (ICE) driver
- *
- * Copyright (c) 2014-2019, The Linux Foundation. All rights reserved.
- */
-
-#include <linux/module.h>
-#include <linux/init.h>
-#include <linux/io.h>
-#include <linux/interrupt.h>
-#include <linux/delay.h>
-#include <linux/of.h>
-#include <linux/device-mapper.h>
-#include <linux/clk.h>
-#include <linux/regulator/consumer.h>
-#include <linux/msm-bus.h>
-#include <crypto/ice.h>
-#include <soc/qcom/scm.h>
-#include <soc/qcom/qseecomi.h>
-#include "iceregs.h"
-#include <linux/pfk.h>
-#include <linux/atomic.h>
-#include <linux/wait.h>
-
-#define TZ_SYSCALL_CREATE_SMC_ID(o, s, f) \
- ((uint32_t)((((o & 0x3f) << 24) | (s & 0xff) << 8) | (f & 0xff)))
-
-#define TZ_OWNER_QSEE_OS 50
-#define TZ_SVC_KEYSTORE 5 /* Keystore management */
-
-#define TZ_OS_KS_RESTORE_KEY_ID \
- TZ_SYSCALL_CREATE_SMC_ID(TZ_OWNER_QSEE_OS, TZ_SVC_KEYSTORE, 0x06)
-
-#define TZ_SYSCALL_CREATE_PARAM_ID_0 0
-
-#define TZ_OS_KS_RESTORE_KEY_ID_PARAM_ID \
- TZ_SYSCALL_CREATE_PARAM_ID_0
-
-#define TZ_OS_KS_RESTORE_KEY_CONFIG_ID \
- TZ_SYSCALL_CREATE_SMC_ID(TZ_OWNER_QSEE_OS, TZ_SVC_KEYSTORE, 0x06)
-
-#define TZ_OS_KS_RESTORE_KEY_CONFIG_ID_PARAM_ID \
- TZ_SYSCALL_CREATE_PARAM_ID_1(TZ_SYSCALL_PARAM_TYPE_VAL)
-
-
-#define ICE_REV(x, y) (((x) & ICE_CORE_##y##_REV_MASK) >> ICE_CORE_##y##_REV)
-#define QCOM_UFS_ICE_DEV "iceufs"
-#define QCOM_UFS_CARD_ICE_DEV "iceufscard"
-#define QCOM_SDCC_ICE_DEV "icesdcc"
-#define QCOM_ICE_MAX_BIST_CHECK_COUNT 100
-
-#define QCOM_ICE_ENCRYPT 0x1
-#define QCOM_ICE_DECRYPT 0x2
-#define QCOM_SECT_LEN_IN_BYTE 512
-#define QCOM_UD_FOOTER_SIZE 0x4000
-#define QCOM_UD_FOOTER_SECS (QCOM_UD_FOOTER_SIZE / QCOM_SECT_LEN_IN_BYTE)
-
-#define ICE_CRYPTO_CXT_FDE 1
-#define ICE_CRYPTO_CXT_FBE 2
-#define ICE_INSTANCE_TYPE_LENGTH 12
-
-static int ice_fde_flag;
-
-struct ice_clk_info {
- struct list_head list;
- struct clk *clk;
- const char *name;
- u32 max_freq;
- u32 min_freq;
- u32 curr_freq;
- bool enabled;
-};
-
-static LIST_HEAD(ice_devices);
-
-static int qti_ice_setting_config(struct request *req,
- struct ice_device *ice_dev,
- struct ice_crypto_setting *crypto_data,
- struct ice_data_setting *setting, uint32_t cxt)
-{
- if (ice_dev->is_ice_disable_fuse_blown) {
- pr_err("%s ICE disabled fuse is blown\n", __func__);
- return -EPERM;
- }
-
- if (!setting)
- return -EINVAL;
-
- if ((short)(crypto_data->key_index) >= 0) {
- memcpy(&setting->crypto_data, crypto_data,
- sizeof(setting->crypto_data));
-
- if (rq_data_dir(req) == WRITE) {
- if ((cxt == ICE_CRYPTO_CXT_FBE) ||
- ((cxt == ICE_CRYPTO_CXT_FDE) &&
- (ice_fde_flag & QCOM_ICE_ENCRYPT)))
- setting->encr_bypass = false;
- } else if (rq_data_dir(req) == READ) {
- if ((cxt == ICE_CRYPTO_CXT_FBE) ||
- ((cxt == ICE_CRYPTO_CXT_FDE) &&
- (ice_fde_flag & QCOM_ICE_DECRYPT)))
- setting->decr_bypass = false;
- } else {
- /* Should I say BUG_ON */
- setting->encr_bypass = true;
- setting->decr_bypass = true;
- }
- }
-
- return 0;
-}
-
-void qcom_ice_set_fde_flag(int flag)
-{
- ice_fde_flag = flag;
- pr_debug("%s read_write setting %d\n", __func__, ice_fde_flag);
-}
-EXPORT_SYMBOL(qcom_ice_set_fde_flag);
-
-static int qcom_ice_enable_clocks(struct ice_device *, bool);
-
-#ifdef CONFIG_MSM_BUS_SCALING
-
-static int qcom_ice_set_bus_vote(struct ice_device *ice_dev, int vote)
-{
- int err = 0;
-
- if (vote != ice_dev->bus_vote.curr_vote) {
- err = msm_bus_scale_client_update_request(
- ice_dev->bus_vote.client_handle, vote);
- if (err) {
- dev_err(ice_dev->pdev,
- "%s:failed:client_handle=0x%x, vote=%d, err=%d\n",
- __func__, ice_dev->bus_vote.client_handle,
- vote, err);
- goto out;
- }
- ice_dev->bus_vote.curr_vote = vote;
- }
-out:
- return err;
-}
-
-static int qcom_ice_get_bus_vote(struct ice_device *ice_dev,
- const char *speed_mode)
-{
- struct device *dev = ice_dev->pdev;
- struct device_node *np = dev->of_node;
- int err;
- const char *key = "qcom,bus-vector-names";
-
- if (!speed_mode) {
- err = -EINVAL;
- goto out;
- }
-
- if (ice_dev->bus_vote.is_max_bw_needed && !!strcmp(speed_mode, "MIN"))
- err = of_property_match_string(np, key, "MAX");
- else
- err = of_property_match_string(np, key, speed_mode);
-out:
- if (err < 0)
- dev_err(dev, "%s: Invalid %s mode %d\n",
- __func__, speed_mode, err);
- return err;
-}
-
-static int qcom_ice_bus_register(struct ice_device *ice_dev)
-{
- int err = 0;
- struct msm_bus_scale_pdata *bus_pdata;
- struct device *dev = ice_dev->pdev;
- struct platform_device *pdev = to_platform_device(dev);
- struct device_node *np = dev->of_node;
-
- bus_pdata = msm_bus_cl_get_pdata(pdev);
- if (!bus_pdata) {
- dev_err(dev, "%s: failed to get bus vectors\n", __func__);
- err = -ENODATA;
- goto out;
- }
-
- err = of_property_count_strings(np, "qcom,bus-vector-names");
- if (err < 0 || err != bus_pdata->num_usecases) {
- dev_err(dev, "%s: Error = %d with qcom,bus-vector-names\n",
- __func__, err);
- goto out;
- }
- err = 0;
-
- ice_dev->bus_vote.client_handle =
- msm_bus_scale_register_client(bus_pdata);
- if (!ice_dev->bus_vote.client_handle) {
- dev_err(dev, "%s: msm_bus_scale_register_client failed\n",
- __func__);
- err = -EFAULT;
- goto out;
- }
-
- /* cache the vote index for minimum and maximum bandwidth */
- ice_dev->bus_vote.min_bw_vote = qcom_ice_get_bus_vote(ice_dev, "MIN");
- ice_dev->bus_vote.max_bw_vote = qcom_ice_get_bus_vote(ice_dev, "MAX");
-out:
- return err;
-}
-
-#else
-
-static int qcom_ice_set_bus_vote(struct ice_device *ice_dev, int vote)
-{
- return 0;
-}
-
-static int qcom_ice_get_bus_vote(struct ice_device *ice_dev,
- const char *speed_mode)
-{
- return 0;
-}
-
-static int qcom_ice_bus_register(struct ice_device *ice_dev)
-{
- return 0;
-}
-#endif /* CONFIG_MSM_BUS_SCALING */
-
-static int qcom_ice_get_vreg(struct ice_device *ice_dev)
-{
- int ret = 0;
-
- if (!ice_dev->is_regulator_available)
- return 0;
-
- if (ice_dev->reg)
- return 0;
-
- ice_dev->reg = devm_regulator_get(ice_dev->pdev, "vdd-hba");
- if (IS_ERR(ice_dev->reg)) {
- ret = PTR_ERR(ice_dev->reg);
- dev_err(ice_dev->pdev, "%s: %s get failed, err=%d\n",
- __func__, "vdd-hba-supply", ret);
- }
- return ret;
-}
-
-static void qcom_ice_config_proc_ignore(struct ice_device *ice_dev)
-{
- u32 regval;
-
- if (ICE_REV(ice_dev->ice_hw_version, MAJOR) == 2 &&
- ICE_REV(ice_dev->ice_hw_version, MINOR) == 0 &&
- ICE_REV(ice_dev->ice_hw_version, STEP) == 0) {
- regval = qcom_ice_readl(ice_dev,
- QCOM_ICE_REGS_ADVANCED_CONTROL);
- regval |= 0x800;
- qcom_ice_writel(ice_dev, regval,
- QCOM_ICE_REGS_ADVANCED_CONTROL);
- /* Ensure register is updated */
- mb();
- }
-}
-
-static void qcom_ice_low_power_mode_enable(struct ice_device *ice_dev)
-{
- u32 regval;
-
- regval = qcom_ice_readl(ice_dev, QCOM_ICE_REGS_ADVANCED_CONTROL);
- /*
- * Enable low power mode sequence
- * [0]-0, [1]-0, [2]-0, [3]-E, [4]-0, [5]-0, [6]-0, [7]-0
- */
- regval |= 0x7000;
- qcom_ice_writel(ice_dev, regval, QCOM_ICE_REGS_ADVANCED_CONTROL);
- /*
- * Ensure previous instructions was completed before issuing next
- * ICE initialization/optimization instruction
- */
- mb();
-}
-
-static void qcom_ice_enable_test_bus_config(struct ice_device *ice_dev)
-{
- /*
- * Configure & enable ICE_TEST_BUS_REG to reflect ICE intr lines
- * MAIN_TEST_BUS_SELECTOR = 0 (ICE_CONFIG)
- * TEST_BUS_REG_EN = 1 (ENABLE)
- */
- u32 regval;
-
- if (ICE_REV(ice_dev->ice_hw_version, MAJOR) >= 2)
- return;
-
- regval = qcom_ice_readl(ice_dev, QCOM_ICE_REGS_TEST_BUS_CONTROL);
- regval &= 0x0FFFFFFF;
- /* TBD: replace 0x2 with define in iceregs.h */
- regval |= 0x2;
- qcom_ice_writel(ice_dev, regval, QCOM_ICE_REGS_TEST_BUS_CONTROL);
-
- /*
- * Ensure previous instructions was completed before issuing next
- * ICE initialization/optimization instruction
- */
- mb();
-}
-
-static void qcom_ice_optimization_enable(struct ice_device *ice_dev)
-{
- u32 regval;
-
- regval = qcom_ice_readl(ice_dev, QCOM_ICE_REGS_ADVANCED_CONTROL);
- if (ICE_REV(ice_dev->ice_hw_version, MAJOR) >= 2)
- regval |= 0xD807100;
- else if (ICE_REV(ice_dev->ice_hw_version, MAJOR) == 1)
- regval |= 0x3F007100;
-
- /* ICE Optimizations Enable Sequence */
- udelay(5);
- /* [0]-0, [1]-0, [2]-8, [3]-E, [4]-0, [5]-0, [6]-F, [7]-A */
- qcom_ice_writel(ice_dev, regval, QCOM_ICE_REGS_ADVANCED_CONTROL);
- /*
- * Ensure previous instructions was completed before issuing next
- * ICE initialization/optimization instruction
- */
- mb();
-
- /* ICE HPG requires sleep before writing */
- udelay(5);
- if (ICE_REV(ice_dev->ice_hw_version, MAJOR) == 1) {
- regval = 0;
- regval = qcom_ice_readl(ice_dev, QCOM_ICE_REGS_ENDIAN_SWAP);
- regval |= 0xF;
- qcom_ice_writel(ice_dev, regval, QCOM_ICE_REGS_ENDIAN_SWAP);
- /*
- * Ensure previous instructions were completed before issue
- * next ICE commands
- */
- mb();
- }
-}
-
-static int qcom_ice_wait_bist_status(struct ice_device *ice_dev)
-{
- int count;
- u32 reg;
-
- /* Poll until all BIST bits are reset */
- for (count = 0; count < QCOM_ICE_MAX_BIST_CHECK_COUNT; count++) {
- reg = qcom_ice_readl(ice_dev, QCOM_ICE_REGS_BIST_STATUS);
- if (!(reg & ICE_BIST_STATUS_MASK))
- break;
- udelay(50);
- }
-
- if (reg)
- return -ETIMEDOUT;
-
- return 0;
-}
-
-static int qcom_ice_enable(struct ice_device *ice_dev)
-{
- unsigned int reg;
- int ret = 0;
-
- if ((ICE_REV(ice_dev->ice_hw_version, MAJOR) > 2) ||
- ((ICE_REV(ice_dev->ice_hw_version, MAJOR) == 2) &&
- (ICE_REV(ice_dev->ice_hw_version, MINOR) >= 1)))
- ret = qcom_ice_wait_bist_status(ice_dev);
- if (ret) {
- dev_err(ice_dev->pdev, "BIST status error (%d)\n", ret);
- return ret;
- }
-
- /* Starting ICE v3 enabling is done at storage controller (UFS/SDCC) */
- if (ICE_REV(ice_dev->ice_hw_version, MAJOR) >= 3)
- return 0;
-
- /*
- * To enable ICE, perform following
- * 1. Set IGNORE_CONTROLLER_RESET to USE in ICE_RESET register
- * 2. Disable GLOBAL_BYPASS bit in ICE_CONTROL register
- */
- reg = qcom_ice_readl(ice_dev, QCOM_ICE_REGS_RESET);
-
- if (ICE_REV(ice_dev->ice_hw_version, MAJOR) >= 2)
- reg &= 0x0;
- else if (ICE_REV(ice_dev->ice_hw_version, MAJOR) == 1)
- reg &= ~0x100;
-
- qcom_ice_writel(ice_dev, reg, QCOM_ICE_REGS_RESET);
-
- /*
- * Ensure previous instructions was completed before issuing next
- * ICE initialization/optimization instruction
- */
- mb();
-
- reg = qcom_ice_readl(ice_dev, QCOM_ICE_REGS_CONTROL);
-
- if (ICE_REV(ice_dev->ice_hw_version, MAJOR) >= 2)
- reg &= 0xFFFE;
- else if (ICE_REV(ice_dev->ice_hw_version, MAJOR) == 1)
- reg &= ~0x7;
- qcom_ice_writel(ice_dev, reg, QCOM_ICE_REGS_CONTROL);
-
- /*
- * Ensure previous instructions was completed before issuing next
- * ICE initialization/optimization instruction
- */
- mb();
-
- if ((ICE_REV(ice_dev->ice_hw_version, MAJOR) > 2) ||
- ((ICE_REV(ice_dev->ice_hw_version, MAJOR) == 2) &&
- (ICE_REV(ice_dev->ice_hw_version, MINOR) >= 1))) {
- reg = qcom_ice_readl(ice_dev, QCOM_ICE_REGS_BYPASS_STATUS);
- if ((reg & 0x80000000) != 0x0) {
- pr_err("%s: Bypass failed for ice = %pK\n",
- __func__, (void *)ice_dev);
- WARN_ON(1);
- }
- }
- return 0;
-}
-
-static int qcom_ice_verify_ice(struct ice_device *ice_dev)
-{
- unsigned int rev;
- unsigned int maj_rev, min_rev, step_rev;
-
- rev = qcom_ice_readl(ice_dev, QCOM_ICE_REGS_VERSION);
- maj_rev = (rev & ICE_CORE_MAJOR_REV_MASK) >> ICE_CORE_MAJOR_REV;
- min_rev = (rev & ICE_CORE_MINOR_REV_MASK) >> ICE_CORE_MINOR_REV;
- step_rev = (rev & ICE_CORE_STEP_REV_MASK) >> ICE_CORE_STEP_REV;
-
- if (maj_rev > ICE_CORE_CURRENT_MAJOR_VERSION) {
- pr_err("%s: Unknown QC ICE device at %lu, rev %d.%d.%d\n",
- __func__, (unsigned long)ice_dev->mmio,
- maj_rev, min_rev, step_rev);
- return -ENODEV;
- }
- ice_dev->ice_hw_version = rev;
-
- dev_info(ice_dev->pdev, "QC ICE %d.%d.%d device found @0x%pK\n",
- maj_rev, min_rev, step_rev,
- ice_dev->mmio);
-
- return 0;
-}
-
-static void qcom_ice_enable_intr(struct ice_device *ice_dev)
-{
- unsigned int reg;
-
- reg = qcom_ice_readl(ice_dev, QCOM_ICE_REGS_NON_SEC_IRQ_MASK);
- reg &= ~QCOM_ICE_NON_SEC_IRQ_MASK;
- qcom_ice_writel(ice_dev, reg, QCOM_ICE_REGS_NON_SEC_IRQ_MASK);
- /*
- * Ensure previous instructions was completed before issuing next
- * ICE initialization/optimization instruction
- */
- mb();
-}
-
-static void qcom_ice_disable_intr(struct ice_device *ice_dev)
-{
- unsigned int reg;
-
- reg = qcom_ice_readl(ice_dev, QCOM_ICE_REGS_NON_SEC_IRQ_MASK);
- reg |= QCOM_ICE_NON_SEC_IRQ_MASK;
- qcom_ice_writel(ice_dev, reg, QCOM_ICE_REGS_NON_SEC_IRQ_MASK);
- /*
- * Ensure previous instructions was completed before issuing next
- * ICE initialization/optimization instruction
- */
- mb();
-}
-
-static irqreturn_t qcom_ice_isr(int isr, void *data)
-{
- irqreturn_t retval = IRQ_NONE;
- u32 status;
- struct ice_device *ice_dev = data;
-
- status = qcom_ice_readl(ice_dev, QCOM_ICE_REGS_NON_SEC_IRQ_STTS);
- if (status) {
- ice_dev->error_cb(ice_dev->host_controller_data, status);
-
- /* Interrupt has been handled. Clear the IRQ */
- qcom_ice_writel(ice_dev, status, QCOM_ICE_REGS_NON_SEC_IRQ_CLR);
- /* Ensure instruction is completed */
- mb();
- retval = IRQ_HANDLED;
- }
- return retval;
-}
-
-static void qcom_ice_parse_ice_instance_type(struct platform_device *pdev,
- struct ice_device *ice_dev)
-{
- int ret = -1;
- struct device *dev = &pdev->dev;
- struct device_node *np = dev->of_node;
- const char *type;
-
- ret = of_property_read_string_index(np, "qcom,instance-type", 0, &type);
- if (ret) {
- pr_err("%s: Could not get ICE instance type\n", __func__);
- goto out;
- }
- strlcpy(ice_dev->ice_instance_type, type, QCOM_ICE_TYPE_NAME_LEN);
-out:
- return;
-}
-
-static int qcom_ice_parse_clock_info(struct platform_device *pdev,
- struct ice_device *ice_dev)
-{
- int ret = -1, cnt, i, len;
- struct device *dev = &pdev->dev;
- struct device_node *np = dev->of_node;
- char *name;
- struct ice_clk_info *clki;
- u32 *clkfreq = NULL;
-
- if (!np)
- goto out;
-
- cnt = of_property_count_strings(np, "clock-names");
- if (cnt <= 0) {
- dev_info(dev, "%s: Unable to find clocks, assuming enabled\n",
- __func__);
- ret = cnt;
- goto out;
- }
-
- if (!of_get_property(np, "qcom,op-freq-hz", &len)) {
- dev_info(dev, "qcom,op-freq-hz property not specified\n");
- goto out;
- }
-
- len = len/sizeof(*clkfreq);
- if (len != cnt)
- goto out;
-
- clkfreq = devm_kzalloc(dev, len * sizeof(*clkfreq), GFP_KERNEL);
- if (!clkfreq) {
- ret = -ENOMEM;
- goto out;
- }
- ret = of_property_read_u32_array(np, "qcom,op-freq-hz", clkfreq, len);
-
- INIT_LIST_HEAD(&ice_dev->clk_list_head);
-
- for (i = 0; i < cnt; i++) {
- ret = of_property_read_string_index(np,
- "clock-names", i, (const char **)&name);
- if (ret)
- goto out;
-
- clki = devm_kzalloc(dev, sizeof(*clki), GFP_KERNEL);
- if (!clki) {
- ret = -ENOMEM;
- goto out;
- }
- clki->max_freq = clkfreq[i];
- clki->name = kstrdup(name, GFP_KERNEL);
- list_add_tail(&clki->list, &ice_dev->clk_list_head);
- }
-out:
- if (clkfreq)
- devm_kfree(dev, (void *)clkfreq);
- return ret;
-}
-
-static int qcom_ice_get_device_tree_data(struct platform_device *pdev,
- struct ice_device *ice_dev)
-{
- struct device *dev = &pdev->dev;
- int rc = -1;
- int irq;
-
- ice_dev->res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- if (!ice_dev->res) {
- pr_err("%s: No memory available for IORESOURCE\n", __func__);
- return -ENOMEM;
- }
-
- ice_dev->mmio = devm_ioremap_resource(dev, ice_dev->res);
- if (IS_ERR(ice_dev->mmio)) {
- rc = PTR_ERR(ice_dev->mmio);
- pr_err("%s: Error = %d mapping ICE io memory\n", __func__, rc);
- goto out;
- }
-
- if (!of_parse_phandle(pdev->dev.of_node, "vdd-hba-supply", 0)) {
- pr_err("%s: No vdd-hba-supply regulator, assuming not needed\n",
- __func__);
- ice_dev->is_regulator_available = false;
- } else {
- ice_dev->is_regulator_available = true;
- }
- ice_dev->is_ice_clk_available = of_property_read_bool(
- (&pdev->dev)->of_node,
- "qcom,enable-ice-clk");
-
- if (ice_dev->is_ice_clk_available) {
- rc = qcom_ice_parse_clock_info(pdev, ice_dev);
- if (rc) {
- pr_err("%s: qcom_ice_parse_clock_info failed (%d)\n",
- __func__, rc);
- goto err_dev;
- }
- }
-
- /* ICE interrupts is only relevant for v2.x */
- irq = platform_get_irq(pdev, 0);
- if (irq >= 0) {
- rc = devm_request_irq(dev, irq, qcom_ice_isr, 0, dev_name(dev),
- ice_dev);
- if (rc) {
- pr_err("%s: devm_request_irq irq=%d failed (%d)\n",
- __func__, irq, rc);
- goto err_dev;
- }
- ice_dev->irq = irq;
- pr_info("ICE IRQ = %d\n", ice_dev->irq);
- } else {
- dev_dbg(dev, "IRQ resource not available\n");
- }
-
- qcom_ice_parse_ice_instance_type(pdev, ice_dev);
-
- return 0;
-err_dev:
- if (rc && ice_dev->mmio)
- devm_iounmap(dev, ice_dev->mmio);
-out:
- return rc;
-}
-
-/*
- * ICE HW instance can exist in UFS or eMMC based storage HW
- * Userspace does not know what kind of ICE it is dealing with.
- * Though userspace can find which storage device it is booting
- * from but all kind of storage types dont support ICE from
- * beginning. So ICE device is created for user space to ping
- * if ICE exist for that kind of storage
- */
-static const struct file_operations qcom_ice_fops = {
- .owner = THIS_MODULE,
-};
-
-static int register_ice_device(struct ice_device *ice_dev)
-{
- int rc = 0;
- unsigned int baseminor = 0;
- unsigned int count = 1;
- struct device *class_dev;
- char ice_type[ICE_INSTANCE_TYPE_LENGTH];
-
- if (!strcmp(ice_dev->ice_instance_type, "sdcc"))
- strlcpy(ice_type, QCOM_SDCC_ICE_DEV, sizeof(ice_type));
- else if (!strcmp(ice_dev->ice_instance_type, "ufscard"))
- strlcpy(ice_type, QCOM_UFS_CARD_ICE_DEV, sizeof(ice_type));
- else
- strlcpy(ice_type, QCOM_UFS_ICE_DEV, sizeof(ice_type));
-
- rc = alloc_chrdev_region(&ice_dev->device_no, baseminor, count,
- ice_type);
- if (rc < 0) {
- pr_err("alloc_chrdev_region failed %d for %s\n", rc,
- ice_type);
- return rc;
- }
- ice_dev->driver_class = class_create(THIS_MODULE, ice_type);
- if (IS_ERR(ice_dev->driver_class)) {
- rc = -ENOMEM;
- pr_err("class_create failed %d for %s\n", rc, ice_type);
- goto exit_unreg_chrdev_region;
- }
- class_dev = device_create(ice_dev->driver_class, NULL,
- ice_dev->device_no, NULL, ice_type);
-
- if (!class_dev) {
- pr_err("class_device_create failed %d for %s\n", rc, ice_type);
- rc = -ENOMEM;
- goto exit_destroy_class;
- }
-
- cdev_init(&ice_dev->cdev, &qcom_ice_fops);
- ice_dev->cdev.owner = THIS_MODULE;
-
- rc = cdev_add(&ice_dev->cdev, MKDEV(MAJOR(ice_dev->device_no), 0), 1);
- if (rc < 0) {
- pr_err("cdev_add failed %d for %s\n", rc, ice_type);
- goto exit_destroy_device;
- }
- return 0;
-
-exit_destroy_device:
- device_destroy(ice_dev->driver_class, ice_dev->device_no);
-
-exit_destroy_class:
- class_destroy(ice_dev->driver_class);
-
-exit_unreg_chrdev_region:
- unregister_chrdev_region(ice_dev->device_no, 1);
- return rc;
-}
-
-static int qcom_ice_probe(struct platform_device *pdev)
-{
- struct ice_device *ice_dev;
- int rc = 0;
-
- if (!pdev) {
- pr_err("%s: Invalid platform_device passed\n",
- __func__);
- return -EINVAL;
- }
-
- ice_dev = kzalloc(sizeof(struct ice_device), GFP_KERNEL);
-
- if (!ice_dev) {
- rc = -ENOMEM;
- pr_err("%s: Error %d allocating memory for ICE device:\n",
- __func__, rc);
- goto out;
- }
-
- ice_dev->pdev = &pdev->dev;
- if (!ice_dev->pdev) {
- rc = -EINVAL;
- pr_err("%s: Invalid device passed in platform_device\n",
- __func__);
- goto err_ice_dev;
- }
-
- if (pdev->dev.of_node)
- rc = qcom_ice_get_device_tree_data(pdev, ice_dev);
- else {
- rc = -EINVAL;
- pr_err("%s: ICE device node not found\n", __func__);
- }
-
- if (rc)
- goto err_ice_dev;
-
- pr_debug("%s: Registering ICE device\n", __func__);
- rc = register_ice_device(ice_dev);
- if (rc) {
- pr_err("create character device failed.\n");
- goto err_ice_dev;
- }
-
- /*
- * If ICE is enabled here, it would be waste of power.
- * We would enable ICE when first request for crypto
- * operation arrives.
- */
- ice_dev->is_ice_enabled = false;
-
- rc = pfk_initialize_key_table(ice_dev);
- if (rc) {
- pr_err("Failed to initialize key table\n");
- goto err_ice_dev;
- }
-
- platform_set_drvdata(pdev, ice_dev);
- list_add_tail(&ice_dev->list, &ice_devices);
-
- goto out;
-
-err_ice_dev:
- kfree(ice_dev);
-out:
- return rc;
-}
-
-static int qcom_ice_remove(struct platform_device *pdev)
-{
- struct ice_device *ice_dev;
-
- ice_dev = (struct ice_device *)platform_get_drvdata(pdev);
-
- if (!ice_dev)
- return 0;
-
- pfk_remove(ice_dev);
- qcom_ice_disable_intr(ice_dev);
-
- device_init_wakeup(&pdev->dev, false);
- if (ice_dev->mmio)
- iounmap(ice_dev->mmio);
-
- list_del_init(&ice_dev->list);
- kfree(ice_dev);
-
- return 1;
-}
-
-static int qcom_ice_suspend(struct platform_device *pdev)
-{
- struct ice_device *ice_dev;
- int ret = 0;
-
- ice_dev = (struct ice_device *)platform_get_drvdata(pdev);
-
- if (!ice_dev)
- return -EINVAL;
- if (atomic_read(&ice_dev->is_ice_busy) != 0) {
- ret = wait_event_interruptible_timeout(
- ice_dev->block_suspend_ice_queue,
- atomic_read(&ice_dev->is_ice_busy) == 0,
- msecs_to_jiffies(1000));
-
- if (!ret) {
- pr_err("%s: Suspend ICE during an ongoing operation\n",
- __func__);
- atomic_set(&ice_dev->is_ice_suspended, 0);
- return -ETIME;
- }
- }
-
- atomic_set(&ice_dev->is_ice_suspended, 1);
- return 0;
-}
-
-static int qcom_ice_restore_config(void)
-{
- struct scm_desc desc = {0};
- int ret;
-
- /*
- * TZ would check KEYS_RAM_RESET_COMPLETED status bit before processing
- * restore config command. This would prevent two calls from HLOS to TZ
- * One to check KEYS_RAM_RESET_COMPLETED status bit second to restore
- * config
- */
-
- desc.arginfo = TZ_OS_KS_RESTORE_KEY_ID_PARAM_ID;
-
- ret = scm_call2(TZ_OS_KS_RESTORE_KEY_ID, &desc);
-
- if (ret)
- pr_err("%s: Error: 0x%x\n", __func__, ret);
-
- return ret;
-}
-
-static int qcom_ice_init_clocks(struct ice_device *ice)
-{
- int ret = -EINVAL;
- struct ice_clk_info *clki = NULL;
- struct device *dev = ice->pdev;
- struct list_head *head = &ice->clk_list_head;
-
- if (!head || list_empty(head)) {
- dev_err(dev, "%s:ICE Clock list null/empty\n", __func__);
- goto out;
- }
-
- list_for_each_entry(clki, head, list) {
- if (!clki->name)
- continue;
-
- clki->clk = devm_clk_get(dev, clki->name);
- if (IS_ERR(clki->clk)) {
- ret = PTR_ERR(clki->clk);
- dev_err(dev, "%s: %s clk get failed, %d\n",
- __func__, clki->name, ret);
- goto out;
- }
-
- /* Not all clocks would have a rate to be set */
- ret = 0;
- if (clki->max_freq) {
- ret = clk_set_rate(clki->clk, clki->max_freq);
- if (ret) {
- dev_err(dev,
- "%s: %s clk set rate(%dHz) failed, %d\n",
- __func__, clki->name,
- clki->max_freq, ret);
- goto out;
- }
- clki->curr_freq = clki->max_freq;
- dev_dbg(dev, "%s: clk: %s, rate: %lu\n", __func__,
- clki->name, clk_get_rate(clki->clk));
- }
- }
-out:
- return ret;
-}
-
-static int qcom_ice_enable_clocks(struct ice_device *ice, bool enable)
-{
- int ret = 0;
- struct ice_clk_info *clki = NULL;
- struct device *dev = ice->pdev;
- struct list_head *head = &ice->clk_list_head;
-
- if (!head || list_empty(head)) {
- dev_err(dev, "%s:ICE Clock list null/empty\n", __func__);
- ret = -EINVAL;
- goto out;
- }
-
- if (!ice->is_ice_clk_available) {
- dev_err(dev, "%s:ICE Clock not available\n", __func__);
- ret = -EINVAL;
- goto out;
- }
-
- list_for_each_entry(clki, head, list) {
- if (!clki->name)
- continue;
-
- if (enable)
- ret = clk_prepare_enable(clki->clk);
- else
- clk_disable_unprepare(clki->clk);
-
- if (ret) {
- dev_err(dev, "Unable to %s ICE core clk\n",
- enable?"enable":"disable");
- goto out;
- }
- }
-out:
- return ret;
-}
-
-static int qcom_ice_secure_ice_init(struct ice_device *ice_dev)
-{
- /* We need to enable source for ICE secure interrupts */
- int ret = 0;
- u32 regval;
-
- regval = scm_io_read((unsigned long)ice_dev->res +
- QCOM_ICE_LUT_KEYS_ICE_SEC_IRQ_MASK);
-
- regval &= ~QCOM_ICE_SEC_IRQ_MASK;
- ret = scm_io_write((unsigned long)ice_dev->res +
- QCOM_ICE_LUT_KEYS_ICE_SEC_IRQ_MASK, regval);
-
-	/*
-	 * Ensure previous instructions were completed before issuing the next
-	 * ICE initialization/optimization instruction
-	 */
- mb();
-
- if (!ret)
- pr_err("%s: failed(0x%x) to init secure ICE config\n",
- __func__, ret);
- return ret;
-}
-
-static int qcom_ice_update_sec_cfg(struct ice_device *ice_dev)
-{
- int ret = 0, scm_ret = 0;
-
- /* scm command buffer structure */
- struct qcom_scm_cmd_buf {
- unsigned int device_id;
- unsigned int spare;
- } cbuf = {0};
-
-	/*
-	 * Ideally, we should check the ICE version to decide whether to
-	 * proceed or not. Since the version won't be available when this
-	 * function is called, we need to depend upon is_ice_clk_available
-	 * to decide.
-	 */
- if (ice_dev->is_ice_clk_available)
- goto out;
-
- /*
- * Store dev_id in ice_device structure so that emmc/ufs cases can be
- * handled properly
- */
- #define RESTORE_SEC_CFG_CMD 0x2
- #define ICE_TZ_DEV_ID 20
-
- cbuf.device_id = ICE_TZ_DEV_ID;
- ret = scm_restore_sec_cfg(cbuf.device_id, cbuf.spare, &scm_ret);
- if (ret || scm_ret) {
- pr_err("%s: failed, ret %d scm_ret %d\n",
- __func__, ret, scm_ret);
- if (!ret)
- ret = scm_ret;
- }
-out:
-
- return ret;
-}
-
-static int qcom_ice_finish_init(struct ice_device *ice_dev)
-{
- unsigned int reg;
- int err = 0;
-
- if (!ice_dev) {
- pr_err("%s: Null data received\n", __func__);
- err = -ENODEV;
- goto out;
- }
-
- if (ice_dev->is_ice_clk_available) {
- err = qcom_ice_init_clocks(ice_dev);
- if (err)
- goto out;
-
- err = qcom_ice_bus_register(ice_dev);
- if (err)
- goto out;
- }
-
-	/*
-	 * It is possible that the ICE device is not probed when the host is
-	 * probed, which causes the host probe to be deferred. A deferred host
-	 * probe can cause a power collapse for the host, and that can wipe
-	 * the configurations of both host and ICE, so restore the config.
-	 */
- err = qcom_ice_update_sec_cfg(ice_dev);
- if (err)
- goto out;
-
- err = qcom_ice_verify_ice(ice_dev);
- if (err)
- goto out;
-
-	/* If the ICE_DISABLE_FUSE is blown, return immediately.
-	 * Currently, FORCE HW keys are also disabled, since
-	 * there is no use case for them in either FDE
-	 * or PFE.
-	 */
- reg = qcom_ice_readl(ice_dev, QCOM_ICE_REGS_FUSE_SETTING);
- reg &= (ICE_FUSE_SETTING_MASK |
- ICE_FORCE_HW_KEY0_SETTING_MASK |
- ICE_FORCE_HW_KEY1_SETTING_MASK);
-
- if (reg) {
- ice_dev->is_ice_disable_fuse_blown = true;
- pr_err("%s: Error: ICE_ERROR_HW_DISABLE_FUSE_BLOWN\n",
- __func__);
- err = -EPERM;
- goto out;
- }
-
- /* TZ side of ICE driver would handle secure init of ICE HW from v2 */
- if (ICE_REV(ice_dev->ice_hw_version, MAJOR) == 1 &&
- !qcom_ice_secure_ice_init(ice_dev)) {
- pr_err("%s: Error: ICE_ERROR_ICE_TZ_INIT_FAILED\n", __func__);
- err = -EFAULT;
- goto out;
- }
- init_waitqueue_head(&ice_dev->block_suspend_ice_queue);
- qcom_ice_low_power_mode_enable(ice_dev);
- qcom_ice_optimization_enable(ice_dev);
- qcom_ice_config_proc_ignore(ice_dev);
- qcom_ice_enable_test_bus_config(ice_dev);
- qcom_ice_enable(ice_dev);
- ice_dev->is_ice_enabled = true;
- qcom_ice_enable_intr(ice_dev);
- atomic_set(&ice_dev->is_ice_suspended, 0);
- atomic_set(&ice_dev->is_ice_busy, 0);
-out:
- return err;
-}
-
-static int qcom_ice_init(struct platform_device *pdev,
- void *host_controller_data,
- ice_error_cb error_cb)
-{
-	/*
-	 * A completion event for the host controller is triggered upon
-	 * initialization completion.
-	 * When ICE is initialized, it is put into global bypass mode.
-	 * When a request for data transfer is received, ICE is enabled
-	 * for that particular request.
-	 */
- struct ice_device *ice_dev;
-
- ice_dev = platform_get_drvdata(pdev);
- if (!ice_dev) {
- pr_err("%s: invalid device\n", __func__);
- return -EINVAL;
- }
-
- ice_dev->error_cb = error_cb;
- ice_dev->host_controller_data = host_controller_data;
-
- return qcom_ice_finish_init(ice_dev);
-}
-
-static int qcom_ice_finish_power_collapse(struct ice_device *ice_dev)
-{
- int err = 0;
-
- if (ice_dev->is_ice_disable_fuse_blown) {
- err = -EPERM;
- goto out;
- }
-
- if (ice_dev->is_ice_enabled) {
-		/*
-		 * ICE resets into global bypass mode with optimization and
-		 * low power mode disabled. Hence we need to redo those sequences.
-		 */
- qcom_ice_low_power_mode_enable(ice_dev);
-
- qcom_ice_enable_test_bus_config(ice_dev);
-
- qcom_ice_optimization_enable(ice_dev);
- qcom_ice_enable(ice_dev);
-
- if (ICE_REV(ice_dev->ice_hw_version, MAJOR) == 1) {
-			/*
-			 * When ICE resets, it wipes all of the keys from the
-			 * LUTs. The ICE driver should call TZ to restore them.
-			 */
- if (qcom_ice_restore_config()) {
- err = -EFAULT;
- goto out;
- }
-
-			/*
-			 * ICE loses its key configuration when UFS is
-			 * reset; restore it.
-			 */
- } else if (ICE_REV(ice_dev->ice_hw_version, MAJOR) > 2) {
- /*
- * for PFE case, clear the cached ICE key table,
- * this will force keys to be reconfigured
- * per each next transaction
- */
- pfk_clear_on_reset(ice_dev);
- }
- }
-
- ice_dev->ice_reset_complete_time = ktime_get();
-out:
- return err;
-}
-
-static int qcom_ice_resume(struct platform_device *pdev)
-{
-	/*
-	 * ICE is power collapsed when the storage controller is power
-	 * collapsed. The ICE resume function is responsible for:
-	 * - the ICE HW enabling sequence
-	 * - key restoration
-	 * A completion event should be triggered upon resume completion;
-	 * the storage driver will be fully operational only after
-	 * receiving this event.
-	 */
- struct ice_device *ice_dev;
-
- ice_dev = platform_get_drvdata(pdev);
-
- if (!ice_dev)
- return -EINVAL;
-
- if (ice_dev->is_ice_clk_available) {
- /*
- * Storage is calling this function after power collapse which
- * would put ICE into GLOBAL_BYPASS mode. Make sure to enable
- * ICE
- */
- qcom_ice_enable(ice_dev);
- }
- atomic_set(&ice_dev->is_ice_suspended, 0);
- return 0;
-}
-
-static void qcom_ice_dump_test_bus(struct ice_device *ice_dev)
-{
- u32 reg = 0x1;
- u32 val;
- u8 bus_selector;
- u8 stream_selector;
-
- pr_err("ICE TEST BUS DUMP:\n");
-
- for (bus_selector = 0; bus_selector <= 0xF; bus_selector++) {
- reg = 0x1; /* enable test bus */
- reg |= bus_selector << 28;
- if (bus_selector == 0xD)
- continue;
- qcom_ice_writel(ice_dev, reg, QCOM_ICE_REGS_TEST_BUS_CONTROL);
- /*
- * make sure test bus selector is written before reading
- * the test bus register
- */
- mb();
- val = qcom_ice_readl(ice_dev, QCOM_ICE_REGS_TEST_BUS_REG);
- pr_err("ICE_TEST_BUS_CONTROL: 0x%08x | ICE_TEST_BUS_REG: 0x%08x\n",
- reg, val);
- }
-
- pr_err("ICE TEST BUS DUMP (ICE_STREAM1_DATAPATH_TEST_BUS):\n");
- for (stream_selector = 0; stream_selector <= 0xF; stream_selector++) {
- reg = 0xD0000001; /* enable stream test bus */
- reg |= stream_selector << 16;
- qcom_ice_writel(ice_dev, reg, QCOM_ICE_REGS_TEST_BUS_CONTROL);
- /*
- * make sure test bus selector is written before reading
- * the test bus register
- */
- mb();
- val = qcom_ice_readl(ice_dev, QCOM_ICE_REGS_TEST_BUS_REG);
- pr_err("ICE_TEST_BUS_CONTROL: 0x%08x | ICE_TEST_BUS_REG: 0x%08x\n",
- reg, val);
- }
-}
-
-static void qcom_ice_debug(struct platform_device *pdev)
-{
- struct ice_device *ice_dev;
-
- if (!pdev) {
- pr_err("%s: Invalid params passed\n", __func__);
- goto out;
- }
-
- ice_dev = platform_get_drvdata(pdev);
-
- if (!ice_dev) {
- pr_err("%s: No ICE device available\n", __func__);
- goto out;
- }
-
- if (!ice_dev->is_ice_enabled) {
- pr_err("%s: ICE device is not enabled\n", __func__);
- goto out;
- }
-
- pr_err("%s: =========== REGISTER DUMP (%pK)===========\n",
- ice_dev->ice_instance_type, ice_dev);
-
- pr_err("%s: ICE Control: 0x%08x | ICE Reset: 0x%08x\n",
- ice_dev->ice_instance_type,
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_CONTROL),
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_RESET));
-
- pr_err("%s: ICE Version: 0x%08x | ICE FUSE: 0x%08x\n",
- ice_dev->ice_instance_type,
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_VERSION),
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_FUSE_SETTING));
-
- pr_err("%s: ICE Param1: 0x%08x | ICE Param2: 0x%08x\n",
- ice_dev->ice_instance_type,
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_PARAMETERS_1),
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_PARAMETERS_2));
-
- pr_err("%s: ICE Param3: 0x%08x | ICE Param4: 0x%08x\n",
- ice_dev->ice_instance_type,
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_PARAMETERS_3),
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_PARAMETERS_4));
-
- pr_err("%s: ICE Param5: 0x%08x | ICE IRQ STTS: 0x%08x\n",
- ice_dev->ice_instance_type,
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_PARAMETERS_5),
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_NON_SEC_IRQ_STTS));
-
- pr_err("%s: ICE IRQ MASK: 0x%08x | ICE IRQ CLR: 0x%08x\n",
- ice_dev->ice_instance_type,
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_NON_SEC_IRQ_MASK),
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_NON_SEC_IRQ_CLR));
-
- if (ICE_REV(ice_dev->ice_hw_version, MAJOR) > 2) {
- pr_err("%s: ICE INVALID CCFG ERR STTS: 0x%08x\n",
- ice_dev->ice_instance_type,
- qcom_ice_readl(ice_dev,
- QCOM_ICE_INVALID_CCFG_ERR_STTS));
- }
-
- if ((ICE_REV(ice_dev->ice_hw_version, MAJOR) > 2) ||
- ((ICE_REV(ice_dev->ice_hw_version, MAJOR) == 2) &&
- (ICE_REV(ice_dev->ice_hw_version, MINOR) >= 1))) {
- pr_err("%s: ICE BIST Sts: 0x%08x | ICE Bypass Sts: 0x%08x\n",
- ice_dev->ice_instance_type,
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_BIST_STATUS),
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_BYPASS_STATUS));
- }
-
- pr_err("%s: ICE ADV CTRL: 0x%08x | ICE ENDIAN SWAP: 0x%08x\n",
- ice_dev->ice_instance_type,
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_ADVANCED_CONTROL),
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_ENDIAN_SWAP));
-
- pr_err("%s: ICE_STM1_ERR_SYND1: 0x%08x | ICE_STM1_ERR_SYND2: 0x%08x\n",
- ice_dev->ice_instance_type,
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM1_ERROR_SYNDROME1),
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM1_ERROR_SYNDROME2));
-
- pr_err("%s: ICE_STM2_ERR_SYND1: 0x%08x | ICE_STM2_ERR_SYND2: 0x%08x\n",
- ice_dev->ice_instance_type,
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM2_ERROR_SYNDROME1),
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM2_ERROR_SYNDROME2));
-
- pr_err("%s: ICE_STM1_COUNTER1: 0x%08x | ICE_STM1_COUNTER2: 0x%08x\n",
- ice_dev->ice_instance_type,
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM1_COUNTERS1),
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM1_COUNTERS2));
-
- pr_err("%s: ICE_STM1_COUNTER3: 0x%08x | ICE_STM1_COUNTER4: 0x%08x\n",
- ice_dev->ice_instance_type,
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM1_COUNTERS3),
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM1_COUNTERS4));
-
- pr_err("%s: ICE_STM2_COUNTER1: 0x%08x | ICE_STM2_COUNTER2: 0x%08x\n",
- ice_dev->ice_instance_type,
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM2_COUNTERS1),
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM2_COUNTERS2));
-
- pr_err("%s: ICE_STM2_COUNTER3: 0x%08x | ICE_STM2_COUNTER4: 0x%08x\n",
- ice_dev->ice_instance_type,
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM2_COUNTERS3),
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM2_COUNTERS4));
-
- pr_err("%s: ICE_STM1_CTR5_MSB: 0x%08x | ICE_STM1_CTR5_LSB: 0x%08x\n",
- ice_dev->ice_instance_type,
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM1_COUNTERS5_MSB),
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM1_COUNTERS5_LSB));
-
- pr_err("%s: ICE_STM1_CTR6_MSB: 0x%08x | ICE_STM1_CTR6_LSB: 0x%08x\n",
- ice_dev->ice_instance_type,
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM1_COUNTERS6_MSB),
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM1_COUNTERS6_LSB));
-
- pr_err("%s: ICE_STM1_CTR7_MSB: 0x%08x | ICE_STM1_CTR7_LSB: 0x%08x\n",
- ice_dev->ice_instance_type,
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM1_COUNTERS7_MSB),
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM1_COUNTERS7_LSB));
-
- pr_err("%s: ICE_STM1_CTR8_MSB: 0x%08x | ICE_STM1_CTR8_LSB: 0x%08x\n",
- ice_dev->ice_instance_type,
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM1_COUNTERS8_MSB),
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM1_COUNTERS8_LSB));
-
- pr_err("%s: ICE_STM1_CTR9_MSB: 0x%08x | ICE_STM1_CTR9_LSB: 0x%08x\n",
- ice_dev->ice_instance_type,
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM1_COUNTERS9_MSB),
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM1_COUNTERS9_LSB));
-
- pr_err("%s: ICE_STM2_CTR5_MSB: 0x%08x | ICE_STM2_CTR5_LSB: 0x%08x\n",
- ice_dev->ice_instance_type,
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM2_COUNTERS5_MSB),
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM2_COUNTERS5_LSB));
-
- pr_err("%s: ICE_STM2_CTR6_MSB: 0x%08x | ICE_STM2_CTR6_LSB: 0x%08x\n",
- ice_dev->ice_instance_type,
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM2_COUNTERS6_MSB),
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM2_COUNTERS6_LSB));
-
- pr_err("%s: ICE_STM2_CTR7_MSB: 0x%08x | ICE_STM2_CTR7_LSB: 0x%08x\n",
- ice_dev->ice_instance_type,
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM2_COUNTERS7_MSB),
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM2_COUNTERS7_LSB));
-
- pr_err("%s: ICE_STM2_CTR8_MSB: 0x%08x | ICE_STM2_CTR8_LSB: 0x%08x\n",
- ice_dev->ice_instance_type,
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM2_COUNTERS8_MSB),
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM2_COUNTERS8_LSB));
-
- pr_err("%s: ICE_STM2_CTR9_MSB: 0x%08x | ICE_STM2_CTR9_LSB: 0x%08x\n",
- ice_dev->ice_instance_type,
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM2_COUNTERS9_MSB),
- qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM2_COUNTERS9_LSB));
-
- qcom_ice_dump_test_bus(ice_dev);
- pr_err("%s: ICE reset start time: %llu ICE reset done time: %llu\n",
- ice_dev->ice_instance_type,
- (unsigned long long)ice_dev->ice_reset_start_time,
- (unsigned long long)ice_dev->ice_reset_complete_time);
-
- if (ktime_to_us(ktime_sub(ice_dev->ice_reset_complete_time,
- ice_dev->ice_reset_start_time)) > 0)
- pr_err("%s: Time taken for reset: %lu\n",
- ice_dev->ice_instance_type,
- (unsigned long)ktime_to_us(ktime_sub(
- ice_dev->ice_reset_complete_time,
- ice_dev->ice_reset_start_time)));
-out:
- return;
-}
-
-static int qcom_ice_reset(struct platform_device *pdev)
-{
- struct ice_device *ice_dev;
-
- ice_dev = platform_get_drvdata(pdev);
- if (!ice_dev) {
- pr_err("%s: INVALID ice_dev\n", __func__);
- return -EINVAL;
- }
-
- ice_dev->ice_reset_start_time = ktime_get();
-
- return qcom_ice_finish_power_collapse(ice_dev);
-}
-
-static int qcom_ice_config_start(struct platform_device *pdev,
- struct request *req,
- struct ice_data_setting *setting, bool async)
-{
- struct ice_crypto_setting pfk_crypto_data = {0};
- struct ice_crypto_setting ice_data = {0};
- int ret = 0;
- bool is_pfe = false;
- unsigned long sec_end = 0;
- sector_t data_size;
- struct ice_device *ice_dev;
-
- if (!pdev || !req) {
- pr_err("%s: Invalid params passed\n", __func__);
- return -EINVAL;
- }
- ice_dev = platform_get_drvdata(pdev);
- if (!ice_dev) {
- pr_debug("%s no ICE device\n", __func__);
- /* make the caller finish peacefully */
- return 0;
- }
-
-	/*
-	 * It is not an error to have a request with no bio.
-	 * Such requests must bypass ICE, so first set bypass and then
-	 * return if no bio is available in the request.
-	 */
- if (setting) {
- setting->encr_bypass = true;
- setting->decr_bypass = true;
- }
-
- if (!req->bio) {
- /* It is not an error to have a request with no bio */
- return 0;
- }
-
- if (atomic_read(&ice_dev->is_ice_suspended) == 1)
- return -EINVAL;
-
- if (async)
- atomic_set(&ice_dev->is_ice_busy, 1);
-
- ret = pfk_load_key_start(req->bio, ice_dev, &pfk_crypto_data,
- &is_pfe, async);
-
- if (async) {
- atomic_set(&ice_dev->is_ice_busy, 0);
- wake_up_interruptible(&ice_dev->block_suspend_ice_queue);
- }
-
- if (is_pfe) {
- if (ret) {
- if (ret != -EBUSY && ret != -EAGAIN)
- pr_err("%s error %d while configuring ice key for PFE\n",
- __func__, ret);
- return ret;
- }
-
- return qti_ice_setting_config(req, ice_dev,
- &pfk_crypto_data, setting, ICE_CRYPTO_CXT_FBE);
- }
-
- if (ice_fde_flag && req->part && req->part->info
- && req->part->info->volname[0]) {
- if (!strcmp(req->part->info->volname, "userdata")) {
- sec_end = req->part->start_sect + req->part->nr_sects -
- QCOM_UD_FOOTER_SECS;
- if ((req->__sector >= req->part->start_sect) &&
- (req->__sector < sec_end)) {
-				/*
-				 * Ugly hack to address the non-block-size-
-				 * aligned userdata end address on eMMC based
-				 * devices. On eMMC the sector and block sizes
-				 * differ (512 B vs e.g. 4K), so the partition
-				 * may not be a multiple of the block size. On
-				 * UFS based devices they are the same. Hence
-				 * ensure that the data is within the userdata
-				 * partition using a sector based calculation
-				 * (see the worked example after this function).
-				 */
- data_size = req->__data_len /
- QCOM_SECT_LEN_IN_BYTE;
-
- if ((req->__sector + data_size) > sec_end)
- return 0;
- else
- return qti_ice_setting_config(req,
- ice_dev, &ice_data, setting,
- ICE_CRYPTO_CXT_FDE);
- }
- }
- }
-
-	/*
-	 * It is not an error. If the target is not req-crypt based, all
-	 * requests from the storage driver come here to check whether
-	 * any ICE setting is required.
-	 */
- return 0;
-}
-EXPORT_SYMBOL(qcom_ice_config_start);
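
A worked example of the sector arithmetic above, with purely illustrative numbers (the real footer size comes from QCOM_UD_FOOTER_SECS, which is defined outside this excerpt): with QCOM_SECT_LEN_IN_BYTE = 512, a userdata partition starting at sector 1,000,000 with 1,000,000 sectors, and a footer of 32,768 sectors, sec_end = 1,000,000 + 1,000,000 - 32,768 = 1,967,232. A 64 KiB request (65,536 / 512 = 128 sectors) starting at sector 1,960,000 ends at 1,960,128, below sec_end, so it is routed through ICE with the FDE context; the same request starting at sector 1,967,200 would end at 1,967,328, past sec_end, so the function returns 0 and the request stays in bypass.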
-
-static int qcom_ice_config_end(struct platform_device *pdev,
- struct request *req)
-{
- int ret = 0;
- bool is_pfe = false;
- struct ice_device *ice_dev;
-
- if (!req || !pdev) {
- pr_err("%s: Invalid params passed\n", __func__);
- return -EINVAL;
- }
-
- if (!req->bio) {
- /* It is not an error to have a request with no bio */
- return 0;
- }
-
- ice_dev = platform_get_drvdata(pdev);
- if (!ice_dev) {
- pr_debug("%s no ICE device\n", __func__);
- /* make the caller finish peacefully */
- return 0;
- }
-
- ret = pfk_load_key_end(req->bio, ice_dev, &is_pfe);
- if (is_pfe) {
- if (ret != 0)
- pr_err("%s error %d while end configuring ice key for PFE\n",
- __func__, ret);
- return ret;
- }
-
-
- return 0;
-}
-EXPORT_SYMBOL(qcom_ice_config_end);
-
-
-static int qcom_ice_status(struct platform_device *pdev)
-{
- struct ice_device *ice_dev;
- unsigned int test_bus_reg_status;
-
- if (!pdev) {
- pr_err("%s: Invalid params passed\n", __func__);
- return -EINVAL;
- }
-
- ice_dev = platform_get_drvdata(pdev);
-
- if (!ice_dev)
- return -ENODEV;
-
- if (!ice_dev->is_ice_enabled)
- return -ENODEV;
-
- test_bus_reg_status = qcom_ice_readl(ice_dev,
- QCOM_ICE_REGS_TEST_BUS_REG);
-
- return !!(test_bus_reg_status & QCOM_ICE_TEST_BUS_REG_NON_SECURE_INTR);
-
-}
-
-struct qcom_ice_variant_ops qcom_ice_ops = {
- .name = "qcom",
- .init = qcom_ice_init,
- .reset = qcom_ice_reset,
- .resume = qcom_ice_resume,
- .suspend = qcom_ice_suspend,
- .config_start = qcom_ice_config_start,
- .config_end = qcom_ice_config_end,
- .status = qcom_ice_status,
- .debug = qcom_ice_debug,
-};
-
-struct platform_device *qcom_ice_get_pdevice(struct device_node *node)
-{
- struct platform_device *ice_pdev = NULL;
- struct ice_device *ice_dev = NULL;
-
- if (!node) {
- pr_err("%s: invalid node %pK\n", __func__, node);
- goto out;
- }
-
- if (!of_device_is_available(node)) {
- pr_err("%s: device unavailable\n", __func__);
- goto out;
- }
-
- if (list_empty(&ice_devices)) {
- pr_err("%s: invalid device list\n", __func__);
- ice_pdev = ERR_PTR(-EPROBE_DEFER);
- goto out;
- }
-
- list_for_each_entry(ice_dev, &ice_devices, list) {
- if (ice_dev->pdev->of_node == node) {
- pr_info("%s: found ice device %pK\n", __func__,
- ice_dev);
- ice_pdev = to_platform_device(ice_dev->pdev);
- break;
- }
- }
-
- if (ice_pdev)
- pr_info("%s: matching platform device %pK\n", __func__,
- ice_pdev);
-out:
- return ice_pdev;
-}
-
-static struct ice_device *get_ice_device_from_storage_type
- (const char *storage_type)
-{
- struct ice_device *ice_dev = NULL;
-
- if (list_empty(&ice_devices)) {
- pr_err("%s: invalid device list\n", __func__);
- ice_dev = ERR_PTR(-EPROBE_DEFER);
- goto out;
- }
-
- list_for_each_entry(ice_dev, &ice_devices, list) {
- if (!strcmp(ice_dev->ice_instance_type, storage_type)) {
- pr_debug("%s: ice device %pK\n", __func__, ice_dev);
- return ice_dev;
- }
- }
-out:
- return NULL;
-}
-
-int enable_ice_setup(struct ice_device *ice_dev)
-{
- int ret = -1, vote;
-
- /* Setup Regulator */
- if (ice_dev->is_regulator_available) {
- if (qcom_ice_get_vreg(ice_dev)) {
- pr_err("%s: Could not get regulator\n", __func__);
- goto out;
- }
- ret = regulator_enable(ice_dev->reg);
- if (ret) {
- pr_err("%s:%pK: Could not enable regulator\n",
- __func__, ice_dev);
- goto out;
- }
- }
-
- /* Setup Clocks */
- if (qcom_ice_enable_clocks(ice_dev, true)) {
- pr_err("%s:%pK:%s Could not enable clocks\n", __func__,
- ice_dev, ice_dev->ice_instance_type);
- goto out_reg;
- }
-
- /* Setup Bus Vote */
- vote = qcom_ice_get_bus_vote(ice_dev, "MAX");
- if (vote < 0)
- goto out_clocks;
-
- ret = qcom_ice_set_bus_vote(ice_dev, vote);
- if (ret) {
- pr_err("%s:%pK: failed %d\n", __func__, ice_dev, ret);
- goto out_clocks;
- }
-
- return ret;
-
-out_clocks:
- qcom_ice_enable_clocks(ice_dev, false);
-out_reg:
- if (ice_dev->is_regulator_available) {
- if (qcom_ice_get_vreg(ice_dev)) {
- pr_err("%s: Could not get regulator\n", __func__);
- goto out;
- }
- ret = regulator_disable(ice_dev->reg);
- if (ret) {
- pr_err("%s:%pK: Could not disable regulator\n",
- __func__, ice_dev);
- goto out;
- }
- }
-out:
- return ret;
-}
-
-int disable_ice_setup(struct ice_device *ice_dev)
-{
- int ret = -1, vote;
-
- /* Setup Bus Vote */
- vote = qcom_ice_get_bus_vote(ice_dev, "MIN");
- if (vote < 0) {
- pr_err("%s:%pK: Unable to get bus vote\n", __func__, ice_dev);
- goto out_disable_clocks;
- }
-
- ret = qcom_ice_set_bus_vote(ice_dev, vote);
- if (ret)
- pr_err("%s:%pK: failed %d\n", __func__, ice_dev, ret);
-
-out_disable_clocks:
-
- /* Setup Clocks */
- if (qcom_ice_enable_clocks(ice_dev, false))
- pr_err("%s:%pK:%s Could not disable clocks\n", __func__,
- ice_dev, ice_dev->ice_instance_type);
-
- /* Setup Regulator */
- if (ice_dev->is_regulator_available) {
- if (qcom_ice_get_vreg(ice_dev)) {
- pr_err("%s: Could not get regulator\n", __func__);
- goto out;
- }
- ret = regulator_disable(ice_dev->reg);
- if (ret) {
- pr_err("%s:%pK: Could not disable regulator\n",
- __func__, ice_dev);
- goto out;
- }
- }
-out:
- return ret;
-}
-
-int qcom_ice_setup_ice_hw(const char *storage_type, int enable)
-{
- int ret = -1;
- struct ice_device *ice_dev = NULL;
-
- ice_dev = get_ice_device_from_storage_type(storage_type);
- if (ice_dev == ERR_PTR(-EPROBE_DEFER))
- return -EPROBE_DEFER;
-
- if (!ice_dev || !(ice_dev->is_ice_enabled))
- return ret;
-
- if (enable)
- return enable_ice_setup(ice_dev);
- else
- return disable_ice_setup(ice_dev);
-}
-
-struct list_head *get_ice_dev_list(void)
-{
- return &ice_devices;
-}
-
-struct qcom_ice_variant_ops *qcom_ice_get_variant_ops(struct device_node *node)
-{
- return &qcom_ice_ops;
-}
-EXPORT_SYMBOL(qcom_ice_get_variant_ops);
-
-/* The following table is used to match the device with the driver from the DT */
-static const struct of_device_id qcom_ice_match[] = {
- { .compatible = "qcom,ice" },
- {},
-};
-MODULE_DEVICE_TABLE(of, qcom_ice_match);
-
-static struct platform_driver qcom_ice_driver = {
- .probe = qcom_ice_probe,
- .remove = qcom_ice_remove,
- .driver = {
- .name = "qcom_ice",
- .of_match_table = qcom_ice_match,
- },
-};
-module_platform_driver(qcom_ice_driver);
-
-MODULE_LICENSE("GPL v2");
-MODULE_DESCRIPTION("QTI Inline Crypto Engine driver");
diff --git a/drivers/crypto/msm/iceregs.h b/drivers/crypto/msm/iceregs.h
deleted file mode 100644
index c3b5718..0000000
--- a/drivers/crypto/msm/iceregs.h
+++ /dev/null
@@ -1,151 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Copyright (c) 2014-2019, The Linux Foundation. All rights reserved.
- */
-
-#ifndef _QCOM_INLINE_CRYPTO_ENGINE_REGS_H_
-#define _QCOM_INLINE_CRYPTO_ENGINE_REGS_H_
-
-/* Register bits for ICE version */
-#define ICE_CORE_CURRENT_MAJOR_VERSION 0x03
-
-#define ICE_CORE_STEP_REV_MASK 0xFFFF
-#define ICE_CORE_STEP_REV 0 /* bit 15-0 */
-#define ICE_CORE_MAJOR_REV_MASK 0xFF000000
-#define ICE_CORE_MAJOR_REV 24 /* bit 31-24 */
-#define ICE_CORE_MINOR_REV_MASK 0xFF0000
-#define ICE_CORE_MINOR_REV 16 /* bit 23-16 */
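
These mask/shift pairs describe the VERSION register layout: major revision in bits 31-24, minor in bits 23-16, step in bits 15-0. The ICE_REV() macro used throughout the driver is defined in a header that is not part of this diff; the accessors below are only a sketch of how the fields could be extracted from these definitions, not the driver's actual macro:

/* Illustrative only - not the driver's real ICE_REV() definition */
#define ICE_VER_MAJOR(v) (((v) & ICE_CORE_MAJOR_REV_MASK) >> ICE_CORE_MAJOR_REV)
#define ICE_VER_MINOR(v) (((v) & ICE_CORE_MINOR_REV_MASK) >> ICE_CORE_MINOR_REV)
#define ICE_VER_STEP(v)  (((v) & ICE_CORE_STEP_REV_MASK) >> ICE_CORE_STEP_REV)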
-
-#define ICE_BIST_STATUS_MASK (0xF0000000) /* bits 28-31 */
-
-#define ICE_FUSE_SETTING_MASK 0x1
-#define ICE_FORCE_HW_KEY0_SETTING_MASK 0x2
-#define ICE_FORCE_HW_KEY1_SETTING_MASK 0x4
-
-/* QCOM ICE Registers from SWI */
-#define QCOM_ICE_REGS_CONTROL 0x0000
-#define QCOM_ICE_REGS_RESET 0x0004
-#define QCOM_ICE_REGS_VERSION 0x0008
-#define QCOM_ICE_REGS_FUSE_SETTING 0x0010
-#define QCOM_ICE_REGS_PARAMETERS_1 0x0014
-#define QCOM_ICE_REGS_PARAMETERS_2 0x0018
-#define QCOM_ICE_REGS_PARAMETERS_3 0x001C
-#define QCOM_ICE_REGS_PARAMETERS_4 0x0020
-#define QCOM_ICE_REGS_PARAMETERS_5 0x0024
-
-
-/* QCOM ICE v3.X only */
-#define QCOM_ICE_GENERAL_ERR_STTS 0x0040
-#define QCOM_ICE_INVALID_CCFG_ERR_STTS 0x0030
-#define QCOM_ICE_GENERAL_ERR_MASK 0x0044
-
-
-/* QCOM ICE v2.X only */
-#define QCOM_ICE_REGS_NON_SEC_IRQ_STTS 0x0040
-#define QCOM_ICE_REGS_NON_SEC_IRQ_MASK 0x0044
-
-
-#define QCOM_ICE_REGS_NON_SEC_IRQ_CLR 0x0048
-#define QCOM_ICE_REGS_STREAM1_ERROR_SYNDROME1 0x0050
-#define QCOM_ICE_REGS_STREAM1_ERROR_SYNDROME2 0x0054
-#define QCOM_ICE_REGS_STREAM2_ERROR_SYNDROME1 0x0058
-#define QCOM_ICE_REGS_STREAM2_ERROR_SYNDROME2 0x005C
-#define QCOM_ICE_REGS_STREAM1_BIST_ERROR_VEC 0x0060
-#define QCOM_ICE_REGS_STREAM2_BIST_ERROR_VEC 0x0064
-#define QCOM_ICE_REGS_STREAM1_BIST_FINISH_VEC 0x0068
-#define QCOM_ICE_REGS_STREAM2_BIST_FINISH_VEC 0x006C
-#define QCOM_ICE_REGS_BIST_STATUS 0x0070
-#define QCOM_ICE_REGS_BYPASS_STATUS 0x0074
-#define QCOM_ICE_REGS_ADVANCED_CONTROL 0x1000
-#define QCOM_ICE_REGS_ENDIAN_SWAP 0x1004
-#define QCOM_ICE_REGS_TEST_BUS_CONTROL 0x1010
-#define QCOM_ICE_REGS_TEST_BUS_REG 0x1014
-#define QCOM_ICE_REGS_STREAM1_COUNTERS1 0x1100
-#define QCOM_ICE_REGS_STREAM1_COUNTERS2 0x1104
-#define QCOM_ICE_REGS_STREAM1_COUNTERS3 0x1108
-#define QCOM_ICE_REGS_STREAM1_COUNTERS4 0x110C
-#define QCOM_ICE_REGS_STREAM1_COUNTERS5_MSB 0x1110
-#define QCOM_ICE_REGS_STREAM1_COUNTERS5_LSB 0x1114
-#define QCOM_ICE_REGS_STREAM1_COUNTERS6_MSB 0x1118
-#define QCOM_ICE_REGS_STREAM1_COUNTERS6_LSB 0x111C
-#define QCOM_ICE_REGS_STREAM1_COUNTERS7_MSB 0x1120
-#define QCOM_ICE_REGS_STREAM1_COUNTERS7_LSB 0x1124
-#define QCOM_ICE_REGS_STREAM1_COUNTERS8_MSB 0x1128
-#define QCOM_ICE_REGS_STREAM1_COUNTERS8_LSB 0x112C
-#define QCOM_ICE_REGS_STREAM1_COUNTERS9_MSB 0x1130
-#define QCOM_ICE_REGS_STREAM1_COUNTERS9_LSB 0x1134
-#define QCOM_ICE_REGS_STREAM2_COUNTERS1 0x1200
-#define QCOM_ICE_REGS_STREAM2_COUNTERS2 0x1204
-#define QCOM_ICE_REGS_STREAM2_COUNTERS3 0x1208
-#define QCOM_ICE_REGS_STREAM2_COUNTERS4 0x120C
-#define QCOM_ICE_REGS_STREAM2_COUNTERS5_MSB 0x1210
-#define QCOM_ICE_REGS_STREAM2_COUNTERS5_LSB 0x1214
-#define QCOM_ICE_REGS_STREAM2_COUNTERS6_MSB 0x1218
-#define QCOM_ICE_REGS_STREAM2_COUNTERS6_LSB 0x121C
-#define QCOM_ICE_REGS_STREAM2_COUNTERS7_MSB 0x1220
-#define QCOM_ICE_REGS_STREAM2_COUNTERS7_LSB 0x1224
-#define QCOM_ICE_REGS_STREAM2_COUNTERS8_MSB 0x1228
-#define QCOM_ICE_REGS_STREAM2_COUNTERS8_LSB 0x122C
-#define QCOM_ICE_REGS_STREAM2_COUNTERS9_MSB 0x1230
-#define QCOM_ICE_REGS_STREAM2_COUNTERS9_LSB 0x1234
-
-#define QCOM_ICE_STREAM1_PREMATURE_LBA_CHANGE (1L << 0)
-#define QCOM_ICE_STREAM2_PREMATURE_LBA_CHANGE (1L << 1)
-#define QCOM_ICE_STREAM1_NOT_EXPECTED_LBO (1L << 2)
-#define QCOM_ICE_STREAM2_NOT_EXPECTED_LBO (1L << 3)
-#define QCOM_ICE_STREAM1_NOT_EXPECTED_DUN (1L << 4)
-#define QCOM_ICE_STREAM2_NOT_EXPECTED_DUN (1L << 5)
-#define QCOM_ICE_STREAM1_NOT_EXPECTED_DUS (1L << 6)
-#define QCOM_ICE_STREAM2_NOT_EXPECTED_DUS (1L << 7)
-#define QCOM_ICE_STREAM1_NOT_EXPECTED_DBO (1L << 8)
-#define QCOM_ICE_STREAM2_NOT_EXPECTED_DBO (1L << 9)
-#define QCOM_ICE_STREAM1_NOT_EXPECTED_ENC_SEL (1L << 10)
-#define QCOM_ICE_STREAM2_NOT_EXPECTED_ENC_SEL (1L << 11)
-#define QCOM_ICE_STREAM1_NOT_EXPECTED_CONF_IDX (1L << 12)
-#define QCOM_ICE_STREAM2_NOT_EXPECTED_CONF_IDX (1L << 13)
-#define QCOM_ICE_STREAM1_NOT_EXPECTED_NEW_TRNS (1L << 14)
-#define QCOM_ICE_STREAM2_NOT_EXPECTED_NEW_TRNS (1L << 15)
-
-#define QCOM_ICE_NON_SEC_IRQ_MASK \
- (QCOM_ICE_STREAM1_PREMATURE_LBA_CHANGE |\
- QCOM_ICE_STREAM2_PREMATURE_LBA_CHANGE |\
- QCOM_ICE_STREAM1_NOT_EXPECTED_LBO |\
- QCOM_ICE_STREAM2_NOT_EXPECTED_LBO |\
- QCOM_ICE_STREAM1_NOT_EXPECTED_DUN |\
- QCOM_ICE_STREAM2_NOT_EXPECTED_DUN |\
- QCOM_ICE_STREAM2_NOT_EXPECTED_DUS |\
- QCOM_ICE_STREAM1_NOT_EXPECTED_DBO |\
- QCOM_ICE_STREAM2_NOT_EXPECTED_DBO |\
- QCOM_ICE_STREAM1_NOT_EXPECTED_ENC_SEL |\
- QCOM_ICE_STREAM2_NOT_EXPECTED_ENC_SEL |\
- QCOM_ICE_STREAM1_NOT_EXPECTED_CONF_IDX |\
- QCOM_ICE_STREAM1_NOT_EXPECTED_NEW_TRNS |\
- QCOM_ICE_STREAM2_NOT_EXPECTED_NEW_TRNS)
-
-/* QCOM ICE registers from secure side */
-#define QCOM_ICE_TEST_BUS_REG_SECURE_INTR (1L << 28)
-#define QCOM_ICE_TEST_BUS_REG_NON_SECURE_INTR (1L << 2)
-
-#define QCOM_ICE_LUT_KEYS_ICE_SEC_IRQ_STTS 0x2050
-#define QCOM_ICE_LUT_KEYS_ICE_SEC_IRQ_MASK 0x2054
-#define QCOM_ICE_LUT_KEYS_ICE_SEC_IRQ_CLR 0x2058
-
-#define QCOM_ICE_STREAM1_PARTIALLY_SET_KEY_USED (1L << 0)
-#define QCOM_ICE_STREAM2_PARTIALLY_SET_KEY_USED (1L << 1)
-#define QCOM_ICE_QCOMC_DBG_OPEN_EVENT (1L << 30)
-#define QCOM_ICE_KEYS_RAM_RESET_COMPLETED (1L << 31)
-
-#define QCOM_ICE_SEC_IRQ_MASK \
- (QCOM_ICE_STREAM1_PARTIALLY_SET_KEY_USED |\
- QCOM_ICE_STREAM2_PARTIALLY_SET_KEY_USED |\
- QCOM_ICE_QCOMC_DBG_OPEN_EVENT | \
- QCOM_ICE_KEYS_RAM_RESET_COMPLETED)
-
-
-#define qcom_ice_writel(ice, val, reg) \
- writel_relaxed((val), (ice)->mmio + (reg))
-#define qcom_ice_readl(ice, reg) \
- readl_relaxed((ice)->mmio + (reg))
-
-
-#endif /* _QCOM_INLINE_CRYPTO_ENGINE_REGS_H_ */
diff --git a/drivers/crypto/msm/qcedev.c b/drivers/crypto/msm/qcedev.c
index f8a29ae3..812ba67 100644
--- a/drivers/crypto/msm/qcedev.c
+++ b/drivers/crypto/msm/qcedev.c
@@ -2,7 +2,7 @@
/*
* QTI CE device driver.
*
- * Copyright (c) 2010-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2010-2020, The Linux Foundation. All rights reserved.
*/
#include <linux/mman.h>
@@ -259,8 +259,6 @@ static int qcedev_open(struct inode *inode, struct file *file)
handle->cntl = podev;
file->private_data = handle;
- if (podev->platform_support.bus_scale_table != NULL)
- qcedev_ce_high_bw_req(podev, true);
mutex_init(&handle->registeredbufs.lock);
INIT_LIST_HEAD(&handle->registeredbufs.list);
@@ -284,8 +282,6 @@ static int qcedev_release(struct inode *inode, struct file *file)
kzfree(handle);
file->private_data = NULL;
- if (podev != NULL && podev->platform_support.bus_scale_table != NULL)
- qcedev_ce_high_bw_req(podev, false);
return 0;
}
@@ -1711,6 +1707,11 @@ static inline long qcedev_ioctl(struct file *file,
init_completion(&qcedev_areq->complete);
pstat = &_qcedev_stat;
+ if (podev->platform_support.bus_scale_table != NULL &&
+ cmd != QCEDEV_IOCTL_MAP_BUF_REQ &&
+ cmd != QCEDEV_IOCTL_UNMAP_BUF_REQ)
+ qcedev_ce_high_bw_req(podev, true);
+
switch (cmd) {
case QCEDEV_IOCTL_ENC_REQ:
case QCEDEV_IOCTL_DEC_REQ:
@@ -1935,6 +1936,11 @@ static inline long qcedev_ioctl(struct file *file,
goto exit_free_qcedev_areq;
}
+ if (map_buf.num_fds > QCEDEV_MAX_BUFFERS) {
+ err = -EINVAL;
+ goto exit_free_qcedev_areq;
+ }
+
for (i = 0; i < map_buf.num_fds; i++) {
err = qcedev_check_and_map_buffer(handle,
map_buf.fd[i],
@@ -1991,6 +1997,10 @@ static inline long qcedev_ioctl(struct file *file,
}
exit_free_qcedev_areq:
+ if (podev->platform_support.bus_scale_table != NULL &&
+ cmd != QCEDEV_IOCTL_MAP_BUF_REQ &&
+ cmd != QCEDEV_IOCTL_UNMAP_BUF_REQ)
+ qcedev_ce_high_bw_req(podev, false);
kfree(qcedev_areq);
return err;
}
@@ -2321,11 +2331,8 @@ static int _qcedev_debug_init(void)
static int qcedev_init(void)
{
- int rc;
+ _qcedev_debug_init();
- rc = _qcedev_debug_init();
- if (rc)
- return rc;
return platform_driver_register(&qcedev_plat_driver);
}
diff --git a/drivers/crypto/msm/qcrypto.c b/drivers/crypto/msm/qcrypto.c
index d00c6f5..6a8e0d2 100644
--- a/drivers/crypto/msm/qcrypto.c
+++ b/drivers/crypto/msm/qcrypto.c
@@ -2,7 +2,7 @@
/*
* QTI Crypto driver
*
- * Copyright (c) 2010-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2010-2020, The Linux Foundation. All rights reserved.
*/
#include <linux/module.h>
@@ -5546,12 +5546,9 @@ static int _qcrypto_debug_init(void)
static int __init _qcrypto_init(void)
{
- int rc;
struct crypto_priv *pcp = &qcrypto_dev;
- rc = _qcrypto_debug_init();
- if (rc)
- return rc;
+ _qcrypto_debug_init();
INIT_LIST_HEAD(&pcp->alg_list);
INIT_LIST_HEAD(&pcp->engine_list);
init_llist_head(&pcp->ordered_resp_list);
diff --git a/drivers/devfreq/governor_bw_hwmon.c b/drivers/devfreq/governor_bw_hwmon.c
index f20b6cf..1076d3c 100644
--- a/drivers/devfreq/governor_bw_hwmon.c
+++ b/drivers/devfreq/governor_bw_hwmon.c
@@ -713,8 +713,11 @@ static int devfreq_bw_hwmon_get_freq(struct devfreq *df,
{
struct hwmon_node *node = df->data;
+ if (!node)
+ return -EINVAL;
+
/* Suspend/resume sequence */
- if ((node && !node->mon_started) || df->dev_suspended) {
+ if (!node->mon_started || df->dev_suspended) {
*freq = node->resume_freq;
*node->dev_ab = node->resume_ab;
return 0;
diff --git a/drivers/devfreq/governor_msm_adreno_tz.c b/drivers/devfreq/governor_msm_adreno_tz.c
index 39a58ac..3cd103b 100644
--- a/drivers/devfreq/governor_msm_adreno_tz.c
+++ b/drivers/devfreq/governor_msm_adreno_tz.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
- * Copyright (c) 2010-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2010-2020, The Linux Foundation. All rights reserved.
*/
#include <linux/errno.h>
#include <linux/module.h>
@@ -470,11 +470,14 @@ static int tz_start(struct devfreq *devfreq)
unsigned int tz_pwrlevels[MSM_ADRENO_MAX_PWRLEVELS + 1];
int i, out, ret;
unsigned int version;
+ struct msm_adreno_extended_profile *gpu_profile;
- struct msm_adreno_extended_profile *gpu_profile = container_of(
- (devfreq->profile),
- struct msm_adreno_extended_profile,
- profile);
+ if (partner_gpu_profile)
+ return -EEXIST;
+
+ gpu_profile = container_of(devfreq->profile,
+ struct msm_adreno_extended_profile,
+ profile);
/*
* Assuming that we have only one instance of the adreno device
@@ -495,6 +498,7 @@ static int tz_start(struct devfreq *devfreq)
tz_pwrlevels[0] = i;
} else {
pr_err(TAG "tz_pwrlevels[] is too short\n");
+ partner_gpu_profile = NULL;
return -EINVAL;
}
@@ -511,6 +515,7 @@ static int tz_start(struct devfreq *devfreq)
sizeof(version));
if (ret != 0 || version > MAX_TZ_VERSION) {
pr_err(TAG "tz_init failed\n");
+ partner_gpu_profile = NULL;
return ret;
}
@@ -606,7 +611,7 @@ static int tz_handler(struct devfreq *devfreq, unsigned int event, void *data)
break;
}
- if (partner_gpu_profile && partner_gpu_profile->bus_devfreq)
+ if (!result && partner_gpu_profile && partner_gpu_profile->bus_devfreq)
switch (event) {
case DEVFREQ_GOV_START:
queue_work(workqueue,
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index 75b024c..65f5613 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -41,6 +41,7 @@
#include <linux/list_sort.h>
#include <linux/hashtable.h>
#include <linux/mount.h>
+#include <linux/dcache.h>
#include <uapi/linux/dma-buf.h>
#include <uapi/linux/magic.h>
@@ -69,18 +70,34 @@ struct dma_proc {
static struct dma_buf_list db_list;
+static void dmabuf_dent_put(struct dma_buf *dmabuf)
+{
+ if (atomic_dec_and_test(&dmabuf->dent_count)) {
+ kfree(dmabuf->name);
+ kfree(dmabuf);
+ }
+}
+
+
static char *dmabuffs_dname(struct dentry *dentry, char *buffer, int buflen)
{
struct dma_buf *dmabuf;
char name[DMA_BUF_NAME_LEN];
size_t ret = 0;
+ spin_lock(&dentry->d_lock);
dmabuf = dentry->d_fsdata;
+ if (!dmabuf || !atomic_add_unless(&dmabuf->dent_count, 1, 0)) {
+ spin_unlock(&dentry->d_lock);
+ goto out;
+ }
+ spin_unlock(&dentry->d_lock);
mutex_lock(&dmabuf->lock);
if (dmabuf->name)
ret = strlcpy(name, dmabuf->name, DMA_BUF_NAME_LEN);
mutex_unlock(&dmabuf->lock);
-
+ dmabuf_dent_put(dmabuf);
+out:
return dynamic_dname(dentry, buffer, buflen, "/%s:%s",
dentry->d_name.name, ret > 0 ? name : "");
}
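
The dent_count introduced above lets dmabuffs_dname() race safely with dma_buf_release(): the name callback reads d_fsdata under d_lock and takes a reference with atomic_add_unless(..., 1, 0), i.e. only if the count has not already dropped to zero, while release clears d_fsdata and drops the creation reference. A self-contained userspace sketch of that "get only if still alive" idiom (the names here are illustrative, not kernel APIs; in the kernel the same pattern is provided by kref_get_unless_zero()):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>

struct obj {
	atomic_int refs;	/* 1 at creation; 0 means freeing has begun */
};

/* Take a reference only if the object is still live. */
static bool obj_tryget(struct obj *o)
{
	int old = atomic_load(&o->refs);

	while (old != 0)
		if (atomic_compare_exchange_weak(&o->refs, &old, old + 1))
			return true;
	return false;
}

/* Drop a reference; free on the last put. */
static void obj_put(struct obj *o)
{
	if (atomic_fetch_sub(&o->refs, 1) == 1)
		free(o);
}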
@@ -107,6 +124,7 @@ static struct file_system_type dma_buf_fs_type = {
static int dma_buf_release(struct inode *inode, struct file *file)
{
struct dma_buf *dmabuf;
+ struct dentry *dentry = file->f_path.dentry;
int dtor_ret = 0;
if (!is_dma_buf_file(file))
@@ -114,6 +132,9 @@ static int dma_buf_release(struct inode *inode, struct file *file)
dmabuf = file->private_data;
+ spin_lock(&dentry->d_lock);
+ dentry->d_fsdata = NULL;
+ spin_unlock(&dentry->d_lock);
BUG_ON(dmabuf->vmapping_counter);
/*
@@ -145,8 +166,7 @@ static int dma_buf_release(struct inode *inode, struct file *file)
reservation_object_fini(dmabuf->resv);
module_put(dmabuf->owner);
- kfree(dmabuf->buf_name);
- kfree(dmabuf);
+ dmabuf_dent_put(dmabuf);
return 0;
}
@@ -604,6 +624,7 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)
dmabuf->cb_excl.active = dmabuf->cb_shared.active = 0;
dmabuf->buf_name = bufname;
dmabuf->ktime = ktime_get();
+ atomic_set(&dmabuf->dent_count, 1);
if (!resv) {
resv = (struct reservation_object *)&dmabuf[1];
diff --git a/drivers/dma/qcom/gpi.c b/drivers/dma/qcom/gpi.c
index 811e309..22cbca7 100644
--- a/drivers/dma/qcom/gpi.c
+++ b/drivers/dma/qcom/gpi.c
@@ -582,7 +582,7 @@ struct gpii {
struct gpi_reg_table dbg_reg_table;
bool reg_table_dump;
u32 dbg_gpi_irq_cnt;
- bool ieob_set;
+ bool unlock_tre_set;
};
struct gpi_desc {
@@ -1449,6 +1449,22 @@ static void gpi_process_qup_notif_event(struct gpii_chan *gpii_chan,
client_info->cb_param);
}
+/* free gpi_desc for the specified channel */
+static void gpi_free_chan_desc(struct gpii_chan *gpii_chan)
+{
+ struct virt_dma_desc *vd;
+ struct gpi_desc *gpi_desc;
+ unsigned long flags;
+
+ spin_lock_irqsave(&gpii_chan->vc.lock, flags);
+ vd = vchan_next_desc(&gpii_chan->vc);
+ gpi_desc = to_gpi_desc(vd);
+ list_del(&vd->node);
+ spin_unlock_irqrestore(&gpii_chan->vc.lock, flags);
+ kfree(gpi_desc);
+ gpi_desc = NULL;
+}
+
/* process DMA Immediate completion data events */
static void gpi_process_imed_data_event(struct gpii_chan *gpii_chan,
struct immediate_data_event *imed_event)
@@ -1462,6 +1478,7 @@ static void gpi_process_imed_data_event(struct gpii_chan *gpii_chan,
struct msm_gpi_dma_async_tx_cb_param *tx_cb_param;
unsigned long flags;
u32 chid;
+ struct gpii_chan *gpii_tx_chan = &gpii->gpii_chan[GPI_TX_CHAN];
/*
* If channel not active don't process event but let
@@ -1514,12 +1531,33 @@ static void gpi_process_imed_data_event(struct gpii_chan *gpii_chan,
/* make sure rp updates are immediately visible to all cores */
smp_wmb();
+	/*
+	 * If an unlock TRE is present, don't send the transfer callback on
+	 * IEOT; wait for the unlock IEOB. Free the respective channel
+	 * descriptors.
+	 * If unlock is not present, IEOB indicates freeing the descriptor
+	 * and IEOT indicates channel transfer completion.
+	 */
chid = imed_event->chid;
- if (imed_event->code == MSM_GPI_TCE_EOT && gpii->ieob_set) {
- if (chid == GPI_RX_CHAN)
- goto gpi_free_desc;
- else
+ if (gpii->unlock_tre_set) {
+ if (chid == GPI_RX_CHAN) {
+ if (imed_event->code == MSM_GPI_TCE_EOT)
+ goto gpi_free_desc;
+ else if (imed_event->code == MSM_GPI_TCE_UNEXP_ERR)
+				/*
+				 * In case of an error in a read transfer on a
+				 * shared SE, the unlock TRE will not be
+				 * processed as the channels go to a bad state,
+				 * so the TX desc must be freed manually.
+				 */
+ gpi_free_chan_desc(gpii_tx_chan);
+ else
+ return;
+ } else if (imed_event->code == MSM_GPI_TCE_EOT) {
return;
+ }
+ } else if (imed_event->code == MSM_GPI_TCE_EOB) {
+ goto gpi_free_desc;
}
tx_cb_param = vd->tx.callback_param;
@@ -1539,11 +1577,7 @@ static void gpi_process_imed_data_event(struct gpii_chan *gpii_chan,
}
gpi_free_desc:
- spin_lock_irqsave(&gpii_chan->vc.lock, flags);
- list_del(&vd->node);
- spin_unlock_irqrestore(&gpii_chan->vc.lock, flags);
- kfree(gpi_desc);
- gpi_desc = NULL;
+ gpi_free_chan_desc(gpii_chan);
}
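
Schematically, the ordering the comment above describes for a transfer on a shared serial engine with the usual lock / data / unlock TRE chain is as follows (this is a reading of the handlers above, not text from the source):

/*
 * 1. RX event, code MSM_GPI_TCE_EOT       -> free the RX gpi_desc only;
 *                                            the client callback is deferred
 * 2. TX event for the unlock TRE (IEOB)   -> run the callback, then free
 *                                            the TX gpi_desc
 *    Error: RX code MSM_GPI_TCE_UNEXP_ERR -> the unlock TRE never completes,
 *                                            so the TX gpi_desc is freed
 *                                            explicitly via
 *                                            gpi_free_chan_desc(gpii_tx_chan)
 */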
/* processing transfer completion events */
@@ -1558,6 +1592,7 @@ static void gpi_process_xfer_compl_event(struct gpii_chan *gpii_chan,
struct gpi_desc *gpi_desc;
unsigned long flags;
u32 chid;
+ struct gpii_chan *gpii_tx_chan = &gpii->gpii_chan[GPI_TX_CHAN];
/* only process events on active channel */
if (unlikely(gpii_chan->pm_state != ACTIVE_STATE)) {
@@ -1602,12 +1637,33 @@ static void gpi_process_xfer_compl_event(struct gpii_chan *gpii_chan,
/* update must be visible to other cores */
smp_wmb();
+	/*
+	 * If an unlock TRE is present, don't send the transfer callback on
+	 * IEOT; wait for the unlock IEOB. Free the respective channel
+	 * descriptors.
+	 * If unlock is not present, IEOB indicates freeing the descriptor
+	 * and IEOT indicates channel transfer completion.
+	 */
chid = compl_event->chid;
- if (compl_event->code == MSM_GPI_TCE_EOT && gpii->ieob_set) {
- if (chid == GPI_RX_CHAN)
- goto gpi_free_desc;
- else
+ if (gpii->unlock_tre_set) {
+ if (chid == GPI_RX_CHAN) {
+ if (compl_event->code == MSM_GPI_TCE_EOT)
+ goto gpi_free_desc;
+ else if (compl_event->code == MSM_GPI_TCE_UNEXP_ERR)
+				/*
+				 * In case of an error in a read transfer on a
+				 * shared SE, the unlock TRE will not be
+				 * processed as the channels go to a bad state,
+				 * so the TX desc must be freed manually.
+				 */
+ gpi_free_chan_desc(gpii_tx_chan);
+ else
+ return;
+ } else if (compl_event->code == MSM_GPI_TCE_EOT) {
return;
+ }
+ } else if (compl_event->code == MSM_GPI_TCE_EOB) {
+ goto gpi_free_desc;
}
tx_cb_param = vd->tx.callback_param;
@@ -1623,11 +1679,7 @@ static void gpi_process_xfer_compl_event(struct gpii_chan *gpii_chan,
}
gpi_free_desc:
- spin_lock_irqsave(&gpii_chan->vc.lock, flags);
- list_del(&vd->node);
- spin_unlock_irqrestore(&gpii_chan->vc.lock, flags);
- kfree(gpi_desc);
- gpi_desc = NULL;
+ gpi_free_chan_desc(gpii_chan);
}
@@ -2325,7 +2377,7 @@ struct dma_async_tx_descriptor *gpi_prep_slave_sg(struct dma_chan *chan,
void *tre, *wp = NULL;
const gfp_t gfp = GFP_ATOMIC;
struct gpi_desc *gpi_desc;
- gpii->ieob_set = false;
+ u32 tre_type;
GPII_VERB(gpii, gpii_chan->chid, "enter\n");
@@ -2362,13 +2414,12 @@ struct dma_async_tx_descriptor *gpi_prep_slave_sg(struct dma_chan *chan,
for_each_sg(sgl, sg, sg_len, i) {
tre = sg_virt(sg);
- /* Check if last tre has ieob set */
+ /* Check if last tre is an unlock tre */
if (i == sg_len - 1) {
- if ((((struct msm_gpi_tre *)tre)->dword[3] &
- GPI_IEOB_BMSK) >> GPI_IEOB_BMSK_SHIFT)
- gpii->ieob_set = true;
- else
- gpii->ieob_set = false;
+ tre_type =
+ MSM_GPI_TRE_TYPE(((struct msm_gpi_tre *)tre));
+ gpii->unlock_tre_set =
+ tre_type == MSM_GPI_TRE_UNLOCK ? true : false;
}
for (j = 0; j < sg->length;
diff --git a/drivers/edac/kryo_arm64_edac.c b/drivers/edac/kryo_arm64_edac.c
index eaa333f..f9e4ab5 100644
--- a/drivers/edac/kryo_arm64_edac.c
+++ b/drivers/edac/kryo_arm64_edac.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
- * Copyright (c) 2016-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2016-2020, The Linux Foundation. All rights reserved.
*/
#include <linux/kernel.h>
@@ -11,6 +11,7 @@
#include <linux/cpu.h>
#include <linux/cpu_pm.h>
#include <linux/interrupt.h>
+#include <linux/notifier.h>
#include <linux/of_irq.h>
#include <asm/cputype.h>
@@ -136,6 +137,7 @@ struct erp_drvdata {
struct edac_device_ctl_info *edev_ctl;
struct erp_drvdata __percpu **erp_cpu_drvdata;
struct notifier_block nb_pm;
+ struct notifier_block nb_panic;
int ppi;
};
@@ -403,6 +405,21 @@ void kryo_poll_cache_errors(struct edac_device_ctl_info *edev_ctl)
edev_ctl, 0);
}
+static int kryo_cpu_panic_notify(struct notifier_block *this,
+ unsigned long event, void *ptr)
+{
+ struct edac_device_ctl_info *edev_ctl =
+ panic_handler_drvdata->edev_ctl;
+
+ edev_ctl->panic_on_ce = 0;
+ edev_ctl->panic_on_ue = 0;
+
+ kryo_check_l3_scu_error(edev_ctl);
+ kryo_check_l1_l2_ecc(edev_ctl);
+
+ return NOTIFY_OK;
+}
+
static irqreturn_t kryo_l1_l2_handler(int irq, void *drvdata)
{
kryo_check_l1_l2_ecc(panic_handler_drvdata->edev_ctl);
@@ -488,6 +505,9 @@ static int kryo_cpu_erp_probe(struct platform_device *pdev)
drv->edev_ctl->panic_on_ce = ARM64_ERP_PANIC_ON_CE;
drv->edev_ctl->panic_on_ue = ARM64_ERP_PANIC_ON_UE;
drv->nb_pm.notifier_call = kryo_pmu_cpu_pm_notify;
+ drv->nb_panic.notifier_call = kryo_cpu_panic_notify;
+ atomic_notifier_chain_register(&panic_notifier_list,
+ &drv->nb_panic);
platform_set_drvdata(pdev, drv);
rc = edac_device_add_device(drv->edev_ctl);
diff --git a/drivers/gpu/drm/bridge/lt9611uxc.c b/drivers/gpu/drm/bridge/lt9611uxc.c
index 8c27c26..e37e770 100644
--- a/drivers/gpu/drm/bridge/lt9611uxc.c
+++ b/drivers/gpu/drm/bridge/lt9611uxc.c
@@ -125,6 +125,7 @@ struct lt9611 {
u8 i2c_wbuf[WRITE_BUF_MAX_SIZE];
u8 i2c_rbuf[READ_BUF_MAX_SIZE];
bool hdmi_mode;
+ bool hpd_support;
enum lt9611_fw_upgrade_status fw_status;
};
@@ -1302,7 +1303,7 @@ lt9611_connector_detect(struct drm_connector *connector, bool force)
struct lt9611 *pdata = connector_to_lt9611(connector);
pdata->status = connector_status_disconnected;
- if (force) {
+ if (force && pdata->hpd_support) {
lt9611_write_byte(pdata, 0xFF, 0x80);
lt9611_write_byte(pdata, 0xEE, 0x01);
lt9611_write_byte(pdata, 0xFF, 0xB0);
@@ -1668,6 +1669,7 @@ static int lt9611_probe(struct i2c_client *client,
{
struct lt9611 *pdata;
int ret = 0;
+ u8 chip_version = 0;
if (!client || !client->dev.of_node) {
pr_err("invalid input\n");
@@ -1730,8 +1732,12 @@ static int lt9611_probe(struct i2c_client *client,
goto err_i2c_prog;
}
- if (lt9611_get_version(pdata)) {
+ chip_version = lt9611_get_version(pdata);
+ pdata->hpd_support = false;
+ if (chip_version) {
pr_info("LT9611 works, no need to upgrade FW\n");
+ if (chip_version >= 0x40)
+ pdata->hpd_support = true;
} else {
ret = request_firmware_nowait(THIS_MODULE, true,
"lt9611_fw.bin", &client->dev, GFP_KERNEL, pdata,
diff --git a/drivers/gpu/msm/a6xx_reg.h b/drivers/gpu/msm/a6xx_reg.h
index abdd2e8e..67a5b4f 100644
--- a/drivers/gpu/msm/a6xx_reg.h
+++ b/drivers/gpu/msm/a6xx_reg.h
@@ -391,6 +391,38 @@
#define A6XX_RBBM_PERFCTR_RBBM_SEL_2 0x509
#define A6XX_RBBM_PERFCTR_RBBM_SEL_3 0x50A
#define A6XX_RBBM_PERFCTR_GPU_BUSY_MASKED 0x50B
+#define A6XX_RBBM_PERFCTR_MHUB_0_LO 0x512
+#define A6XX_RBBM_PERFCTR_MHUB_0_HI 0x513
+#define A6XX_RBBM_PERFCTR_MHUB_1_LO 0x514
+#define A6XX_RBBM_PERFCTR_MHUB_1_HI 0x515
+#define A6XX_RBBM_PERFCTR_MHUB_2_LO 0x516
+#define A6XX_RBBM_PERFCTR_MHUB_2_HI 0x517
+#define A6XX_RBBM_PERFCTR_MHUB_3_LO 0x518
+#define A6XX_RBBM_PERFCTR_MHUB_3_HI 0x519
+#define A6XX_RBBM_PERFCTR_FCHE_0_LO 0x51A
+#define A6XX_RBBM_PERFCTR_FCHE_0_HI 0x51B
+#define A6XX_RBBM_PERFCTR_FCHE_1_LO 0x51C
+#define A6XX_RBBM_PERFCTR_FCHE_1_HI 0x51D
+#define A6XX_RBBM_PERFCTR_FCHE_2_LO 0x51E
+#define A6XX_RBBM_PERFCTR_FCHE_2_HI 0x51F
+#define A6XX_RBBM_PERFCTR_FCHE_3_LO 0x520
+#define A6XX_RBBM_PERFCTR_FCHE_3_HI 0x521
+#define A6XX_RBBM_PERFCTR_GLC_0_LO 0x522
+#define A6XX_RBBM_PERFCTR_GLC_0_HI 0x523
+#define A6XX_RBBM_PERFCTR_GLC_1_LO 0x524
+#define A6XX_RBBM_PERFCTR_GLC_1_HI 0x525
+#define A6XX_RBBM_PERFCTR_GLC_2_LO 0x526
+#define A6XX_RBBM_PERFCTR_GLC_2_HI 0x527
+#define A6XX_RBBM_PERFCTR_GLC_3_LO 0x528
+#define A6XX_RBBM_PERFCTR_GLC_3_HI 0x529
+#define A6XX_RBBM_PERFCTR_GLC_4_LO 0x52A
+#define A6XX_RBBM_PERFCTR_GLC_4_HI 0x52B
+#define A6XX_RBBM_PERFCTR_GLC_5_LO 0x52C
+#define A6XX_RBBM_PERFCTR_GLC_5_HI 0x52D
+#define A6XX_RBBM_PERFCTR_GLC_6_LO 0x52E
+#define A6XX_RBBM_PERFCTR_GLC_6_HI 0x52F
+#define A6XX_RBBM_PERFCTR_GLC_7_LO 0x530
+#define A6XX_RBBM_PERFCTR_GLC_7_HI 0x531
#define A6XX_RBBM_ISDB_CNT 0x533
#define A6XX_RBBM_NC_MODE_CNTL 0X534
@@ -655,6 +687,22 @@
#define A6XX_RB_RB_SUB_BLOCK_SEL_CNTL_HOST 0x8E3B
#define A6XX_RB_RB_SUB_BLOCK_SEL_CNTL_CD 0x8E3D
#define A6XX_RB_CONTEXT_SWITCH_GMEM_SAVE_RESTORE 0x8E50
+#define A6XX_RB_PERFCTR_GLC_SEL_0 0x8E90
+#define A6XX_RB_PERFCTR_GLC_SEL_1 0x8E91
+#define A6XX_RB_PERFCTR_GLC_SEL_2 0x8E92
+#define A6XX_RB_PERFCTR_GLC_SEL_3 0x8E93
+#define A6XX_RB_PERFCTR_GLC_SEL_4 0x8E94
+#define A6XX_RB_PERFCTR_GLC_SEL_5 0x8E95
+#define A6XX_RB_PERFCTR_GLC_SEL_6 0x8E96
+#define A6XX_RB_PERFCTR_GLC_SEL_7 0x8E97
+#define A6XX_RB_PERFCTR_MHUB_SEL_0 0x8EA0
+#define A6XX_RB_PERFCTR_MHUB_SEL_1 0x8EA1
+#define A6XX_RB_PERFCTR_MHUB_SEL_2 0x8EA2
+#define A6XX_RB_PERFCTR_MHUB_SEL_3 0x8EA3
+#define A6XX_RB_PERFCTR_FCHE_SEL_0 0x8EB0
+#define A6XX_RB_PERFCTR_FCHE_SEL_1 0x8EB1
+#define A6XX_RB_PERFCTR_FCHE_SEL_2 0x8EB2
+#define A6XX_RB_PERFCTR_FCHE_SEL_3 0x8EB3
/* PC registers */
#define A6XX_PC_DBG_ECO_CNTL 0x9E00
@@ -1064,6 +1112,7 @@
/* GPUCC registers */
#define A6XX_GPU_CC_GX_GDSCR 0x24403
#define A6XX_GPU_CC_GX_DOMAIN_MISC 0x24542
+#define A6XX_GPU_CC_CX_GDSCR 0x2441B
/* GPU RSC sequencer registers */
#define A6XX_GPU_RSCC_RSC_STATUS0_DRV0 0x00004
diff --git a/drivers/gpu/msm/adreno-gpulist.h b/drivers/gpu/msm/adreno-gpulist.h
index 525b4aa..e0b945e 100644
--- a/drivers/gpu/msm/adreno-gpulist.h
+++ b/drivers/gpu/msm/adreno-gpulist.h
@@ -919,7 +919,7 @@ static const struct adreno_a6xx_core adreno_gpu_core_a619 = {
},
.prim_fifo_threshold = 0x0018000,
.gmu_major = 1,
- .gmu_minor = 9,
+ .gmu_minor = 10,
.sqefw_name = "a630_sqe.fw",
.gmufw_name = "a619_gmu.bin",
.zap_name = "a615_zap",
@@ -1460,13 +1460,13 @@ static const struct adreno_a6xx_core adreno_gpu_core_a702 = {
.base = {
DEFINE_ADRENO_REV(ADRENO_REV_A702, 7, 0, 2, ANY_ID),
.features = ADRENO_64BIT | ADRENO_CONTENT_PROTECTION |
- ADRENO_APRIV,
+ ADRENO_APRIV | ADRENO_PREEMPTION,
.gpudev = &adreno_a6xx_gpudev,
.gmem_size = SZ_128K,
.busy_mask = 0xfffffffe,
.bus_width = 32,
},
- .prim_fifo_threshold = 0x00080000,
+ .prim_fifo_threshold = 0x0000c000,
.sqefw_name = "a702_sqe.fw",
.zap_name = "a702_zap",
.hwcg = a702_hwcg_regs,
diff --git a/drivers/gpu/msm/adreno.c b/drivers/gpu/msm/adreno.c
index e849e2f..7720bd1 100644
--- a/drivers/gpu/msm/adreno.c
+++ b/drivers/gpu/msm/adreno.c
@@ -2055,12 +2055,16 @@ static int _adreno_start(struct adreno_device *adreno_dev)
/* Send OOB request to turn on the GX */
status = gmu_core_dev_oob_set(device, oob_gpu);
- if (status)
+ if (status) {
+ gmu_core_snapshot(device);
goto error_mmu_off;
+ }
status = gmu_core_dev_hfi_start_msg(device);
- if (status)
+ if (status) {
+ gmu_core_snapshot(device);
goto error_oob_clear;
+ }
_set_secvid(device);
@@ -2310,26 +2314,15 @@ static int adreno_stop(struct kgsl_device *device)
error = gmu_core_dev_oob_set(device, oob_gpu);
if (error) {
gmu_core_dev_oob_clear(device, oob_gpu);
-
- if (gmu_core_regulator_isenabled(device)) {
- /* GPU is on. Try recovery */
- set_bit(GMU_FAULT, &device->gmu_core.flags);
gmu_core_snapshot(device);
error = -EINVAL;
- }
+ goto no_gx_power;
}
- adreno_dispatcher_stop(adreno_dev);
-
- adreno_ringbuffer_stop(adreno_dev);
-
kgsl_pwrscale_update_stats(device);
adreno_irqctrl(adreno_dev, 0);
- adreno_llc_deactivate_slice(adreno_dev->gpu_llc_slice);
- adreno_llc_deactivate_slice(adreno_dev->gpuhtw_llc_slice);
-
/* Save active coresight registers if applicable */
adreno_coresight_stop(adreno_dev);
@@ -2347,7 +2340,6 @@ static int adreno_stop(struct kgsl_device *device)
*/
if (!error && gmu_core_dev_wait_for_lowest_idle(device)) {
- set_bit(GMU_FAULT, &device->gmu_core.flags);
gmu_core_snapshot(device);
/*
* Assume GMU hang after 10ms without responding.
@@ -2360,6 +2352,17 @@ static int adreno_stop(struct kgsl_device *device)
adreno_clear_pending_transactions(device);
+no_gx_power:
+ adreno_dispatcher_stop(adreno_dev);
+
+ adreno_ringbuffer_stop(adreno_dev);
+
+ if (!IS_ERR_OR_NULL(adreno_dev->gpu_llc_slice))
+ llcc_slice_deactivate(adreno_dev->gpu_llc_slice);
+
+ if (!IS_ERR_OR_NULL(adreno_dev->gpuhtw_llc_slice))
+ llcc_slice_deactivate(adreno_dev->gpuhtw_llc_slice);
+
/*
* The halt is not cleared in the above function if we have GBIF.
* Clear it here if GMU is enabled as GMU stop needs access to
@@ -3065,7 +3068,15 @@ void adreno_spin_idle_debug(struct adreno_device *adreno_dev,
dev_err(device->dev, " hwfault=%8.8X\n", hwfault);
- kgsl_device_snapshot(device, NULL, adreno_gmu_gpu_fault(adreno_dev));
+	/*
+	 * If the CP is stuck, the GMU may not perform as expected. So force a
+	 * GMU snapshot, which captures the entire state and also sets the GMU
+	 * fault, because things need to be reset anyway.
+	 */
+ if (gmu_core_isenabled(device))
+ gmu_core_snapshot(device);
+ else
+ kgsl_device_snapshot(device, NULL, false);
}
/**
diff --git a/drivers/gpu/msm/adreno.h b/drivers/gpu/msm/adreno.h
index ab2f94b..982f8ba 100644
--- a/drivers/gpu/msm/adreno.h
+++ b/drivers/gpu/msm/adreno.h
@@ -1739,8 +1739,9 @@ static inline int adreno_perfcntr_active_oob_get(struct kgsl_device *device)
if (!ret) {
ret = gmu_core_dev_oob_set(device, oob_perfcntr);
if (ret) {
+ gmu_core_snapshot(device);
adreno_set_gpu_fault(ADRENO_DEVICE(device),
- ADRENO_GMU_FAULT);
+ ADRENO_GMU_FAULT_SKIP_SNAPSHOT);
adreno_dispatcher_schedule(device);
kgsl_active_count_put(device);
}
diff --git a/drivers/gpu/msm/adreno_a6xx.c b/drivers/gpu/msm/adreno_a6xx.c
index 4696e4f..bbf1059 100644
--- a/drivers/gpu/msm/adreno_a6xx.c
+++ b/drivers/gpu/msm/adreno_a6xx.c
@@ -226,9 +226,10 @@ __get_rbbm_clock_cntl_on(struct adreno_device *adreno_dev)
{
if (adreno_is_a630(adreno_dev))
return 0x8AA8AA02;
- else if (adreno_is_a612(adreno_dev) || adreno_is_a610(adreno_dev) ||
- adreno_is_a702(adreno_dev))
+ else if (adreno_is_a612(adreno_dev) || adreno_is_a610(adreno_dev))
return 0xAAA8AA82;
+ else if (adreno_is_a702(adreno_dev))
+ return 0xAAAAAA82;
else
return 0x8AA8AA82;
}
@@ -581,6 +582,13 @@ static void a6xx_start(struct adreno_device *adreno_dev)
if (a6xx_core->disable_tseskip)
kgsl_regrmw(device, A6XX_PC_DBG_ECO_CNTL, 0, (1 << 9));
+	/*
+	 * Set the bit HLSQCluster3ContextDis for A702 as HLSQ doesn't
+	 * have a context buffer for the third context
+	 */
+ if (adreno_is_a702(adreno_dev))
+ kgsl_regwrite(device, A6XX_CP_CHICKEN_DBG, (1 << 24));
+
/* Enable the GMEM save/restore feature for preemption */
if (adreno_is_preemption_enabled(adreno_dev))
kgsl_regwrite(device, A6XX_RB_CONTEXT_SWITCH_GMEM_SAVE_RESTORE,
@@ -1120,8 +1128,7 @@ static int64_t a6xx_read_throttling_counters(struct adreno_device *adreno_dev)
static int a6xx_reset(struct kgsl_device *device, int fault)
{
struct adreno_device *adreno_dev = ADRENO_DEVICE(device);
- int ret = -EINVAL;
- int i = 0;
+ int ret;
/* Use the regular reset sequence for No GMU */
if (!gmu_core_isenabled(device))
@@ -1133,33 +1140,20 @@ static int a6xx_reset(struct kgsl_device *device, int fault)
/* since device is officially off now clear start bit */
clear_bit(ADRENO_DEVICE_STARTED, &adreno_dev->priv);
- /* Keep trying to start the device until it works */
- for (i = 0; i < NUM_TIMES_RESET_RETRY; i++) {
- ret = adreno_start(device, 0);
- if (!ret)
- break;
-
- msleep(20);
- }
-
+ ret = adreno_start(device, 0);
if (ret)
return ret;
- if (i != 0)
- dev_warn(device->dev,
- "Device hard reset tried %d tries\n", i);
+ kgsl_pwrctrl_change_state(device, KGSL_STATE_ACTIVE);
/*
- * If active_cnt is non-zero then the system was active before
- * going into a reset - put it back in that state
+ * If active_cnt is zero, there is no need to keep the GPU active. So,
+ * we should transition to SLUMBER.
*/
+ if (!atomic_read(&device->active_cnt))
+ kgsl_pwrctrl_change_state(device, KGSL_STATE_SLUMBER);
- if (atomic_read(&device->active_cnt))
- kgsl_pwrctrl_change_state(device, KGSL_STATE_ACTIVE);
- else
- kgsl_pwrctrl_change_state(device, KGSL_STATE_NAP);
-
- return ret;
+ return 0;
}
static void a6xx_cp_hw_err_callback(struct adreno_device *adreno_dev, int bit)
@@ -2210,6 +2204,47 @@ static struct adreno_perfcount_register a6xx_perfcounters_alwayson[] = {
A6XX_CP_ALWAYS_ON_COUNTER_HI, -1 },
};
+static struct adreno_perfcount_register a6xx_perfcounters_glc[] = {
+ { KGSL_PERFCOUNTER_NOT_USED, 0, 0, A6XX_RBBM_PERFCTR_GLC_0_LO,
+ A6XX_RBBM_PERFCTR_GLC_0_HI, -1, A6XX_RB_PERFCTR_GLC_SEL_0 },
+ { KGSL_PERFCOUNTER_NOT_USED, 0, 0, A6XX_RBBM_PERFCTR_GLC_1_LO,
+ A6XX_RBBM_PERFCTR_GLC_1_HI, -1, A6XX_RB_PERFCTR_GLC_SEL_1 },
+ { KGSL_PERFCOUNTER_NOT_USED, 0, 0, A6XX_RBBM_PERFCTR_GLC_2_LO,
+ A6XX_RBBM_PERFCTR_GLC_2_HI, -1, A6XX_RB_PERFCTR_GLC_SEL_2 },
+ { KGSL_PERFCOUNTER_NOT_USED, 0, 0, A6XX_RBBM_PERFCTR_GLC_3_LO,
+ A6XX_RBBM_PERFCTR_GLC_3_HI, -1, A6XX_RB_PERFCTR_GLC_SEL_3 },
+ { KGSL_PERFCOUNTER_NOT_USED, 0, 0, A6XX_RBBM_PERFCTR_GLC_4_LO,
+ A6XX_RBBM_PERFCTR_GLC_4_HI, -1, A6XX_RB_PERFCTR_GLC_SEL_4 },
+ { KGSL_PERFCOUNTER_NOT_USED, 0, 0, A6XX_RBBM_PERFCTR_GLC_5_LO,
+ A6XX_RBBM_PERFCTR_GLC_5_HI, -1, A6XX_RB_PERFCTR_GLC_SEL_5 },
+ { KGSL_PERFCOUNTER_NOT_USED, 0, 0, A6XX_RBBM_PERFCTR_GLC_6_LO,
+ A6XX_RBBM_PERFCTR_GLC_6_HI, -1, A6XX_RB_PERFCTR_GLC_SEL_6 },
+ { KGSL_PERFCOUNTER_NOT_USED, 0, 0, A6XX_RBBM_PERFCTR_GLC_7_LO,
+ A6XX_RBBM_PERFCTR_GLC_7_HI, -1, A6XX_RB_PERFCTR_GLC_SEL_7 },
+};
+
+static struct adreno_perfcount_register a6xx_perfcounters_fche[] = {
+ { KGSL_PERFCOUNTER_NOT_USED, 0, 0, A6XX_RBBM_PERFCTR_FCHE_0_LO,
+ A6XX_RBBM_PERFCTR_FCHE_0_HI, -1, A6XX_RB_PERFCTR_FCHE_SEL_0 },
+ { KGSL_PERFCOUNTER_NOT_USED, 0, 0, A6XX_RBBM_PERFCTR_FCHE_1_LO,
+ A6XX_RBBM_PERFCTR_FCHE_1_HI, -1, A6XX_RB_PERFCTR_FCHE_SEL_1 },
+ { KGSL_PERFCOUNTER_NOT_USED, 0, 0, A6XX_RBBM_PERFCTR_FCHE_2_LO,
+ A6XX_RBBM_PERFCTR_FCHE_2_HI, -1, A6XX_RB_PERFCTR_FCHE_SEL_2 },
+ { KGSL_PERFCOUNTER_NOT_USED, 0, 0, A6XX_RBBM_PERFCTR_FCHE_3_LO,
+ A6XX_RBBM_PERFCTR_FCHE_3_HI, -1, A6XX_RB_PERFCTR_FCHE_SEL_3 },
+};
+
+static struct adreno_perfcount_register a6xx_perfcounters_mhub[] = {
+ { KGSL_PERFCOUNTER_NOT_USED, 0, 0, A6XX_RBBM_PERFCTR_MHUB_0_LO,
+ A6XX_RBBM_PERFCTR_MHUB_0_HI, -1, A6XX_RB_PERFCTR_MHUB_SEL_0 },
+ { KGSL_PERFCOUNTER_NOT_USED, 0, 0, A6XX_RBBM_PERFCTR_MHUB_1_LO,
+ A6XX_RBBM_PERFCTR_MHUB_1_HI, -1, A6XX_RB_PERFCTR_MHUB_SEL_1 },
+ { KGSL_PERFCOUNTER_NOT_USED, 0, 0, A6XX_RBBM_PERFCTR_MHUB_2_LO,
+ A6XX_RBBM_PERFCTR_MHUB_2_HI, -1, A6XX_RB_PERFCTR_MHUB_SEL_2 },
+ { KGSL_PERFCOUNTER_NOT_USED, 0, 0, A6XX_RBBM_PERFCTR_MHUB_3_LO,
+ A6XX_RBBM_PERFCTR_MHUB_3_HI, -1, A6XX_RB_PERFCTR_MHUB_SEL_3 },
+};
+
/*
* ADRENO_PERFCOUNTER_GROUP_RESTORE flag is enabled by default
* because most of the perfcounter groups need to be restored
@@ -2316,6 +2351,23 @@ static void a6xx_platform_setup(struct adreno_device *adreno_dev)
gpudev->vbif_xin_halt_ctrl0_mask =
A6XX_VBIF_XIN_HALT_CTRL0_MASK;
+ if (adreno_is_a702(adreno_dev)) {
+ a6xx_perfcounter_groups[KGSL_PERFCOUNTER_GROUP_GLC].regs =
+ a6xx_perfcounters_glc;
+ a6xx_perfcounter_groups[KGSL_PERFCOUNTER_GROUP_GLC].reg_count
+ = ARRAY_SIZE(a6xx_perfcounters_glc);
+
+ a6xx_perfcounter_groups[KGSL_PERFCOUNTER_GROUP_FCHE].regs =
+ a6xx_perfcounters_fche;
+ a6xx_perfcounter_groups[KGSL_PERFCOUNTER_GROUP_FCHE].reg_count
+ = ARRAY_SIZE(a6xx_perfcounters_fche);
+
+ a6xx_perfcounter_groups[KGSL_PERFCOUNTER_GROUP_MHUB].regs =
+ a6xx_perfcounters_mhub;
+ a6xx_perfcounter_groups[KGSL_PERFCOUNTER_GROUP_MHUB].reg_count
+ = ARRAY_SIZE(a6xx_perfcounters_mhub);
+ }
+
/* Set the GPU busy counter for frequency scaling */
adreno_dev->perfctr_pwr_lo = A6XX_GMU_CX_GMU_POWER_COUNTER_XOCLK_0_L;
diff --git a/drivers/gpu/msm/adreno_a6xx_gmu.c b/drivers/gpu/msm/adreno_a6xx_gmu.c
index a5f9628..cb688bc 100644
--- a/drivers/gpu/msm/adreno_a6xx_gmu.c
+++ b/drivers/gpu/msm/adreno_a6xx_gmu.c
@@ -835,6 +835,18 @@ static bool a6xx_gmu_gx_is_on(struct kgsl_device *device)
}
/*
+ * a6xx_gmu_cx_is_on() - Check if CX is on using GPUCC register
+ * @device - Pointer to KGSL device struct
+ */
+static bool a6xx_gmu_cx_is_on(struct kgsl_device *device)
+{
+ unsigned int val;
+
+ gmu_core_regread(device, A6XX_GPU_CC_CX_GDSCR, &val);
+ return (val & BIT(31));
+}
+
+/*
* a6xx_gmu_sptprac_is_on() - Check if SPTP is on using pwr status register
* @adreno_dev - Pointer to adreno_device
* This check should only be performed if the keepalive bit is set or it
@@ -1632,6 +1644,8 @@ static void a6xx_gmu_snapshot(struct kgsl_device *device,
{
unsigned int val;
+ dev_err(device->dev, "GMU snapshot started at 0x%llx ticks\n",
+ a6xx_gmu_read_ao_counter(device));
a6xx_gmu_snapshot_versions(device, snapshot);
a6xx_gmu_snapshot_memories(device, snapshot);
@@ -1754,6 +1768,7 @@ struct gmu_dev_ops adreno_a6xx_gmudev = {
.enable_lm = a6xx_gmu_enable_lm,
.rpmh_gpu_pwrctrl = a6xx_gmu_rpmh_gpu_pwrctrl,
.gx_is_on = a6xx_gmu_gx_is_on,
+ .cx_is_on = a6xx_gmu_cx_is_on,
.wait_for_lowest_idle = a6xx_gmu_wait_for_lowest_idle,
.wait_for_gmu_idle = a6xx_gmu_wait_for_idle,
.ifpc_store = a6xx_gmu_ifpc_store,
diff --git a/drivers/gpu/msm/adreno_a6xx_preempt.c b/drivers/gpu/msm/adreno_a6xx_preempt.c
index db850c2..07bc874 100644
--- a/drivers/gpu/msm/adreno_a6xx_preempt.c
+++ b/drivers/gpu/msm/adreno_a6xx_preempt.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
- * Copyright (c) 2017-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2017-2020, The Linux Foundation. All rights reserved.
*/
#include "adreno.h"
@@ -145,7 +145,7 @@ static void _a6xx_preemption_fault(struct adreno_device *adreno_dev)
if (kgsl_state_is_awake(device)) {
adreno_readreg(adreno_dev, ADRENO_REG_CP_PREEMPT, &status);
- if (status == 0) {
+ if (!(status & 0x1)) {
adreno_set_preempt_state(adreno_dev,
ADRENO_PREEMPT_COMPLETE);
@@ -155,7 +155,7 @@ static void _a6xx_preemption_fault(struct adreno_device *adreno_dev)
}
dev_err(device->dev,
- "Preemption timed out: cur=%d R/W=%X/%X, next=%d R/W=%X/%X\n",
+ "Preemption Fault: cur=%d R/W=0x%x/0x%x, next=%d R/W=0x%x/0x%x\n",
adreno_dev->cur_rb->id,
adreno_get_rptr(adreno_dev->cur_rb),
adreno_dev->cur_rb->wptr,
@@ -388,10 +388,15 @@ void a6xx_preemption_trigger(struct adreno_device *adreno_dev)
return;
err:
-
- /* If fenced write fails, set the fault and trigger recovery */
+ /* If fenced write fails, take inline snapshot and trigger recovery */
+ if (!in_interrupt()) {
+ gmu_core_snapshot(device);
+ adreno_set_gpu_fault(adreno_dev,
+ ADRENO_GMU_FAULT_SKIP_SNAPSHOT);
+ } else {
+ adreno_set_gpu_fault(adreno_dev, ADRENO_GMU_FAULT);
+ }
adreno_set_preempt_state(adreno_dev, ADRENO_PREEMPT_NONE);
- adreno_set_gpu_fault(adreno_dev, ADRENO_GMU_FAULT);
adreno_dispatcher_schedule(device);
/* Clear the keep alive */
if (gmu_core_isenabled(device))
diff --git a/drivers/gpu/msm/adreno_a6xx_snapshot.c b/drivers/gpu/msm/adreno_a6xx_snapshot.c
index 715750b..ef4c8f2 100644
--- a/drivers/gpu/msm/adreno_a6xx_snapshot.c
+++ b/drivers/gpu/msm/adreno_a6xx_snapshot.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
- * Copyright (c) 2017-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2017-2020, The Linux Foundation. All rights reserved.
*/
#include "adreno.h"
@@ -345,7 +345,7 @@ static const unsigned int a6xx_registers[] = {
0x0540, 0x0555,
/* CP */
0x0800, 0x0803, 0x0806, 0x0808, 0x0810, 0x0813, 0x0820, 0x0821,
- 0x0823, 0x0824, 0x0826, 0x0827, 0x0830, 0x0833, 0x0840, 0x0843,
+ 0x0823, 0x0824, 0x0826, 0x0827, 0x0830, 0x0833, 0x0840, 0x0845,
0x084F, 0x086F, 0x0880, 0x088A, 0x08A0, 0x08AB, 0x08C0, 0x08C4,
0x08D0, 0x08DD, 0x08F0, 0x08F3, 0x0900, 0x0903, 0x0908, 0x0911,
0x0928, 0x093E, 0x0942, 0x094D, 0x0980, 0x0984, 0x098D, 0x0996,
diff --git a/drivers/gpu/msm/adreno_perfcounter.c b/drivers/gpu/msm/adreno_perfcounter.c
index 5b4ae58..026ba548 100644
--- a/drivers/gpu/msm/adreno_perfcounter.c
+++ b/drivers/gpu/msm/adreno_perfcounter.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
- * Copyright (c) 2002,2007-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2002,2007-2020, The Linux Foundation. All rights reserved.
*/
#include <linux/slab.h>
@@ -126,6 +126,9 @@ void adreno_perfcounter_restore(struct adreno_device *adreno_dev)
struct adreno_perfcount_group *group;
unsigned int counter, groupid;
+ if (adreno_is_a702(adreno_dev))
+ return;
+
if (counters == NULL)
return;
@@ -159,6 +162,9 @@ inline void adreno_perfcounter_save(struct adreno_device *adreno_dev)
struct adreno_perfcount_group *group;
unsigned int counter, groupid;
+ if (adreno_is_a702(adreno_dev))
+ return;
+
if (counters == NULL)
return;
diff --git a/drivers/gpu/msm/adreno_ringbuffer.c b/drivers/gpu/msm/adreno_ringbuffer.c
index 9cb9ec4..1a65634 100644
--- a/drivers/gpu/msm/adreno_ringbuffer.c
+++ b/drivers/gpu/msm/adreno_ringbuffer.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
- * Copyright (c) 2002,2007-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2002,2007-2020, The Linux Foundation. All rights reserved.
*/
#include <linux/sched/clock.h>
@@ -74,6 +74,7 @@ static void adreno_get_submit_time(struct adreno_device *adreno_dev,
static void adreno_ringbuffer_wptr(struct adreno_device *adreno_dev,
struct adreno_ringbuffer *rb)
{
+ struct kgsl_device *device = KGSL_DEVICE(adreno_dev);
unsigned long flags;
int ret = 0;
@@ -85,7 +86,7 @@ static void adreno_ringbuffer_wptr(struct adreno_device *adreno_dev,
* Let the pwrscale policy know that new commands have
* been submitted.
*/
- kgsl_pwrscale_busy(KGSL_DEVICE(adreno_dev));
+ kgsl_pwrscale_busy(device);
/*
* Ensure the write posted after a possible
@@ -110,9 +111,14 @@ static void adreno_ringbuffer_wptr(struct adreno_device *adreno_dev,
spin_unlock_irqrestore(&rb->preempt_lock, flags);
if (ret) {
- /* If WPTR update fails, set the fault and trigger recovery */
- adreno_set_gpu_fault(adreno_dev, ADRENO_GMU_FAULT);
- adreno_dispatcher_schedule(KGSL_DEVICE(adreno_dev));
+ /*
+ * If WPTR update fails, take inline snapshot and trigger
+ * recovery.
+ */
+ gmu_core_snapshot(device);
+ adreno_set_gpu_fault(adreno_dev,
+ ADRENO_GMU_FAULT_SKIP_SNAPSHOT);
+ adreno_dispatcher_schedule(device);
}
}
diff --git a/drivers/gpu/msm/kgsl.c b/drivers/gpu/msm/kgsl.c
index 6d2c272..efdd2fb 100644
--- a/drivers/gpu/msm/kgsl.c
+++ b/drivers/gpu/msm/kgsl.c
@@ -18,6 +18,7 @@
#include <linux/pm_runtime.h>
#include <linux/security.h>
#include <linux/sort.h>
+#include <asm/cacheflush.h>
#include "kgsl_compat.h"
#include "kgsl_debugfs.h"
diff --git a/drivers/gpu/msm/kgsl_gmu.c b/drivers/gpu/msm/kgsl_gmu.c
index ea0cce6..a9fe849 100644
--- a/drivers/gpu/msm/kgsl_gmu.c
+++ b/drivers/gpu/msm/kgsl_gmu.c
@@ -282,6 +282,7 @@ static int gmu_iommu_cb_probe(struct gmu_device *gmu,
struct platform_device *pdev = of_find_device_by_node(node);
struct device *dev;
int ret;
+ int no_stall = 1;
dev = &pdev->dev;
of_dma_configure(dev, node, true);
@@ -294,6 +295,14 @@ static int gmu_iommu_cb_probe(struct gmu_device *gmu,
return -ENODEV;
}
+ /*
+ * Disable stall on fault for the GMU context bank.
+ * This sets SCTLR.CFCFG = 0.
+ * Also note that the SMMU driver sets SCTLR.HUPCF = 0 by default.
+ */
+ iommu_domain_set_attr(ctx->domain,
+ DOMAIN_ATTR_FAULT_MODEL_NO_STALL, &no_stall);
+
ret = iommu_attach_device(ctx->domain, dev);
if (ret) {
dev_err(&gmu->pdev->dev, "gmu iommu fail to attach %s device\n",
@@ -927,8 +936,6 @@ static irqreturn_t gmu_irq_handler(int irq, void *data)
dev_err_ratelimited(&gmu->pdev->dev,
"GMU watchdog expired interrupt received\n");
- adreno_set_gpu_fault(adreno_dev, ADRENO_GMU_FAULT);
- adreno_dispatcher_schedule(device);
}
if (status & GMU_INT_HOST_AHB_BUS_ERR)
dev_err_ratelimited(&gmu->pdev->dev,
@@ -1471,8 +1478,9 @@ static int gmu_enable_gdsc(struct gmu_device *gmu)
}
#define CX_GDSC_TIMEOUT 5000 /* ms */
-static int gmu_disable_gdsc(struct gmu_device *gmu)
+static int gmu_disable_gdsc(struct kgsl_device *device)
{
+ struct gmu_device *gmu = KGSL_GMU_DEVICE(device);
int ret;
unsigned long t;
@@ -1494,13 +1502,13 @@ static int gmu_disable_gdsc(struct gmu_device *gmu)
*/
t = jiffies + msecs_to_jiffies(CX_GDSC_TIMEOUT);
do {
- if (!regulator_is_enabled(gmu->cx_gdsc))
+ if (!gmu_core_dev_cx_is_on(device))
return 0;
usleep_range(10, 100);
} while (!(time_after(jiffies, t)));
- if (!regulator_is_enabled(gmu->cx_gdsc))
+ if (!gmu_core_dev_cx_is_on(device))
return 0;
dev_err(&gmu->pdev->dev, "GMU CX gdsc off timeout\n");
@@ -1528,12 +1536,15 @@ static int gmu_suspend(struct kgsl_device *device)
if (ADRENO_QUIRK(adreno_dev, ADRENO_QUIRK_CX_GDSC))
regulator_set_mode(gmu->cx_gdsc, REGULATOR_MODE_IDLE);
- gmu_disable_gdsc(gmu);
+ gmu_disable_gdsc(device);
if (ADRENO_QUIRK(adreno_dev, ADRENO_QUIRK_CX_GDSC))
regulator_set_mode(gmu->cx_gdsc, REGULATOR_MODE_NORMAL);
dev_err(&gmu->pdev->dev, "Suspended GMU\n");
+
+ clear_bit(GMU_FAULT, &device->gmu_core.flags);
+
return 0;
}
@@ -1543,6 +1554,10 @@ static void gmu_snapshot(struct kgsl_device *device)
struct gmu_dev_ops *gmu_dev_ops = GMU_DEVICE_OPS(device);
struct gmu_device *gmu = KGSL_GMU_DEVICE(device);
+ /* Avoid sending another NMI or overwriting the snapshot */
+ if (test_and_set_bit(GMU_FAULT, &device->gmu_core.flags))
+ return;
+
adreno_gmu_send_nmi(adreno_dev);
/* Wait for the NMI to be handled */
udelay(100);
@@ -1684,6 +1699,12 @@ static void gmu_stop(struct kgsl_device *device)
if (!test_bit(GMU_CLK_ON, &device->gmu_core.flags))
return;
+ /* Force suspend if gmu is already in fault */
+ if (test_bit(GMU_FAULT, &device->gmu_core.flags)) {
+ gmu_core_suspend(device);
+ return;
+ }
+
/* Wait for the lowest idle level we requested */
if (gmu_core_dev_wait_for_lowest_idle(device))
goto error;
@@ -1703,20 +1724,19 @@ static void gmu_stop(struct kgsl_device *device)
gmu_dev_ops->rpmh_gpu_pwrctrl(device, GMU_FW_STOP, 0, 0);
gmu_disable_clks(device);
- gmu_disable_gdsc(gmu);
+ gmu_disable_gdsc(device);
msm_bus_scale_client_update_request(gmu->pcl, 0);
return;
error:
- /*
- * The power controller will change state to SLUMBER anyway
- * Set GMU_FAULT flag to indicate to power contrller
- * that hang recovery is needed to power on GPU
- */
- set_bit(GMU_FAULT, &device->gmu_core.flags);
dev_err(&gmu->pdev->dev, "Failed to stop GMU\n");
gmu_core_snapshot(device);
+ /*
+ * We failed to stop the GMU. Force a suspend
+ * to set things up for a fresh start.
+ */
+ gmu_core_suspend(device);
}
static void gmu_remove(struct kgsl_device *device)
diff --git a/drivers/gpu/msm/kgsl_gmu_core.c b/drivers/gpu/msm/kgsl_gmu_core.c
index 26a283a..5bd48cc 100644
--- a/drivers/gpu/msm/kgsl_gmu_core.c
+++ b/drivers/gpu/msm/kgsl_gmu_core.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
- * Copyright (c) 2018-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
*/
#include <linux/of.h>
@@ -343,6 +343,16 @@ bool gmu_core_dev_gx_is_on(struct kgsl_device *device)
return true;
}
+bool gmu_core_dev_cx_is_on(struct kgsl_device *device)
+{
+ struct gmu_dev_ops *ops = GMU_DEVICE_OPS(device);
+
+ if (ops && ops->cx_is_on)
+ return ops->cx_is_on(device);
+
+ return true;
+}
+
int gmu_core_dev_ifpc_show(struct kgsl_device *device)
{
struct gmu_dev_ops *ops = GMU_DEVICE_OPS(device);
diff --git a/drivers/gpu/msm/kgsl_gmu_core.h b/drivers/gpu/msm/kgsl_gmu_core.h
index 24cb1d8..bab5907 100644
--- a/drivers/gpu/msm/kgsl_gmu_core.h
+++ b/drivers/gpu/msm/kgsl_gmu_core.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
- * Copyright (c) 2018-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
*/
#ifndef __KGSL_GMU_CORE_H
#define __KGSL_GMU_CORE_H
@@ -138,6 +138,7 @@ struct gmu_dev_ops {
int (*wait_for_lowest_idle)(struct kgsl_device *device);
int (*wait_for_gmu_idle)(struct kgsl_device *device);
bool (*gx_is_on)(struct kgsl_device *device);
+ bool (*cx_is_on)(struct kgsl_device *device);
void (*prepare_stop)(struct kgsl_device *device);
int (*ifpc_store)(struct kgsl_device *device, unsigned int val);
unsigned int (*ifpc_show)(struct kgsl_device *device);
@@ -224,6 +225,7 @@ void gmu_core_dev_enable_lm(struct kgsl_device *device);
void gmu_core_dev_snapshot(struct kgsl_device *device,
struct kgsl_snapshot *snapshot);
bool gmu_core_dev_gx_is_on(struct kgsl_device *device);
+bool gmu_core_dev_cx_is_on(struct kgsl_device *device);
int gmu_core_dev_ifpc_show(struct kgsl_device *device);
int gmu_core_dev_ifpc_store(struct kgsl_device *device, unsigned int val);
void gmu_core_dev_prepare_stop(struct kgsl_device *device);
diff --git a/drivers/gpu/msm/kgsl_hfi.c b/drivers/gpu/msm/kgsl_hfi.c
index 754662f..6ce5665 100644
--- a/drivers/gpu/msm/kgsl_hfi.c
+++ b/drivers/gpu/msm/kgsl_hfi.c
@@ -854,7 +854,6 @@ irqreturn_t hfi_irq_handler(int irq, void *data)
struct kgsl_device *device = data;
struct gmu_device *gmu = KGSL_GMU_DEVICE(device);
struct kgsl_hfi *hfi = &gmu->hfi;
- struct adreno_device *adreno_dev = ADRENO_DEVICE(device);
unsigned int status = 0;
adreno_read_gmureg(ADRENO_DEVICE(device),
@@ -864,12 +863,10 @@ irqreturn_t hfi_irq_handler(int irq, void *data)
if (status & HFI_IRQ_DBGQ_MASK)
tasklet_hi_schedule(&hfi->tasklet);
- if (status & HFI_IRQ_CM3_FAULT_MASK) {
+ if (status & HFI_IRQ_CM3_FAULT_MASK)
dev_err_ratelimited(&gmu->pdev->dev,
"GMU CM3 fault interrupt received\n");
- adreno_set_gpu_fault(adreno_dev, ADRENO_GMU_FAULT);
- adreno_dispatcher_schedule(device);
- }
+
if (status & ~HFI_IRQ_MASK)
dev_err_ratelimited(&gmu->pdev->dev,
"Unhandled HFI interrupts 0x%lx\n",
diff --git a/drivers/gpu/msm/kgsl_pwrctrl.c b/drivers/gpu/msm/kgsl_pwrctrl.c
index fda9c03..ca22326 100644
--- a/drivers/gpu/msm/kgsl_pwrctrl.c
+++ b/drivers/gpu/msm/kgsl_pwrctrl.c
@@ -2635,12 +2635,22 @@ static int _init(struct kgsl_device *device)
int status = 0;
switch (device->state) {
+ case KGSL_STATE_RESET:
+ if (gmu_core_isenabled(device)) {
+ /*
+ * If we fail an INIT -> AWARE transition, we will
+ * transition back to INIT. However, we must hard reset
+ * the GMU as we go back to INIT. This is done by
+ * forcing a RESET -> INIT transition.
+ */
+ gmu_core_suspend(device);
+ kgsl_pwrctrl_set_state(device, KGSL_STATE_INIT);
+ }
+ break;
case KGSL_STATE_NAP:
/* Force power on to do the stop */
status = kgsl_pwrctrl_enable(device);
case KGSL_STATE_ACTIVE:
- /* fall through */
- case KGSL_STATE_RESET:
kgsl_pwrctrl_irq(device, KGSL_PWRFLAGS_OFF);
del_timer_sync(&device->idle_timer);
kgsl_pwrscale_midframe_timer_cancel(device);
@@ -2747,7 +2757,6 @@ static int
_aware(struct kgsl_device *device)
{
int status = 0;
- unsigned int state = device->state;
switch (device->state) {
case KGSL_STATE_RESET:
@@ -2757,12 +2766,6 @@ _aware(struct kgsl_device *device)
status = gmu_core_start(device);
break;
case KGSL_STATE_INIT:
- /* if GMU already in FAULT */
- if (gmu_core_isenabled(device) &&
- test_bit(GMU_FAULT, &device->gmu_core.flags)) {
- status = -EINVAL;
- break;
- }
status = kgsl_pwrctrl_enable(device);
break;
/* The following 3 cases shouldn't occur, but don't panic. */
@@ -2774,65 +2777,26 @@ _aware(struct kgsl_device *device)
kgsl_pwrscale_midframe_timer_cancel(device);
break;
case KGSL_STATE_SLUMBER:
- /* if GMU already in FAULT */
- if (gmu_core_isenabled(device) &&
- test_bit(GMU_FAULT, &device->gmu_core.flags)) {
- status = -EINVAL;
- break;
- }
-
status = kgsl_pwrctrl_enable(device);
break;
default:
status = -EINVAL;
}
- if (status) {
- if (gmu_core_isenabled(device)) {
- /* GMU hang recovery */
- kgsl_pwrctrl_set_state(device, KGSL_STATE_RESET);
- set_bit(GMU_FAULT, &device->gmu_core.flags);
- status = kgsl_pwrctrl_enable(device);
- /* Cannot recover GMU failure GPU will not power on */
-
- if (WARN_ONCE(status, "Failed to recover GMU\n")) {
- if (device->snapshot)
- device->snapshot->recovered = false;
- /*
- * On recovery failure, we are clearing
- * GMU_FAULT bit and also not keeping
- * the state as RESET to make sure any
- * attempt to wake GMU/GPU after this
- * is treated as a fresh start. But on
- * recovery failure, GMU HS, clocks and
- * IRQs are still ON/enabled because of
- * which next GMU/GPU wakeup results in
- * multiple warnings from GMU start as HS,
- * clocks and IRQ were ON while doing a
- * fresh start i.e. wake from SLUMBER.
- *
- * Suspend the GMU on recovery failure
- * to make sure next attempt to wake up
- * GMU/GPU is indeed a fresh start.
- */
- kgsl_pwrctrl_irq(device, KGSL_PWRFLAGS_OFF);
- gmu_core_suspend(device);
- kgsl_pwrctrl_set_state(device, state);
- } else {
- if (device->snapshot)
- device->snapshot->recovered = true;
- kgsl_pwrctrl_set_state(device,
- KGSL_STATE_AWARE);
- }
-
- clear_bit(GMU_FAULT, &device->gmu_core.flags);
- return status;
- }
-
- kgsl_pwrctrl_request_state(device, KGSL_STATE_NONE);
- } else {
+ if (status && gmu_core_isenabled(device))
+ /*
+ * If a SLUMBER/INIT -> AWARE transition fails, we transition
+ * back to the SLUMBER/INIT state. We must hard reset the GMU
+ * while transitioning back to SLUMBER/INIT. A RESET -> AWARE
+ * transition is different: it happens when the dispatcher is
+ * attempting reset/recovery as part of fault handling. If it
+ * fails, we should still transition back to RESET in case
+ * we want to attempt another reset/recovery.
+ */
+ kgsl_pwrctrl_set_state(device, KGSL_STATE_RESET);
+ else
kgsl_pwrctrl_set_state(device, KGSL_STATE_AWARE);
- }
+
return status;
}
@@ -2921,6 +2885,13 @@ _slumber(struct kgsl_device *device)
trace_gpu_frequency(0, 0);
kgsl_pwrctrl_set_state(device, KGSL_STATE_SLUMBER);
break;
+ case KGSL_STATE_RESET:
+ if (gmu_core_isenabled(device)) {
+ /* Reset the GMU if we failed to boot the GMU */
+ gmu_core_suspend(device);
+ kgsl_pwrctrl_set_state(device, KGSL_STATE_SLUMBER);
+ }
+ break;
default:
kgsl_pwrctrl_request_state(device, KGSL_STATE_NONE);
break;
@@ -3313,6 +3284,41 @@ int kgsl_pwr_limits_set_freq(void *limit_ptr, unsigned int freq)
EXPORT_SYMBOL(kgsl_pwr_limits_set_freq);
/**
+ * kgsl_pwr_limits_set_gpu_fmax() - Set the requested limit for the
+ * client. If the requested frequency is larger than the supported
+ * fmax, the function returns success.
+ * @limit_ptr: Client handle
+ * @freq: Client requested frequency
+ *
+ * Set the new limit for the client and adjust the clocks
+ */
+int kgsl_pwr_limits_set_gpu_fmax(void *limit_ptr, unsigned int freq)
+{
+ struct kgsl_pwrctrl *pwr;
+ struct kgsl_pwr_limit *limit = limit_ptr;
+ int level;
+
+ if (IS_ERR_OR_NULL(limit))
+ return -EINVAL;
+
+ pwr = &limit->device->pwrctrl;
+
+ /*
+ * When the requested frequency is greater than fmax,
+ * the requested limit is implicit, so return success here.
+ */
+ if (freq >= pwr->pwrlevels[0].gpu_freq)
+ return 0;
+
+ level = _get_nearest_pwrlevel(pwr, freq);
+ if (level < 0)
+ return -EINVAL;
+ _update_limits(limit, KGSL_PWR_SET_LIMIT, level);
+ return 0;
+}
+EXPORT_SYMBOL(kgsl_pwr_limits_set_gpu_fmax);
+
+/**
* kgsl_pwr_limits_set_default() - Set the default thermal limit for the client
* @limit_ptr: Client handle
*
diff --git a/drivers/gpu/msm/kgsl_snapshot.c b/drivers/gpu/msm/kgsl_snapshot.c
index efc7879..59917d2 100644
--- a/drivers/gpu/msm/kgsl_snapshot.c
+++ b/drivers/gpu/msm/kgsl_snapshot.c
@@ -117,7 +117,8 @@ static size_t snapshot_os(struct kgsl_device *device,
/* Remember the power information */
header->power_flags = pwr->power_flags;
header->power_level = pwr->active_pwrlevel;
- header->power_interval_timeout = pwr->interval_timeout;
+ header->power_interval_timeout =
+ jiffies_to_msecs(pwr->interval_timeout);
header->grpclk = kgsl_get_clkrate(pwr->grp_clks[0]);
/*
@@ -204,7 +205,8 @@ static size_t snapshot_os_no_ctxt(struct kgsl_device *device,
/* Remember the power information */
header->power_flags = pwr->power_flags;
header->power_level = pwr->active_pwrlevel;
- header->power_interval_timeout = pwr->interval_timeout;
+ header->power_interval_timeout =
+ jiffies_to_msecs(pwr->interval_timeout);
header->grpclk = kgsl_get_clkrate(pwr->grp_clks[0]);
/* Return the size of the data segment */
diff --git a/drivers/hid/hid-qvr.c b/drivers/hid/hid-qvr.c
index b5e51b2..7453bd5 100644
--- a/drivers/hid/hid-qvr.c
+++ b/drivers/hid/hid-qvr.c
@@ -311,17 +311,8 @@ static int qvr_send_package_wrap(u8 *message, int msize, struct hid_device *hid)
data->gx = imuData.gx0;
data->gy = imuData.gy0;
data->gz = imuData.gz0;
- data->mx = imuData.my0;
- data->my = imuData.mx0;
- data->mz = imuData.mz0;
- data->ax = imuData.ax0;
- data->ay = imuData.ay0;
- data->az = imuData.az0;
- data->gx = imuData.gx0;
- data->gy = imuData.gy0;
- data->gz = imuData.gz0;
- data->mx = imuData.my0;
- data->my = imuData.mx0;
+ data->mx = imuData.mx0;
+ data->my = imuData.my0;
data->mz = imuData.mz0;
data->aNumerator = imuData.aNumerator;
data->aDenominator = imuData.aDenominator;
diff --git a/drivers/hwtracing/coresight/coresight-byte-cntr.c b/drivers/hwtracing/coresight/coresight-byte-cntr.c
index e238df7..6353106 100644
--- a/drivers/hwtracing/coresight/coresight-byte-cntr.c
+++ b/drivers/hwtracing/coresight/coresight-byte-cntr.c
@@ -423,7 +423,6 @@ static void usb_read_work_fn(struct work_struct *work)
sizeof(*usb_req), GFP_KERNEL);
if (!usb_req)
return;
- init_completion(&usb_req->write_done);
usb_req->sg = devm_kzalloc(tmcdrvdata->dev,
sizeof(*(usb_req->sg)) * req_sg_num,
GFP_KERNEL);
@@ -520,7 +519,7 @@ void usb_bypass_notifier(void *priv, unsigned int event,
switch (event) {
case USB_QDSS_CONNECT:
- usb_qdss_alloc_req(ch, USB_BUF_NUM, 0);
+ usb_qdss_alloc_req(ch, USB_BUF_NUM);
usb_bypass_start(drvdata);
queue_work(drvdata->usb_wq, &(drvdata->read_work));
break;
diff --git a/drivers/hwtracing/coresight/coresight-cti.c b/drivers/hwtracing/coresight/coresight-cti.c
index ac8cf05..afec7dd 100644
--- a/drivers/hwtracing/coresight/coresight-cti.c
+++ b/drivers/hwtracing/coresight/coresight-cti.c
@@ -56,6 +56,7 @@ do { \
#define ITTRIGOUTACK (0xEF0)
#define ITCHIN (0xEF4)
#define ITTRIGIN (0xEF8)
+#define DEVID (0xFC8)
#define CTI_MAX_TRIGGERS (32)
#define CTI_MAX_CHANNELS (4)
@@ -86,6 +87,8 @@ struct cti_drvdata {
struct coresight_cti cti;
int refcnt;
int cpu;
+ unsigned int trig_num_max;
+ unsigned int ch_num_max;
bool cti_save;
bool cti_hwclk;
bool l2_off;
@@ -1353,6 +1356,19 @@ static ssize_t disable_gate_store(struct device *dev,
}
static DEVICE_ATTR_WO(disable_gate);
+static ssize_t show_info_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct cti_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ ssize_t size = 0;
+
+ size = scnprintf(&buf[size], PAGE_SIZE, "%d %d\n",
+ drvdata->trig_num_max, drvdata->ch_num_max);
+
+ return size;
+}
+static DEVICE_ATTR_RO(show_info);
+
static struct attribute *cti_attrs[] = {
&dev_attr_show_trigin.attr,
&dev_attr_show_trigout.attr,
@@ -1369,6 +1385,7 @@ static struct attribute *cti_attrs[] = {
&dev_attr_show_gate.attr,
&dev_attr_enable_gate.attr,
&dev_attr_disable_gate.attr,
+ &dev_attr_show_info.attr,
NULL,
};
@@ -1468,6 +1485,7 @@ static int cti_init_save(struct cti_drvdata *drvdata,
static int cti_probe(struct amba_device *adev, const struct amba_id *id)
{
int ret;
+ unsigned int ctidevid;
struct device *dev = &adev->dev;
struct coresight_platform_data *pdata;
struct cti_drvdata *drvdata;
@@ -1539,6 +1557,9 @@ static int cti_probe(struct amba_device *adev, const struct amba_id *id)
registered++;
}
pm_runtime_put(&adev->dev);
+ ctidevid = cti_readl(drvdata, DEVID);
+ drvdata->trig_num_max = (ctidevid & GENMASK(15, 8)) >> 8;
+ drvdata->ch_num_max = (ctidevid & GENMASK(21, 16)) >> 16;
dev_dbg(dev, "CTI initialized\n");
return 0;
err:
diff --git a/drivers/hwtracing/coresight/coresight-tmc-etf.c b/drivers/hwtracing/coresight/coresight-tmc-etf.c
index 5ae8c65..1ac9bfd 100644
--- a/drivers/hwtracing/coresight/coresight-tmc-etf.c
+++ b/drivers/hwtracing/coresight/coresight-tmc-etf.c
@@ -26,7 +26,7 @@ static void __tmc_etb_enable_hw(struct tmc_drvdata *drvdata)
writel_relaxed(TMC_MODE_CIRCULAR_BUFFER, drvdata->base + TMC_MODE);
writel_relaxed(TMC_FFCR_EN_FMT | TMC_FFCR_EN_TI |
TMC_FFCR_FON_FLIN | TMC_FFCR_FON_TRIG_EVT |
- TMC_FFCR_TRIGON_TRIGIN,
+ TMC_FFCR_TRIGON_TRIGIN | TMC_FFCR_STOP_ON_FLUSH,
drvdata->base + TMC_FFCR);
writel_relaxed(drvdata->trigger_cntr, drvdata->base + TMC_TRG);
@@ -90,7 +90,7 @@ static void __tmc_etb_disable_hw(struct tmc_drvdata *drvdata)
static void tmc_etb_disable_hw(struct tmc_drvdata *drvdata)
{
- coresight_disclaim_device(drvdata);
+ coresight_disclaim_device(drvdata->base);
__tmc_etb_disable_hw(drvdata);
}
diff --git a/drivers/hwtracing/coresight/coresight-tmc-etr.c b/drivers/hwtracing/coresight/coresight-tmc-etr.c
index 3d41b3a..7284e28 100644
--- a/drivers/hwtracing/coresight/coresight-tmc-etr.c
+++ b/drivers/hwtracing/coresight/coresight-tmc-etr.c
@@ -1016,7 +1016,7 @@ static void __tmc_etr_enable_hw(struct tmc_drvdata *drvdata)
writel_relaxed(TMC_FFCR_EN_FMT | TMC_FFCR_EN_TI |
TMC_FFCR_FON_FLIN | TMC_FFCR_FON_TRIG_EVT |
- TMC_FFCR_TRIGON_TRIGIN,
+ TMC_FFCR_TRIGON_TRIGIN | TMC_FFCR_STOP_ON_FLUSH,
drvdata->base + TMC_FFCR);
writel_relaxed(drvdata->trigger_cntr, drvdata->base + TMC_TRG);
tmc_enable_hw(drvdata);
@@ -1162,7 +1162,7 @@ static int tmc_etr_fill_usb_bam_data(struct tmc_drvdata *drvdata)
data_fifo_iova = dma_map_resource(drvdata->dev,
bamdata->data_fifo.phys_base, bamdata->data_fifo.size,
DMA_BIDIRECTIONAL, 0);
- if (!data_fifo_iova)
+ if (dma_mapping_error(drvdata->dev, data_fifo_iova))
return -ENOMEM;
dev_dbg(drvdata->dev, "%s:data p_addr:%pa,iova:%pad,size:%x\n",
__func__, &(bamdata->data_fifo.phys_base),
@@ -1171,7 +1171,7 @@ static int tmc_etr_fill_usb_bam_data(struct tmc_drvdata *drvdata)
desc_fifo_iova = dma_map_resource(drvdata->dev,
bamdata->desc_fifo.phys_base, bamdata->desc_fifo.size,
DMA_BIDIRECTIONAL, 0);
- if (!desc_fifo_iova)
+ if (dma_mapping_error(drvdata->dev, desc_fifo_iova))
return -ENOMEM;
dev_dbg(drvdata->dev, "%s:desc p_addr:%pa,iova:%pad,size:%x\n",
__func__, &(bamdata->desc_fifo.phys_base),
@@ -1243,7 +1243,7 @@ static int get_usb_bam_iova(struct device *dev, unsigned long usb_bam_handle,
return ret;
}
*iova = dma_map_resource(dev, p_addr, bam_size, DMA_BIDIRECTIONAL, 0);
- if (!(*iova))
+ if (dma_mapping_error(dev, *iova))
return -ENOMEM;
return 0;
}
@@ -1354,6 +1354,14 @@ void usb_notifier(void *priv, unsigned int event, struct qdss_request *d_req,
int ret = 0;
mutex_lock(&drvdata->mem_lock);
+ if (drvdata->out_mode != TMC_ETR_OUT_MODE_USB
+ || drvdata->mode == CS_MODE_DISABLED) {
+ dev_err(&drvdata->csdev->dev,
+ "%s: ETR is not USB mode, or ETR is disabled.\n", __func__);
+ mutex_unlock(&drvdata->mem_lock);
+ return;
+ }
+
if (event == USB_QDSS_CONNECT) {
ret = tmc_etr_fill_usb_bam_data(drvdata);
if (ret)
@@ -1454,7 +1462,7 @@ static int tmc_enable_etr_sink_sysfs(struct coresight_device *csdev)
return -ENOMEM;
}
coresight_cti_map_trigout(drvdata->cti_flush, 3, 0);
- coresight_cti_map_trigin(drvdata->cti_reset, 2, 0);
+ coresight_cti_map_trigin(drvdata->cti_reset, 0, 0);
} else if (drvdata->byte_cntr->sw_usb) {
if (!drvdata->etr_buf) {
free_buf = new_buf =
@@ -1464,7 +1472,7 @@ static int tmc_enable_etr_sink_sysfs(struct coresight_device *csdev)
}
}
coresight_cti_map_trigout(drvdata->cti_flush, 3, 0);
- coresight_cti_map_trigin(drvdata->cti_reset, 2, 0);
+ coresight_cti_map_trigin(drvdata->cti_reset, 0, 0);
drvdata->usbch = usb_qdss_open("qdss_mdm",
drvdata->byte_cntr,
@@ -1513,12 +1521,13 @@ static int tmc_enable_etr_sink_sysfs(struct coresight_device *csdev)
(drvdata->out_mode == TMC_ETR_OUT_MODE_USB
&& drvdata->byte_cntr->sw_usb)) {
ret = tmc_etr_enable_hw(drvdata, drvdata->sysfs_buf);
- if (!ret) {
- drvdata->mode = CS_MODE_SYSFS;
- atomic_inc(csdev->refcnt);
- }
+ if (ret)
+ goto out;
}
+ drvdata->mode = CS_MODE_SYSFS;
+ atomic_inc(csdev->refcnt);
+
drvdata->enable = true;
out:
spin_unlock_irqrestore(&drvdata->spinlock, flags);
@@ -1527,11 +1536,11 @@ static int tmc_enable_etr_sink_sysfs(struct coresight_device *csdev)
if (free_buf)
tmc_etr_free_sysfs_buf(free_buf);
- if (drvdata->out_mode == TMC_ETR_OUT_MODE_MEM)
- tmc_etr_byte_cntr_start(drvdata->byte_cntr);
-
- if (!ret)
+ if (!ret) {
+ if (drvdata->out_mode == TMC_ETR_OUT_MODE_MEM)
+ tmc_etr_byte_cntr_start(drvdata->byte_cntr);
dev_info(drvdata->dev, "TMC-ETR enabled\n");
+ }
return ret;
}
@@ -1975,7 +1984,8 @@ static int tmc_enable_etr_sink(struct coresight_device *csdev,
return -EINVAL;
}
-static int _tmc_disable_etr_sink(struct coresight_device *csdev)
+static int _tmc_disable_etr_sink(struct coresight_device *csdev,
+ bool mode_switch)
{
unsigned long flags;
struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
@@ -1987,7 +1997,7 @@ static int _tmc_disable_etr_sink(struct coresight_device *csdev)
return -EBUSY;
}
- if (atomic_dec_return(csdev->refcnt)) {
+ if (atomic_dec_return(csdev->refcnt) && !mode_switch) {
spin_unlock_irqrestore(&drvdata->spinlock, flags);
return -EBUSY;
}
@@ -1996,12 +2006,22 @@ static int _tmc_disable_etr_sink(struct coresight_device *csdev)
WARN_ON_ONCE(drvdata->mode == CS_MODE_DISABLED);
if (drvdata->mode != CS_MODE_DISABLED) {
if (drvdata->out_mode == TMC_ETR_OUT_MODE_USB) {
- __tmc_etr_disable_to_bam(drvdata);
- spin_unlock_irqrestore(&drvdata->spinlock, flags);
- tmc_etr_bam_disable(drvdata);
- usb_qdss_close(drvdata->usbch);
- drvdata->mode = CS_MODE_DISABLED;
- goto out;
+ if (!drvdata->byte_cntr->sw_usb) {
+ __tmc_etr_disable_to_bam(drvdata);
+ spin_unlock_irqrestore(&drvdata->spinlock,
+ flags);
+ tmc_etr_bam_disable(drvdata);
+ usb_qdss_close(drvdata->usbch);
+ drvdata->usbch = NULL;
+ drvdata->mode = CS_MODE_DISABLED;
+ goto out;
+ } else {
+ spin_unlock_irqrestore(&drvdata->spinlock,
+ flags);
+ usb_qdss_close(drvdata->usbch);
+ spin_lock_irqsave(&drvdata->spinlock, flags);
+ tmc_etr_disable_hw(drvdata);
+ }
} else {
tmc_etr_disable_hw(drvdata);
}
@@ -2020,6 +2040,7 @@ static int _tmc_disable_etr_sink(struct coresight_device *csdev)
&& drvdata->byte_cntr->sw_usb) {
usb_bypass_stop(drvdata->byte_cntr);
flush_workqueue(drvdata->byte_cntr->usb_wq);
+ drvdata->usbch = NULL;
coresight_cti_unmap_trigin(drvdata->cti_reset, 2, 0);
coresight_cti_unmap_trigout(drvdata->cti_flush, 3, 0);
/* Free memory outside the spinlock if need be */
@@ -2050,7 +2071,7 @@ static int tmc_disable_etr_sink(struct coresight_device *csdev)
int ret;
mutex_lock(&drvdata->mem_lock);
- ret = _tmc_disable_etr_sink(csdev);
+ ret = _tmc_disable_etr_sink(csdev, false);
mutex_unlock(&drvdata->mem_lock);
return ret;
}
@@ -2081,7 +2102,7 @@ int tmc_etr_switch_mode(struct tmc_drvdata *drvdata, const char *out_mode)
}
coresight_disable_all_source_link();
- _tmc_disable_etr_sink(drvdata->csdev);
+ _tmc_disable_etr_sink(drvdata->csdev, true);
old_mode = drvdata->out_mode;
drvdata->out_mode = new_mode;
if (tmc_enable_etr_sink_sysfs(drvdata->csdev)) {
diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c
index 862128d..f90553b 100644
--- a/drivers/i2c/busses/i2c-qcom-geni.c
+++ b/drivers/i2c/busses/i2c-qcom-geni.c
@@ -405,12 +405,8 @@ static void gi2c_gsi_tx_cb(void *ptr)
struct msm_gpi_dma_async_tx_cb_param *tx_cb = ptr;
struct geni_i2c_dev *gi2c = tx_cb->userdata;
- if (tx_cb->completion_code == MSM_GPI_TCE_EOB) {
- complete(&gi2c->xfer);
- } else if (!(gi2c->cur->flags & I2C_M_RD)) {
- gi2c_gsi_cb_err(tx_cb, "TX");
- complete(&gi2c->xfer);
- }
+ gi2c_gsi_cb_err(tx_cb, "TX");
+ complete(&gi2c->xfer);
}
static void gi2c_gsi_rx_cb(void *ptr)
@@ -480,7 +476,7 @@ static int geni_i2c_gsi_xfer(struct i2c_adapter *adap, struct i2c_msg msgs[],
lock_t->dword[2] = MSM_GPI_LOCK_TRE_DWORD2;
lock_t->dword[3] = MSM_GPI_LOCK_TRE_DWORD3(0, 0, 0, 0, 1);
- /* unlock */
+ /* unlock tre: ieob set */
unlock_t->dword[0] = MSM_GPI_UNLOCK_TRE_DWORD0;
unlock_t->dword[1] = MSM_GPI_UNLOCK_TRE_DWORD1;
unlock_t->dword[2] = MSM_GPI_UNLOCK_TRE_DWORD2;
@@ -535,12 +531,14 @@ static int geni_i2c_gsi_xfer(struct i2c_adapter *adap, struct i2c_msg msgs[],
segs++;
sg_init_table(gi2c->tx_sg, segs);
if (i == 0)
+ /* Send lock tre for first transfer in a msg */
sg_set_buf(&gi2c->tx_sg[index++], &gi2c->lock_t,
sizeof(gi2c->lock_t));
} else {
sg_init_table(gi2c->tx_sg, segs);
}
+ /* Send cfg tre when cfg not sent already */
if (!gi2c->cfg_sent) {
sg_set_buf(&gi2c->tx_sg[index++], &gi2c->cfg0_t,
sizeof(gi2c->cfg0_t));
@@ -553,12 +551,21 @@ static int geni_i2c_gsi_xfer(struct i2c_adapter *adap, struct i2c_msg msgs[],
if (msgs[i].flags & I2C_M_RD) {
go_t->dword[2] = MSM_GPI_I2C_GO_TRE_DWORD2(msgs[i].len);
- go_t->dword[3] = MSM_GPI_I2C_GO_TRE_DWORD3(1, 0, 0, 0,
- 0);
+ /*
+ * For Rx Go tre: set ieob for a non-shared se and for all
+ * but the last transfer on a shared se.
+ */
+ if (!gi2c->is_shared || (gi2c->is_shared && i != num-1))
+ go_t->dword[3] = MSM_GPI_I2C_GO_TRE_DWORD3(1, 0,
+ 0, 1, 0);
+ else
+ go_t->dword[3] = MSM_GPI_I2C_GO_TRE_DWORD3(1, 0,
+ 0, 0, 0);
} else {
+ /* For Tx Go tre: ieob is not set, chain bit is set */
go_t->dword[2] = MSM_GPI_I2C_GO_TRE_DWORD2(0);
go_t->dword[3] = MSM_GPI_I2C_GO_TRE_DWORD3(0, 0, 0, 0,
- 1);
+ 1);
}
sg_set_buf(&gi2c->tx_sg[index++], &gi2c->go_t,
@@ -591,6 +598,7 @@ static int geni_i2c_gsi_xfer(struct i2c_adapter *adap, struct i2c_msg msgs[],
MSM_GPI_DMA_W_BUFFER_TRE_DWORD1(gi2c->rx_ph);
gi2c->rx_t.dword[2] =
MSM_GPI_DMA_W_BUFFER_TRE_DWORD2(msgs[i].len);
+ /* Set ieot for all Rx/Tx DMA tres */
gi2c->rx_t.dword[3] =
MSM_GPI_DMA_W_BUFFER_TRE_DWORD3(0, 0, 1, 0, 0);
@@ -641,6 +649,10 @@ static int geni_i2c_gsi_xfer(struct i2c_adapter *adap, struct i2c_msg msgs[],
gi2c->tx_t.dword[2] =
MSM_GPI_DMA_W_BUFFER_TRE_DWORD2(msgs[i].len);
if (gi2c->is_shared && i == num-1)
+ /*
+ * For Tx: the unlock tre is sent for the last transfer,
+ * so set the chain bit for the last transfer's DMA tre.
+ */
gi2c->tx_t.dword[3] =
MSM_GPI_DMA_W_BUFFER_TRE_DWORD3(0, 0, 1, 0, 1);
else
@@ -652,6 +664,7 @@ static int geni_i2c_gsi_xfer(struct i2c_adapter *adap, struct i2c_msg msgs[],
}
if (gi2c->is_shared && i == num-1) {
+ /* Send unlock tre at the end of last transfer */
sg_set_buf(&gi2c->tx_sg[index++],
&gi2c->unlock_t, sizeof(gi2c->unlock_t));
}
@@ -689,6 +702,10 @@ static int geni_i2c_gsi_xfer(struct i2c_adapter *adap, struct i2c_msg msgs[],
dmaengine_terminate_all(gi2c->tx_c);
gi2c->cfg_sent = 0;
}
+ if (gi2c->is_shared)
+ /* Resend cfg tre for every new message on shared se */
+ gi2c->cfg_sent = 0;
+
if (msgs[i].flags & I2C_M_RD)
geni_se_iommu_unmap_buf(rx_dev, &gi2c->rx_ph,
msgs[i].len, DMA_FROM_DEVICE);
diff --git a/drivers/i3c/master/i3c-master-qcom-geni.c b/drivers/i3c/master/i3c-master-qcom-geni.c
index d36280e..d9f111e 100644
--- a/drivers/i3c/master/i3c-master-qcom-geni.c
+++ b/drivers/i3c/master/i3c-master-qcom-geni.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
- * Copyright (c) 2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2019-2020, The Linux Foundation. All rights reserved.
*/
#include <linux/clk.h>
@@ -20,6 +20,8 @@
#include <linux/ipc_logging.h>
#include <linux/pinctrl/qcom-pinctrl.h>
#include <linux/delay.h>
+#include <linux/pm_wakeup.h>
+#include <linux/workqueue.h>
#define SE_I3C_SCL_HIGH 0x268
#define SE_I3C_TX_TRANS_LEN 0x26C
@@ -199,7 +201,6 @@ enum geni_i3c_err_code {
#define TLMM_I3C_MODE 0x24
#define IBI_SW_RESET_MIN_SLEEP 1000
#define IBI_SW_RESET_MAX_SLEEP 2000
-#define I3C_OD_CLK_RATE 370000
enum i3c_trans_dir {
WRITE_TRANSACTION = 0,
@@ -270,6 +271,9 @@ struct geni_i3c_dev {
const struct geni_i3c_clk_fld *clk_fld;
const struct geni_i3c_clk_fld *clk_od_fld;
struct geni_ibi ibi;
+ struct workqueue_struct *hj_wq;
+ struct work_struct hj_wd;
+ struct wakeup_source hj_wl;
};
struct geni_i3c_i2c_dev_data {
@@ -340,9 +344,10 @@ to_geni_i3c_master(struct i3c_master_controller *master)
*/
static const struct geni_i3c_clk_fld geni_i3c_clk_map[] = {
{ KHZ(100), 19200, 7, 10, 11, 0, 0, 26},
- { KHZ(400), 19200, 2, 5, 12, 0, 0, 24},
+ { KHZ(400), 19200, 1, 72, 168, 6, 7, 300},
{ KHZ(1000), 19200, 1, 3, 9, 7, 0, 18},
{ KHZ(1920), 19200, 1, 4, 9, 7, 8, 19},
+ { KHZ(3500), 19200, 1, 72, 168, 3, 4, 300},
{ KHZ(370), 100000, 20, 4, 7, 8, 14, 14},
{ KHZ(12500), 100000, 1, 72, 168, 6, 7, 300},
};
@@ -361,7 +366,7 @@ static int geni_i3c_clk_map_idx(struct geni_i3c_dev *gi3c)
gi3c->clk_fld = itr;
}
- if (itr->clk_freq_out == I3C_OD_CLK_RATE)
+ if (itr->clk_freq_out == bus->scl_rate.i2c)
gi3c->clk_od_fld = itr;
}
@@ -418,10 +423,9 @@ static void qcom_geni_i3c_conf(struct geni_i3c_dev *gi3c,
if (bus_phase == OPEN_DRAIN_MODE)
itr = gi3c->clk_od_fld;
- if (gi3c->dfs_idx > DFS_INDEX_MAX)
- ret = geni_se_clk_freq_match(&gi3c->se.i3c_rsc,
- KHZ(itr->clk_src_freq),
- &gi3c->dfs_idx, &freq, false);
+ ret = geni_se_clk_freq_match(&gi3c->se.i3c_rsc,
+ KHZ(itr->clk_src_freq),
+ &gi3c->dfs_idx, &freq, false);
if (ret)
gi3c->dfs_idx = 0;
@@ -456,6 +460,22 @@ static void geni_i3c_err(struct geni_i3c_dev *gi3c, int err)
geni_se_dump_dbg_regs(&gi3c->se.i3c_rsc, gi3c->se.base, gi3c->ipcl);
}
+static void geni_i3c_hotjoin(struct work_struct *work)
+{
+ int ret;
+ struct geni_i3c_dev *gi3c =
+ container_of(work, struct geni_i3c_dev, hj_wd);
+
+ pm_stay_awake(gi3c->se.dev);
+
+ ret = i3c_master_do_daa(&gi3c->ctrlr);
+ if (ret)
+ GENI_SE_ERR(gi3c->ipcl, true, gi3c->se.dev,
+ "hotjoin:daa failed %d\n", ret);
+
+ pm_relax(gi3c->se.dev);
+}
+
static void geni_i3c_handle_received_ibi(struct geni_i3c_dev *gi3c)
{
struct geni_i3c_i2c_dev_data *data;
@@ -527,6 +547,10 @@ static irqreturn_t geni_i3c_ibi_irq(int irq, void *dev)
(m_stat & SW_RESET_DONE_EN))
cmd_done = true;
+ if (m_stat & HOT_JOIN_IRQ_EN) {
+ /* Queue worker to service hot-join request */
+ queue_work(gi3c->hj_wq, &gi3c->hj_wd);
+ }
/* clear interrupts */
if (m_stat)
writel_relaxed(m_stat, gi3c->se.ibi_base
@@ -842,8 +866,10 @@ static void geni_i3c_perform_daa(struct geni_i3c_dev *gi3c)
u8 rx_buf[8], tx_buf[8];
struct i3c_xfer_params xfer = { FIFO_MODE };
struct i3c_dev_boardinfo *i3cboardinfo;
+ struct i3c_dev_desc *i3cdev;
u64 pid;
u8 bcr, dcr, init_dyn_addr = 0, addr = 0;
+ bool enum_slv = false;
GENI_SE_DBG(gi3c->ipcl, false, gi3c->se.dev,
"i3c entdaa read\n");
@@ -885,18 +911,36 @@ static void geni_i3c_perform_daa(struct geni_i3c_dev *gi3c)
goto daa_err;
} else if (ret == init_dyn_addr) {
GENI_SE_DBG(gi3c->ipcl, false, gi3c->se.dev,
- "assigning requested addr:0x%x for pid:0x:%x\n"
+ "assigning requested addr:0x%x for pid:0x:%x\n"
, ret, pid);
} else if (init_dyn_addr) {
- GENI_SE_DBG(gi3c->ipcl, false, gi3c->se.dev,
- "Can't assign req addr:0x%x for pid:0x%x assigning avl addr:0x%x\n"
- , init_dyn_addr, pid, addr);
+ i3c_bus_for_each_i3cdev(&m->bus, i3cdev) {
+ if (i3cdev->info.pid == pid) {
+ enum_slv = true;
+ break;
+ }
+ }
+ if (enum_slv) {
+ addr = i3cdev->info.dyn_addr;
+ GENI_SE_DBG(gi3c->ipcl, false, gi3c->se.dev,
+ "assigning requested addr:0x%x for pid:0x:%x\n"
+ , addr, pid);
+ } else {
+ GENI_SE_DBG(gi3c->ipcl, false, gi3c->se.dev,
+ "new dev: assigning addr:0x%x for pid:x:%x\n"
+ , ret, pid);
+ }
} else {
GENI_SE_DBG(gi3c->ipcl, false, gi3c->se.dev,
- "assigning addr:0x%x for pid:x:%x\n", ret, pid);
+ "assigning addr:0x%x for pid:x:%x\n", ret, pid);
}
- set_new_addr_slot(gi3c->newaddrslots, addr);
+ if (!i3cboardinfo->init_dyn_addr)
+ i3cboardinfo->init_dyn_addr = addr;
+
+ if (!enum_slv)
+ set_new_addr_slot(gi3c->newaddrslots, addr);
+
tx_buf[0] = (addr & I3C_ADDR_MASK) << 1;
tx_buf[0] |= ~(hweight8(addr & I3C_ADDR_MASK) & 1);
@@ -1201,6 +1245,10 @@ static int geni_i3c_master_entdaa_locked(struct geni_i3c_dev *gi3c)
}
}
+ i3c_master_enec_locked(m, I3C_BROADCAST_ADDR,
+ I3C_CCC_EVENT_MR |
+ I3C_CCC_EVENT_HJ);
+
return 0;
}
@@ -1346,7 +1394,7 @@ static int geni_i3c_master_enable_ibi(struct i3c_dev_desc *dev)
return -EPERM;
ret = i3c_master_enec_locked(m, dev->info.dyn_addr,
- I3C_CCC_EVENT_SIR);
+ I3C_CCC_EVENT_SIR);
if (ret)
GENI_SE_ERR(gi3c->ipcl, true, gi3c->se.dev,
"%s: error while i3c_master_enec_locked\n", __func__);
@@ -1379,7 +1427,6 @@ static void qcom_geni_i3c_ibi_conf(struct geni_i3c_dev *gi3c)
reinit_completion(&gi3c->ibi.done);
/* set the configuration for 100Khz OD speed */
- geni_write_reg(0, gi3c->se.ibi_base, IBI_SCL_OD_TYPE);
geni_write_reg(0x5FD74322, gi3c->se.ibi_base, IBI_SCL_PP_TIMING_CONFIG);
/* Enable I3C IBI controller */
@@ -1401,7 +1448,7 @@ static void qcom_geni_i3c_ibi_conf(struct geni_i3c_dev *gi3c)
}
/* enable manager interrupts */
- geni_write_reg(~0u, gi3c->se.ibi_base, IBI_GEN_IRQ_EN);
+ geni_write_reg(0x1B, gi3c->se.ibi_base, IBI_GEN_IRQ_EN);
/* Enable GPII0 interrupts */
geni_write_reg(0x1, gi3c->se.ibi_base, IBI_GPII_IBI_EN);
@@ -1906,6 +1953,10 @@ static int geni_i3c_probe(struct platform_device *pdev)
geni_se_init(gi3c->se.base, gi3c->tx_wm, tx_depth);
se_config_packing(gi3c->se.base, BITS_PER_BYTE, PACKING_BYTES_PW, true);
+ wakeup_source_init(&gi3c->hj_wl, dev_name(gi3c->se.dev));
+ INIT_WORK(&gi3c->hj_wd, geni_i3c_hotjoin);
+ gi3c->hj_wq = alloc_workqueue("%s", 0, 0, dev_name(gi3c->se.dev));
+
ret = i3c_ibi_rsrcs_init(gi3c, pdev);
if (ret) {
se_geni_resources_off(&gi3c->se.i3c_rsc);
@@ -1921,13 +1972,16 @@ static int geni_i3c_probe(struct platform_device *pdev)
pm_runtime_use_autosuspend(gi3c->se.dev);
pm_runtime_enable(gi3c->se.dev);
+
ret = i3c_master_register(&gi3c->ctrlr, &pdev->dev,
&geni_i3c_master_ops, false);
-
if (ret)
return ret;
+ /* Enable hot-join IRQ as well */
+ geni_write_reg(~0u, gi3c->se.ibi_base, IBI_GEN_IRQ_EN);
GENI_SE_DBG(gi3c->ipcl, false, gi3c->se.dev, "I3C probed\n");
+
return ret;
}
@@ -1936,6 +1990,8 @@ static int geni_i3c_remove(struct platform_device *pdev)
struct geni_i3c_dev *gi3c = platform_get_drvdata(pdev);
int ret = 0;
+ destroy_workqueue(gi3c->hj_wq);
+ wakeup_source_trash(&gi3c->hj_wl);
pm_runtime_disable(gi3c->se.dev);
ret = i3c_master_unregister(&gi3c->ctrlr);
if (gi3c->ipcl)
diff --git a/drivers/input/touchscreen/focaltech_touch/focaltech_common.h b/drivers/input/touchscreen/focaltech_touch/focaltech_common.h
index 67415c4..2065760 100644
--- a/drivers/input/touchscreen/focaltech_touch/focaltech_common.h
+++ b/drivers/input/touchscreen/focaltech_touch/focaltech_common.h
@@ -88,6 +88,7 @@
#define FTS_REG_IDE_PARA_STATUS 0xB6
#define FTS_REG_GLOVE_MODE_EN 0xC0
#define FTS_REG_COVER_MODE_EN 0xC1
+#define FTS_REG_REPORT_RATE 0x88
#define FTS_REG_CHARGER_MODE_EN 0x8B
#define FTS_REG_GESTURE_EN 0xD0
#define FTS_REG_GESTURE_OUTPUT_ADDRESS 0xD3
diff --git a/drivers/input/touchscreen/focaltech_touch/focaltech_core.c b/drivers/input/touchscreen/focaltech_touch/focaltech_core.c
index cf1a406..273d474 100644
--- a/drivers/input/touchscreen/focaltech_touch/focaltech_core.c
+++ b/drivers/input/touchscreen/focaltech_touch/focaltech_core.c
@@ -1271,11 +1271,11 @@ static int fb_notifier_callback(struct notifier_block *self,
}
blank = evdata->data;
- FTS_INFO("FB event:%lu,blank:%d", event, *blank);
+ FTS_DEBUG("FB event:%lu,blank:%d", event, *blank);
switch (*blank) {
case DRM_PANEL_BLANK_UNBLANK:
if (event == DRM_PANEL_EARLY_EVENT_BLANK) {
- FTS_INFO("resume: event = %lu, not care\n", event);
+ FTS_DEBUG("resume: event = %lu, not care\n", event);
} else if (event == DRM_PANEL_EVENT_BLANK) {
queue_work(fts_data->ts_workqueue, &fts_data->resume_work);
}
@@ -1286,12 +1286,12 @@ static int fb_notifier_callback(struct notifier_block *self,
cancel_work_sync(&fts_data->resume_work);
fts_ts_suspend(ts_data->dev);
} else if (event == DRM_PANEL_EVENT_BLANK) {
- FTS_INFO("suspend: event = %lu, not care\n", event);
+ FTS_DEBUG("suspend: event = %lu, not care\n", event);
}
break;
default:
- FTS_INFO("FB BLANK(%d) do not need process\n", *blank);
+ FTS_DEBUG("FB BLANK(%d) do not need process\n", *blank);
break;
}
diff --git a/drivers/input/touchscreen/focaltech_touch/focaltech_core.h b/drivers/input/touchscreen/focaltech_touch/focaltech_core.h
index 7603a64..c7a8bd3 100644
--- a/drivers/input/touchscreen/focaltech_touch/focaltech_core.h
+++ b/drivers/input/touchscreen/focaltech_touch/focaltech_core.h
@@ -156,7 +156,7 @@ struct fts_ts_data {
struct mutex bus_lock;
int irq;
int log_level;
- int fw_is_running; /* confirm fw is running when using spi:default 0 */
+ int fw_is_running; /* confirm fw is running when using spi:default 0 */
int dummy_byte;
bool suspended;
bool fw_loading;
@@ -165,7 +165,9 @@ struct fts_ts_data {
bool glove_mode;
bool cover_mode;
bool charger_mode;
- bool gesture_mode; /* gesture enable or disable, default: disable */
+ bool gesture_mode; /* gesture enable or disable, default: disable */
+ int report_rate;
+
/* multi-touch */
struct ts_event *events;
u8 *bus_tx_buf;
diff --git a/drivers/input/touchscreen/focaltech_touch/focaltech_ex_mode.c b/drivers/input/touchscreen/focaltech_touch/focaltech_ex_mode.c
index 4727744..54038f8 100644
--- a/drivers/input/touchscreen/focaltech_touch/focaltech_ex_mode.c
+++ b/drivers/input/touchscreen/focaltech_touch/focaltech_ex_mode.c
@@ -45,6 +45,7 @@ enum _ex_mode {
MODE_GLOVE = 0,
MODE_COVER,
MODE_CHARGER,
+ REPORT_RATE,
};
/*****************************************************************************
@@ -61,33 +62,30 @@ enum _ex_mode {
static int fts_ex_mode_switch(enum _ex_mode mode, u8 value)
{
int ret = 0;
- u8 m_val = 0;
-
- if (value)
- m_val = 0x01;
- else
- m_val = 0x00;
switch (mode) {
case MODE_GLOVE:
- ret = fts_write_reg(FTS_REG_GLOVE_MODE_EN, m_val);
- if (ret < 0) {
- FTS_ERROR("MODE_GLOVE switch to %d fail", m_val);
- }
+ ret = fts_write_reg(FTS_REG_GLOVE_MODE_EN, value > 0 ? 1 : 0);
+ if (ret < 0)
+ FTS_ERROR("MODE_GLOVE switch to %d fail", value);
break;
case MODE_COVER:
- ret = fts_write_reg(FTS_REG_COVER_MODE_EN, m_val);
- if (ret < 0) {
- FTS_ERROR("MODE_COVER switch to %d fail", m_val);
- }
+ ret = fts_write_reg(FTS_REG_COVER_MODE_EN, value > 0 ? 1 : 0);
+ if (ret < 0)
+ FTS_ERROR("MODE_COVER switch to %d fail", value);
break;
case MODE_CHARGER:
- ret = fts_write_reg(FTS_REG_CHARGER_MODE_EN, m_val);
- if (ret < 0) {
- FTS_ERROR("MODE_CHARGER switch to %d fail", m_val);
- }
+ ret = fts_write_reg(FTS_REG_CHARGER_MODE_EN, value > 0 ? 1 : 0);
+ if (ret < 0)
+ FTS_ERROR("MODE_CHARGER switch to %d fail", value);
+ break;
+
+ case REPORT_RATE:
+ ret = fts_write_reg(FTS_REG_REPORT_RATE, value);
+ if (ret < 0)
+ FTS_ERROR("REPORT_RATE switch to %d fail", value);
break;
default:
@@ -241,6 +239,47 @@ static ssize_t fts_charger_mode_store(
return count;
}
+static ssize_t fts_report_rate_show(
+ struct device *dev, struct device_attribute *attr, char *buf)
+{
+ int count = 0;
+ u8 val = 0;
+ struct fts_ts_data *ts_data = fts_data;
+ struct input_dev *input_dev = ts_data->input_dev;
+
+ mutex_lock(&input_dev->mutex);
+ fts_read_reg(FTS_REG_REPORT_RATE, &val);
+ count = scnprintf(buf + count, PAGE_SIZE, "Report Rate:%d\n",
+ ts_data->report_rate);
+ count += scnprintf(buf + count, PAGE_SIZE,
+ "Report Rate Reg(0x88):%d\n", val);
+ mutex_unlock(&input_dev->mutex);
+
+ return count;
+}
+
+static ssize_t fts_report_rate_store(
+ struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ int ret = 0;
+ struct fts_ts_data *ts_data = fts_data;
+ int rate;
+
+ ret = kstrtoint(buf, 16, &rate);
+ if (ret)
+ return ret;
+
+ if (rate != ts_data->report_rate) {
+ ret = fts_ex_mode_switch(REPORT_RATE, (u8)rate);
+ if (ret >= 0)
+ ts_data->report_rate = rate;
+ }
+
+ FTS_DEBUG("report rate:%d", ts_data->report_rate);
+ return count;
+}
+
/* read and write charger mode
* read example: cat fts_glove_mode ---read glove mode
@@ -255,10 +294,13 @@ static DEVICE_ATTR(fts_cover_mode, S_IRUGO | S_IWUSR,
static DEVICE_ATTR(fts_charger_mode, S_IRUGO | S_IWUSR,
fts_charger_mode_show, fts_charger_mode_store);
+static DEVICE_ATTR_RW(fts_report_rate);
+
static struct attribute *fts_touch_mode_attrs[] = {
&dev_attr_fts_glove_mode.attr,
&dev_attr_fts_cover_mode.attr,
&dev_attr_fts_charger_mode.attr,
+ &dev_attr_fts_report_rate.attr,
NULL,
};
@@ -280,6 +322,9 @@ int fts_ex_mode_recovery(struct fts_ts_data *ts_data)
fts_ex_mode_switch(MODE_CHARGER, ENABLE);
}
+ if (ts_data->report_rate > 0)
+ fts_ex_mode_switch(REPORT_RATE, ts_data->report_rate);
+
return 0;
}
@@ -290,6 +335,7 @@ int fts_ex_mode_init(struct fts_ts_data *ts_data)
ts_data->glove_mode = DISABLE;
ts_data->cover_mode = DISABLE;
ts_data->charger_mode = DISABLE;
+ ts_data->report_rate = 0;
ret = sysfs_create_group(&ts_data->dev->kobj, &fts_touch_mode_group);
if (ret < 0) {
diff --git a/drivers/input/touchscreen/focaltech_touch/focaltech_flash.c b/drivers/input/touchscreen/focaltech_touch/focaltech_flash.c
index 51c5622..05d93a5 100644
--- a/drivers/input/touchscreen/focaltech_touch/focaltech_flash.c
+++ b/drivers/input/touchscreen/focaltech_touch/focaltech_flash.c
@@ -1862,7 +1862,7 @@ static int fts_fwupg_get_fw_file(struct fts_upgrade *upg)
upg->lic = upg->fw;
upg->lic_length = upg->fw_length;
- FTS_INFO("upgrade fw file len:%d", upg->fw_length);
+ FTS_DEBUG("upgrade fw file len:%d", upg->fw_length);
if ((upg->fw_length < FTS_MIN_LEN)
|| (upg->fw_length > FTS_MAX_LEN_FILE)) {
FTS_ERROR("fw file len(%d) fail", upg->fw_length);
@@ -1898,7 +1898,7 @@ static void fts_fwupg_work(struct work_struct *work)
return ;
#endif
- FTS_INFO("fw upgrade work function");
+ FTS_DEBUG("fw upgrade work function");
if (!upg || !upg->ts_data) {
FTS_ERROR("upg/ts_data is null");
return ;
diff --git a/drivers/input/touchscreen/nt36xxx/nt36xxx.c b/drivers/input/touchscreen/nt36xxx/nt36xxx.c
index 65af379..de496cc 100644
--- a/drivers/input/touchscreen/nt36xxx/nt36xxx.c
+++ b/drivers/input/touchscreen/nt36xxx/nt36xxx.c
@@ -1486,7 +1486,9 @@ nvt_flash_proc_deinit();
}
err_input_dev_alloc_failed:
err_chipvertrim_failed:
+ nvt_gpio_deconfig(ts);
err_gpio_config_failed:
+ NVT_ERR("ret = %d\n", ret);
return ret;
}
@@ -1626,7 +1628,8 @@ static int32_t nvt_ts_remove(struct i2c_client *client)
#endif
#if WAKEUP_GESTURE
- device_init_wakeup(&ts->input_dev->dev, 0);
+ if (ts->input_dev)
+ device_init_wakeup(&ts->input_dev->dev, 0);
#endif
nvt_irq_enable(false);
@@ -1696,7 +1699,8 @@ static void nvt_ts_shutdown(struct i2c_client *client)
#endif
#if WAKEUP_GESTURE
- device_init_wakeup(&ts->input_dev->dev, 0);
+ if (ts->input_dev)
+ device_init_wakeup(&ts->input_dev->dev, 0);
#endif
}
diff --git a/drivers/input/touchscreen/nt36xxx/nt36xxx.h b/drivers/input/touchscreen/nt36xxx/nt36xxx.h
index 438c28b..eef848b 100644
--- a/drivers/input/touchscreen/nt36xxx/nt36xxx.h
+++ b/drivers/input/touchscreen/nt36xxx/nt36xxx.h
@@ -78,7 +78,7 @@ extern const uint16_t touch_key_array[TOUCH_KEY_NUM];
#define NVT_TOUCH_EXT_PROC 1
#define NVT_TOUCH_MP 1
#define MT_PROTOCOL_B 1
-#define WAKEUP_GESTURE 1
+#define WAKEUP_GESTURE 0
#if WAKEUP_GESTURE
extern const uint16_t gesture_key_array[];
#endif
diff --git a/drivers/input/touchscreen/nt36xxx/nt36xxx_mp_ctrlram.c b/drivers/input/touchscreen/nt36xxx/nt36xxx_mp_ctrlram.c
index c14b0c8..7aaf12d 100644
--- a/drivers/input/touchscreen/nt36xxx/nt36xxx_mp_ctrlram.c
+++ b/drivers/input/touchscreen/nt36xxx/nt36xxx_mp_ctrlram.c
@@ -1054,7 +1054,7 @@ const struct seq_operations nvt_selftest_seq_ops = {
static int32_t nvt_selftest_open(struct inode *inode, struct file *file)
{
struct device_node *np = ts->client->dev.of_node;
- unsigned char mpcriteria[32] = {0}; //novatek-mp-criteria-default
+ unsigned char mpcriteria[64] = {0}; //novatek-mp-criteria-default
TestResult_Short = 0;
TestResult_Open = 0;
@@ -1093,7 +1093,8 @@ static int32_t nvt_selftest_open(struct inode *inode, struct file *file)
* Ex. nvt_pid = 500A
* mpcriteria = "novatek-mp-criteria-500A"
*/
- snprintf(mpcriteria, PAGE_SIZE, "novatek-mp-criteria-%04X", ts->nvt_pid);
+ snprintf(mpcriteria, sizeof(mpcriteria),
+ "novatek-mp-criteria-%04X", ts->nvt_pid);
if (nvt_mp_parse_dt(np, mpcriteria)) {
mutex_unlock(&ts->lock);
diff --git a/drivers/input/touchscreen/synaptics_dsx/synaptics_dsx_core.c b/drivers/input/touchscreen/synaptics_dsx/synaptics_dsx_core.c
index 2206dc0..70cfa21 100644
--- a/drivers/input/touchscreen/synaptics_dsx/synaptics_dsx_core.c
+++ b/drivers/input/touchscreen/synaptics_dsx/synaptics_dsx_core.c
@@ -3,7 +3,7 @@
*
* Copyright (C) 2012-2016 Synaptics Incorporated. All rights reserved.
*
- * Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2018-2020 The Linux Foundation. All rights reserved.
* Copyright (C) 2012 Alexandra Chin <alexandra.chin@tw.synaptics.com>
* Copyright (C) 2012 Scott Lin <scott.lin@tw.synaptics.com>
*
@@ -3637,7 +3637,7 @@ static int synaptics_rmi4_gpio_setup(int gpio, bool config, int dir, int state)
unsigned char buf[16];
if (config) {
- snprintf(buf, PAGE_SIZE, "dsx_gpio_%u\n", gpio);
+ snprintf(buf, sizeof(buf), "dsx_gpio_%u\n", gpio);
retval = gpio_request(gpio, buf);
if (retval) {
diff --git a/drivers/input/touchscreen/synaptics_tcm/synaptics_tcm_core.c b/drivers/input/touchscreen/synaptics_tcm/synaptics_tcm_core.c
index 7e9b9a3..f095624 100644
--- a/drivers/input/touchscreen/synaptics_tcm/synaptics_tcm_core.c
+++ b/drivers/input/touchscreen/synaptics_tcm/synaptics_tcm_core.c
@@ -56,6 +56,10 @@
#define SYNA_LOAD_MAX_UA 30000
+#define SYNA_VDD_VTG_MIN_UV 1800000
+
+#define SYNA_VDD_VTG_MAX_UV 2000000
+
#define NOTIFIER_PRIORITY 2
#define RESPONSE_TIMEOUT_MS 3000
@@ -1917,6 +1921,22 @@ static int syna_tcm_enable_regulator(struct syna_tcm_hcd *tcm_hcd, bool en)
}
if (tcm_hcd->bus_reg) {
+ retval = regulator_set_voltage(tcm_hcd->bus_reg,
+ SYNA_VDD_VTG_MIN_UV, SYNA_VDD_VTG_MAX_UV);
+ if (retval) {
+ LOGE(tcm_hcd->pdev->dev.parent,
+ "set bus regulator voltage failed\n");
+ goto exit;
+ }
+
+ retval = regulator_set_load(tcm_hcd->bus_reg,
+ SYNA_LOAD_MAX_UA);
+ if (retval) {
+ LOGE(tcm_hcd->pdev->dev.parent,
+ "set bus regulator load failed\n");
+ goto exit;
+ }
+
retval = regulator_enable(tcm_hcd->bus_reg);
if (retval < 0) {
LOGE(tcm_hcd->pdev->dev.parent,
@@ -1965,8 +1985,12 @@ static int syna_tcm_enable_regulator(struct syna_tcm_hcd *tcm_hcd, bool en)
}
disable_bus_reg:
- if (tcm_hcd->bus_reg)
+ if (tcm_hcd->bus_reg) {
+ regulator_set_load(tcm_hcd->bus_reg, 0);
+ regulator_set_voltage(tcm_hcd->bus_reg, 0,
+ SYNA_VDD_VTG_MAX_UV);
regulator_disable(tcm_hcd->bus_reg);
+ }
exit:
return retval;
@@ -2869,7 +2893,8 @@ static int syna_tcm_resume(struct device *dev)
if (!tcm_hcd->init_okay)
syna_tcm_deferred_probe(dev);
- else if (!tcm_hcd->in_suspend)
+
+ if (!tcm_hcd->in_suspend)
return 0;
else {
if (tcm_hcd->irq_enabled) {
@@ -2879,6 +2904,12 @@ static int syna_tcm_resume(struct device *dev)
}
}
+ retval = syna_tcm_enable_regulator(tcm_hcd, true);
+ if (retval < 0) {
+ LOGE(tcm_hcd->pdev->dev.parent,
+ "Failed to enable regulators\n");
+ }
+
retval = pinctrl_select_state(
tcm_hcd->ts_pinctrl,
tcm_hcd->pinctrl_state_active);
@@ -2973,6 +3004,7 @@ static int syna_tcm_suspend(struct device *dev)
{
struct syna_tcm_module_handler *mod_handler;
struct syna_tcm_hcd *tcm_hcd = dev_get_drvdata(dev);
+ int retval;
if (tcm_hcd->in_suspend || !tcm_hcd->init_okay)
return 0;
@@ -2995,11 +3027,17 @@ static int syna_tcm_suspend(struct device *dev)
}
}
+ retval = syna_tcm_enable_regulator(tcm_hcd, false);
+ if (retval < 0) {
+ LOGE(tcm_hcd->pdev->dev.parent,
+ "Failed to disable regulators\n");
+ }
+
mutex_unlock(&mod_pool.mutex);
tcm_hcd->in_suspend = true;
- return 0;
+ return retval;
}
#endif
@@ -3694,11 +3732,6 @@ static int syna_tcm_remove(struct platform_device *pdev)
return 0;
}
-static void syna_tcm_shutdown(struct platform_device *pdev)
-{
- syna_tcm_remove(pdev);
-}
-
#ifdef CONFIG_PM
static const struct dev_pm_ops syna_tcm_dev_pm_ops = {
#if !defined(CONFIG_DRM) && !defined(CONFIG_FB)
@@ -3718,7 +3751,6 @@ static struct platform_driver syna_tcm_driver = {
},
.probe = syna_tcm_probe,
.remove = syna_tcm_remove,
- .shutdown = syna_tcm_shutdown,
};
static int __init syna_tcm_module_init(void)
diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
index 1e30fc6..4a2bc3f 100644
--- a/drivers/iommu/arm-smmu.c
+++ b/drivers/iommu/arm-smmu.c
@@ -417,7 +417,6 @@ struct qsmmuv500_archdata {
u32 actlr_tbl_size;
u32 testbus_version;
};
-
#define get_qsmmuv500_archdata(smmu) \
((struct qsmmuv500_archdata *)(smmu->archdata))
@@ -491,6 +490,7 @@ static int arm_smmu_setup_default_domain(struct device *dev,
struct iommu_domain *domain);
static int __arm_smmu_domain_set_attr(struct iommu_domain *domain,
enum iommu_attr attr, void *data);
+struct iommu_device *get_iommu_by_fwnode(struct fwnode_handle *fwnode);
static struct arm_smmu_domain *to_smmu_domain(struct iommu_domain *dom)
{
@@ -585,12 +585,37 @@ static void arm_smmu_secure_domain_unlock(struct arm_smmu_domain *smmu_domain)
mutex_unlock(&smmu_domain->assign_lock);
}
+
+static struct qsmmuv500_tbu_device *qsmmuv500_find_tbu(
+ struct arm_smmu_device *smmu, u32 sid)
+{
+ struct qsmmuv500_tbu_device *tbu = NULL;
+ struct qsmmuv500_archdata *data = get_qsmmuv500_archdata(smmu);
+
+ list_for_each_entry(tbu, &data->tbus, list) {
+ if (tbu->sid_start <= sid &&
+ sid < tbu->sid_start + tbu->num_sids)
+ return tbu;
+ }
+ return NULL;
+}
+
+static bool selftest_running;
#ifdef CONFIG_ARM_SMMU_SELFTEST
+struct sme_pair {
+ u32 num_smrs;
+ struct arm_smmu_smr *smrs;
+};
static int selftest;
module_param_named(selftest, selftest, int, 0644);
static int irq_count;
+#define MAXLEN 1000
+static char selftestsids[MAXLEN];
+module_param_string(selftestsids, selftestsids, sizeof(selftestsids), 0644);
+
+
static DECLARE_WAIT_QUEUE_HEAD(wait_int);
static irqreturn_t arm_smmu_cf_selftest(int irq, void *cb_base)
{
@@ -681,10 +706,273 @@ static void arm_smmu_interrupt_selftest(struct arm_smmu_device *smmu)
WARN_ON(cb_count != irq_count);
irq_count = 0;
}
+
+static int arm_smmu_find_sme(struct arm_smmu_smr *, u32, u16, u16);
+
+static int arm_smmu_run_atos(struct device *dev)
+{
+ dma_addr_t iova;
+ phys_addr_t phys, output, phys_soft;
+ struct page *page = NULL;
+ struct iommu_domain *domain;
+ int ret = 0;
+
+ page = alloc_page(GFP_KERNEL);
+ if (!page) {
+ dev_err(dev, "Unable to allocate memory\n");
+ return -ENOMEM;
+ }
+ phys = page_to_phys(page);
+
+ domain = iommu_get_domain_for_dev(dev);
+ domain->is_debug_domain = true;
+
+ iova = 0x1000;
+ if (iommu_map(domain, iova, phys, SZ_4K,
+ IOMMU_READ | IOMMU_WRITE)) {
+ dev_err(dev, "Mapping failed\n");
+ goto out_detach;
+ }
+
+ output = iommu_iova_to_phys_hard(domain, iova, IOMMU_TRANS_DEFAULT);
+ if (!output || output != phys) {
+ phys_soft = arm_smmu_iova_to_phys(domain, iova);
+ dev_err(dev, "atos is failed, output : %pa\n", &output);
+ dev_err(dev, "soft iova-to-phys : %pa\n", &phys_soft);
+ } else
+ dev_err(dev, "atos succeeded, output : %pa\n", &output);
+
+ iommu_unmap(domain, iova, SZ_4K);
+out_detach:
+ __free_pages(page, 0);
+ return ret;
+}
+
+static int of_iommu_do_atos(struct device *dev, struct sme_pair *sme,
+ struct of_phandle_args *iommu_spec)
+{
+ u16 i;
+ int err = 0;
+ bool set_iommu_ops = false;
+ const struct iommu_ops *ops = NULL;
+
+ for (i = 0; i < sme->num_smrs; ++i) {
+ struct arm_smmu_smr *smr;
+
+ smr = &sme->smrs[i];
+ if (!smr->valid) {
+ dev_info(dev, "Can't run atos smr idx %d\n", i);
+ continue;
+ }
+
+ iommu_spec->args[0] = smr->id;
+ iommu_spec->args[1] = smr->mask;
+
+ dev_dbg(dev, "ATOS for : SID 0x%x, MASK 0x%x\n",
+ iommu_spec->args[0], iommu_spec->args[1]);
+
+ err = of_iommu_fill_fwspec(dev, iommu_spec);
+ if (err) {
+ dev_err(dev, "Failed to do the of_iommu_xlate\n");
+ break;
+ }
+
+ ops = dev->iommu_fwspec->ops;
+ if (!platform_bus_type.iommu_ops) {
+ platform_bus_type.iommu_ops = ops;
+ set_iommu_ops = true;
+ }
+
+ if (ops && ops->add_device && dev->bus && !dev->iommu_group)
+ err = ops->add_device(dev);
+ if (err) {
+ dev_err(dev, "Adding to IOMMU failed: %d\n", err);
+ return err;
+ }
+
+ /* Now we have everything. Run ATOS. */
+ arm_smmu_run_atos(dev);
+
+ if (ops->remove_device && dev->iommu_group)
+ ops->remove_device(dev);
+
+ if (set_iommu_ops)
+ platform_bus_type.iommu_ops = NULL;
+ }
+
+ return err;
+}
+
+static bool arm_smmu_valid_smr(struct arm_smmu_device *smmu, u32 idx,
+ u32 sid, u32 mask)
+{
+ u32 smr1, smr2;
+ void __iomem *gr0_smr = ARM_SMMU_GR0(smmu) + ARM_SMMU_GR0_SMR(idx);
+
+
+ smr1 = SMR_VALID | sid << SMR_ID_SHIFT | mask << SMR_MASK_SHIFT;
+ writel_relaxed(smr1, gr0_smr);
+ smr2 = readl_relaxed(gr0_smr);
+ writel_relaxed(0, gr0_smr);
+
+ return smr1 == smr2;
+}
+
+static int get_atos_selftest_sids(struct arm_smmu_device *smmu,
+ struct sme_pair *sme)
+{
+ struct device *dev = smmu->dev;
+ struct arm_smmu_smr *smrs = smmu->smrs;
+ struct arm_smmu_smr *selftest_smrs;
+ enum arm_smmu_implementation model;
+ struct qsmmuv500_tbu_device *tbu;
+ int i, idx, sid_count, ret = 0;
+ char *name, *buf, *split, *sid, *buf_start;
+
+ buf = kstrdup(selftestsids, GFP_KERNEL);
+ buf_start = buf;
+
+ while (buf) {
+ name = strsep(&buf, ",");
+
+ if (strnstr(dev_name(dev), name, strlen(dev_name(dev)))) {
+ kstrtoint(strsep(&buf, ","), 0, &sid_count);
+
+ if (sid_count <= 0) {
+ dev_err(smmu->dev, "Invalid sid_count : %d\n",
+ sid_count);
+ goto out;
+ }
+
+ sme->smrs = kcalloc(sid_count, sizeof(*smmu->smrs),
+ GFP_KERNEL);
+ if (!sme->smrs) {
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ selftest_smrs = sme->smrs;
+ for (i = 0; i < sid_count; i++) {
+ split = strsep(&buf, ",");
+ sid = strsep(&split, ":");
+ if (!split) {
+ ret = -EINVAL;
+ goto invalid_format;
+ }
+ kstrtou16(sid, 0,
+ &selftest_smrs[i].id);
+ kstrtou16(split, 0, &selftest_smrs[i].mask);
+ }
+
+ sme->num_smrs = sid_count;
+ for (i = 0; i < sid_count; i++) {
+ mutex_lock(&smmu->stream_map_mutex);
+ idx = arm_smmu_find_sme(smrs,
+ smmu->num_mapping_groups,
+ selftest_smrs[i].id,
+ selftest_smrs[i].mask);
+ mutex_unlock(&smmu->stream_map_mutex);
+
+ if (idx < 0) {
+ selftest_smrs[i].valid = false;
+ } else if ((idx >= 0) && smrs &&
+ (smrs[idx].valid)) {
+ dev_err(dev,
+ "sid : 0x%x is already present at idx = %d choose a different sid\n",
+ selftest_smrs[i].id, idx);
+ selftest_smrs[i].valid = false;
+ } else {
+ if (!arm_smmu_valid_smr(smmu, idx,
+ selftest_smrs[i].id,
+ selftest_smrs[i].mask))
+ selftest_smrs[i].valid = false;
+ else
+ selftest_smrs[i].valid = true;
+ }
+ model = smmu->model;
+ switch (model) {
+ case QCOM_SMMUV500:
+ tbu = qsmmuv500_find_tbu(smmu,
+ selftest_smrs[i].id);
+ if (!tbu)
+ break;
+ dev_info(tbu->dev, "idx = %d valid: %d, sid : 0x%x, mask: 0x%x\n",
+ idx,
+ selftest_smrs[i].valid,
+ selftest_smrs[i].id,
+ selftest_smrs[i].mask);
+ break;
+ case QCOM_SMMUV2:
+ dev_info(smmu->dev, "idx = %d valid: %d, sid : 0x%x, mask: 0x%x\n",
+ idx,
+ selftest_smrs[i].valid,
+ selftest_smrs[i].id,
+ selftest_smrs[i].mask);
+ break;
+ default:
+ ret = -EINVAL;
+ goto out;
+ }
+ }
+ }
+ }
+ ret = sid_count;
+ goto out;
+
+invalid_format:
+ dev_err(smmu->dev, "Invalid Format : <%s> Expected Format : <smmu_name,sid_count,sid:mask>\n",
+ selftestsids);
+ kfree(sme->smrs);
+out:
+ kfree(buf_start);
+ return ret;
+}
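For illustration only, the selftestsids string parsed above follows the <smmu_name,sid_count,sid:mask,...> format named in the error message; the device name and stream IDs in this sketch are invented, not taken from the patch.

/*
 * Hypothetical value:
 *
 *   selftestsids="apps-smmu,2,0x880:0x0,0x8c0:0x7"
 *
 * get_atos_selftest_sids() would match any SMMU whose dev_name()
 * contains "apps-smmu", read sid_count = 2, and build two
 * arm_smmu_smr entries {id 0x880, mask 0x0} and {id 0x8c0, mask 0x7},
 * marking each valid only if a free, programmable SMR slot exists.
 */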
+static void arm_smmu_atos_selftest(struct arm_smmu_device *smmu)
+{
+ struct platform_device *pdev;
+ struct device *smmu_dev = smmu->dev;
+ struct device *atos_dev;
+ struct of_phandle_args iommu_spec = {0};
+ struct sme_pair sme = {0};
+ int ret;
+
+ if (!selftest)
+ return;
+
+ dev_notice(smmu_dev, "ATOS Self test started\n");
+ ret = get_atos_selftest_sids(smmu, &sme);
+ if (ret <= 0) {
+ dev_err(smmu_dev, "ATOS Self test failed ret %d!!\n", ret);
+ return;
+ }
+
+ pdev = platform_device_register_simple("atos_test_device",
+ -1, NULL, 0);
+ if (IS_ERR(pdev)) {
+ dev_err(smmu_dev, "Unable to create an atos test device\n");
+ return;
+ }
+
+ atos_dev = &pdev->dev;
+
+ /* try to fill the iommu_fwspec to use. */
+ iommu_spec.np = of_node_get(smmu_dev->of_node);
+ iommu_spec.args_count = (smmu->model == QCOM_SMMUV2) ? 1 : 2;
+
+ selftest_running = true;
+ of_iommu_do_atos(atos_dev, &sme, &iommu_spec);
+ selftest_running = false;
+ dev_notice(smmu_dev, "ATOS Self test complete\n");
+ kfree(sme.smrs);
+ of_node_put(iommu_spec.np);
+ platform_device_unregister(pdev);
+}
#else
static void arm_smmu_interrupt_selftest(struct arm_smmu_device *smmu)
{
}
+
+static void arm_smmu_atos_selftest(struct arm_smmu_device *smmu)
+{
+}
#endif
/*
@@ -1156,20 +1444,6 @@ static void arm_smmu_domain_power_off(struct iommu_domain *domain,
arm_smmu_power_off(smmu->pwr);
}
-static struct qsmmuv500_tbu_device *qsmmuv500_find_tbu(
- struct arm_smmu_device *smmu, u32 sid)
-{
- struct qsmmuv500_tbu_device *tbu = NULL;
- struct qsmmuv500_archdata *data = get_qsmmuv500_archdata(smmu);
-
- list_for_each_entry(tbu, &data->tbus, list) {
- if (tbu->sid_start <= sid &&
- sid < tbu->sid_start + tbu->num_sids)
- return tbu;
- }
- return NULL;
-}
-
static void arm_smmu_testbus_dump(struct arm_smmu_device *smmu, u16 sid)
{
if (smmu->model == QCOM_SMMUV500 &&
@@ -2616,9 +2890,9 @@ static void arm_smmu_test_smr_masks(struct arm_smmu_device *smmu)
smmu->smr_mask_mask = smr >> SMR_MASK_SHIFT;
}
-static int arm_smmu_find_sme(struct arm_smmu_device *smmu, u16 id, u16 mask)
+static int arm_smmu_find_sme(struct arm_smmu_smr *smrs, u32 count, u16 id,
+ u16 mask)
{
- struct arm_smmu_smr *smrs = smmu->smrs;
int i, free_idx = -ENOSPC;
/* Stream indexing is blissfully easy */
@@ -2626,7 +2900,7 @@ static int arm_smmu_find_sme(struct arm_smmu_device *smmu, u16 id, u16 mask)
return id;
/* Validating SMRs is... less so */
- for (i = 0; i < smmu->num_mapping_groups; ++i) {
+ for (i = 0; i < count; ++i) {
if (!smrs[i].valid) {
/*
* Note the first free entry we come across, which
@@ -2691,7 +2965,8 @@ static int arm_smmu_master_alloc_smes(struct device *dev)
goto sme_err;
}
- ret = arm_smmu_find_sme(smmu, sid, mask);
+ ret = arm_smmu_find_sme(smrs, smmu->num_mapping_groups, sid,
+ mask);
if (ret < 0)
goto sme_err;
@@ -3486,7 +3761,8 @@ static phys_addr_t arm_smmu_iova_to_phys_hard(struct iommu_domain *domain,
struct arm_smmu_device *smmu = smmu_domain->smmu;
if (smmu->options & ARM_SMMU_OPT_DISABLE_ATOS)
- return 0;
+ if (!selftest_running)
+ return 0;
if (arm_smmu_power_on(smmu_domain->smmu->pwr))
return 0;
@@ -3613,13 +3889,28 @@ static int arm_smmu_add_device(struct device *dev)
if (ret)
goto out_free;
} else if (fwspec && fwspec->ops == &arm_smmu_ops) {
+ struct fwnode_handle *iommu_fwnode = fwspec->iommu_fwnode;
+
smmu = arm_smmu_get_by_fwnode(fwspec->iommu_fwnode);
- if (!smmu)
+ if (!smmu) {
+ if (IS_ENABLED(CONFIG_ARM_SMMU_SELFTEST)) {
+ struct iommu_device *iommu = NULL;
+
+ iommu = get_iommu_by_fwnode(iommu_fwnode);
+ smmu = iommu ? container_of(iommu, struct
+ arm_smmu_device,
+ iommu) : NULL;
+ if (smmu)
+ goto cont;
+ }
return -ENODEV;
+
+ }
} else {
return -ENODEV;
}
+cont:
ret = arm_smmu_power_on(smmu->pwr);
if (ret)
goto out_free;
@@ -5377,6 +5668,7 @@ static int arm_smmu_device_dt_probe(struct platform_device *pdev)
arm_smmu_device_reset(smmu);
arm_smmu_test_smr_masks(smmu);
arm_smmu_interrupt_selftest(smmu);
+ arm_smmu_atos_selftest(smmu);
arm_smmu_power_off(smmu->pwr);
/*
@@ -5657,7 +5949,6 @@ static void qsmmuv500_tbu_resume(struct qsmmuv500_tbu_device *tbu)
spin_unlock_irqrestore(&tbu->halt_lock, flags);
}
-
static int qsmmuv500_ecats_lock(struct arm_smmu_domain *smmu_domain,
struct qsmmuv500_tbu_device *tbu,
unsigned long *flags)
diff --git a/drivers/iommu/io-pgtable-fast.c b/drivers/iommu/io-pgtable-fast.c
index 31596c0..2908b56 100644
--- a/drivers/iommu/io-pgtable-fast.c
+++ b/drivers/iommu/io-pgtable-fast.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
- * Copyright (c) 2016-2017, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2016-2017, 2020, The Linux Foundation. All rights reserved.
*/
#define pr_fmt(fmt) "io-pgtable-fast: " fmt
@@ -739,7 +739,7 @@ static int __init av8l_fast_positive_testing(void)
}
/* sweep up TLB proving PTEs */
- av8l_fast_clear_stale_ptes(pmds, base, base, max, false);
+ av8l_fast_clear_stale_ptes(ops, base, base, max, false);
/* map the entire 4GB VA space with 8K map calls */
for (iova = base; iova < max; iova += SZ_8K) {
@@ -760,7 +760,7 @@ static int __init av8l_fast_positive_testing(void)
}
/* sweep up TLB proving PTEs */
- av8l_fast_clear_stale_ptes(pmds, base, base, max, false);
+ av8l_fast_clear_stale_ptes(ops, base, base, max, false);
/* map the entire 4GB VA space with 16K map calls */
for (iova = base; iova < max; iova += SZ_16K) {
@@ -781,7 +781,7 @@ static int __init av8l_fast_positive_testing(void)
}
/* sweep up TLB proving PTEs */
- av8l_fast_clear_stale_ptes(pmds, base, base, max, false);
+ av8l_fast_clear_stale_ptes(ops, base, base, max, false);
/* map the entire 4GB VA space with 64K map calls */
for (iova = base; iova < max; iova += SZ_64K) {
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index ee4d58e4..4b46809 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -102,6 +102,29 @@ int iommu_device_register(struct iommu_device *iommu)
return 0;
}
+#ifdef CONFIG_ARM_SMMU_SELFTEST
+struct iommu_device *get_iommu_by_fwnode(struct fwnode_handle *fwnode)
+{
+ struct iommu_device *iommu;
+
+ spin_lock(&iommu_device_lock);
+ list_for_each_entry(iommu, &iommu_device_list, list) {
+ if (iommu->fwnode == fwnode) {
+ spin_unlock(&iommu_device_lock);
+ return iommu;
+ }
+ }
+ spin_unlock(&iommu_device_lock);
+
+ return NULL;
+}
+#else
+struct iommu_device *get_iommu_by_fwnode(struct fwnode_handle *fwnode)
+{
+ return NULL;
+}
+#endif
+
void iommu_device_unregister(struct iommu_device *iommu)
{
spin_lock(&iommu_device_lock);
diff --git a/drivers/iommu/of_iommu.c b/drivers/iommu/of_iommu.c
index 0e0e88e..482ca65 100644
--- a/drivers/iommu/of_iommu.c
+++ b/drivers/iommu/of_iommu.c
@@ -222,3 +222,15 @@ const struct iommu_ops *of_iommu_configure(struct device *dev,
return ops;
}
+
+#ifdef CONFIG_ARM_SMMU_SELFTEST
+int of_iommu_fill_fwspec(struct device *dev, struct of_phandle_args *iommu_spec)
+{
+ return of_iommu_xlate(dev, iommu_spec);
+}
+#else
+int of_iommu_fill_fwspec(struct device *dev, struct of_phandle_args *iommu_spec)
+{
+ return 0;
+}
+#endif
diff --git a/drivers/leds/leds-qti-flash.c b/drivers/leds/leds-qti-flash.c
index 219621d..0e02e78 100644
--- a/drivers/leds/leds-qti-flash.c
+++ b/drivers/leds/leds-qti-flash.c
@@ -15,10 +15,13 @@
#include <linux/of_gpio.h>
#include <linux/of_irq.h>
#include <linux/platform_device.h>
+#include <linux/power_supply.h>
#include <linux/regmap.h>
#include "leds.h"
+#define FLASH_PERPH_SUBTYPE 0x05
+
#define FLASH_LED_STATUS1 0x06
#define FLASH_LED_STATUS2 0x07
@@ -55,10 +58,17 @@
#define FLASH_LED_HW_SW_STROBE_SEL BIT(2)
#define FLASH_LED_STROBE_SEL_SHIFT 2
+#define FLASH_LED_IBATT_OCP_THRESH_DEFAULT_UA 2500000
+#define FLASH_LED_RPARA_DEFAULT_UOHM 80000
+#define VPH_DROOP_THRESH_VAL_UV 3400000
+
#define FLASH_EN_LED_CTRL 0x4E
#define FLASH_LED_ENABLE(id) BIT(id)
#define FLASH_LED_DISABLE 0
+#define FORCE_TORCH_MODE 0x68
+#define FORCE_TORCH BIT(0)
+
#define MAX_IRES_LEVELS 2
#define IRES_12P5_MAX_CURR_MA 1500
#define IRES_5P0_MAX_CURR_MA 640
@@ -79,6 +89,11 @@ enum strobe_type {
HW_STROBE,
};
+enum pmic_type {
+ PM8350C,
+ PM2250,
+};
+
/* Configurations for each individual flash or torch device */
struct flash_node_data {
struct qti_flash_led *led;
@@ -106,6 +121,11 @@ struct flash_switch_data {
bool symmetry_en;
};
+struct pmic_data {
+ u8 max_channels;
+ int pmic_type;
+};
+
/**
* struct qti_flash_led: Main Flash LED data structure
* @pdev : Pointer for platform device
@@ -120,7 +140,6 @@ struct flash_switch_data {
* @all_ramp_down_done_irq : IRQ number for all ramp down interrupt
* @led_fault_irq : IRQ number for LED fault interrupt
* @base : Base address of the flash LED module
- * @max_channels : Maximum number of channels supported by flash module
* @ref_count : Reference count used to enable/disable flash LED
*/
struct qti_flash_led {
@@ -128,6 +147,10 @@ struct qti_flash_led {
struct regmap *regmap;
struct flash_node_data *fnode;
struct flash_switch_data *snode;
+ struct power_supply *usb_psy;
+ struct power_supply *main_psy;
+ struct power_supply *bms_psy;
+ struct pmic_data *data;
spinlock_t lock;
u32 num_fnodes;
u32 num_snodes;
@@ -135,10 +158,11 @@ struct qti_flash_led {
int all_ramp_up_done_irq;
int all_ramp_down_done_irq;
int led_fault_irq;
+ int ibatt_ocp_threshold_ua;
int max_current;
u16 base;
- u8 max_channels;
u8 ref_count;
+ u8 subtype;
};
static const u32 flash_led_max_ires_values[MAX_IRES_LEVELS] = {
@@ -226,6 +250,62 @@ static int qti_flash_led_masked_write(struct qti_flash_led *led,
return rc;
}
+static int is_main_psy_available(struct qti_flash_led *led)
+{
+ if (!led->main_psy) {
+ led->main_psy = power_supply_get_by_name("main");
+ if (!led->main_psy) {
+ pr_err_ratelimited("Couldn't get main_psy\n");
+ return -ENODEV;
+ }
+ }
+
+ return 0;
+}
+
+static int qti_flash_poll_vreg_ok(struct qti_flash_led *led)
+{
+ int rc, i;
+ union power_supply_propval pval = {0, };
+
+ if (led->data->pmic_type != PM2250)
+ return 0;
+
+ rc = is_main_psy_available(led);
+ if (rc < 0)
+ return rc;
+
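+ /* Poll vreg_ok up to 60 times at ~5 ms intervals (~300 ms) before giving up. */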
+ for (i = 0; i < 60; i++) {
+ /* wait for the flash vreg_ok to be set */
+ usleep_range(5000, 5500);
+
+ rc = power_supply_get_property(led->main_psy,
+ POWER_SUPPLY_PROP_FLASH_TRIGGER, &pval);
+ if (rc < 0) {
+ pr_err("main psy doesn't support reading prop %d rc = %d\n",
+ POWER_SUPPLY_PROP_FLASH_TRIGGER, rc);
+ return rc;
+ }
+
+ if (pval.intval > 0) {
+ pr_debug("Flash trigger set\n");
+ break;
+ }
+
+ if (pval.intval < 0) {
+ pr_err("Error during flash trigger %d\n", pval.intval);
+ return pval.intval;
+ }
+ }
+
+ if (!pval.intval) {
+ pr_err("Failed to enable the module\n");
+ return -ETIMEDOUT;
+ }
+
+ return 0;
+}
+
static int qti_flash_led_module_control(struct qti_flash_led *led,
bool enable)
{
@@ -241,6 +321,15 @@ static int qti_flash_led_module_control(struct qti_flash_led *led,
return rc;
}
+ val = FLASH_MODULE_DISABLE;
+ rc = qti_flash_poll_vreg_ok(led);
+ if (rc < 0) {
+ /* Disable the module */
+ rc = qti_flash_led_write(led, FLASH_ENABLE_CONTROL,
+ &val, 1);
+ return rc;
+ }
+
led->ref_count++;
} else {
if (led->ref_count)
@@ -336,6 +425,13 @@ static int qti_flash_led_enable(struct flash_node_data *fnode)
goto out;
}
+ if (fnode->type == FLASH_LED_TYPE_TORCH && led->subtype == 0x6) {
+ rc = qti_flash_led_masked_write(led, FORCE_TORCH_MODE,
+ FORCE_TORCH, FORCE_TORCH);
+ if (rc < 0)
+ goto out;
+ }
+
fnode->configured = true;
if ((fnode->strobe_sel == HW_STROBE) &&
@@ -368,6 +464,13 @@ static int qti_flash_led_disable(struct flash_node_data *fnode)
if (rc < 0)
goto out;
+ if (fnode->type == FLASH_LED_TYPE_TORCH && led->subtype == 0x6) {
+ rc = qti_flash_led_masked_write(led, FORCE_TORCH_MODE,
+ FORCE_TORCH, 0);
+ if (rc < 0)
+ goto out;
+ }
+
fnode->current_ma = 0;
out:
@@ -570,14 +673,189 @@ static void qti_flash_led_switch_brightness_set(
snode->enabled = state;
}
+static int is_usb_psy_available(struct qti_flash_led *led)
+{
+ if (!led->usb_psy) {
+ led->usb_psy = power_supply_get_by_name("usb");
+ if (!led->usb_psy) {
+ pr_err_ratelimited("Couldn't get usb_psy\n");
+ return -ENODEV;
+ }
+ }
+
+ return 0;
+}
+
+static int get_property_from_fg(struct qti_flash_led *led,
+ enum power_supply_property prop, int *val)
+{
+ int rc;
+ union power_supply_propval pval = {0, };
+
+ if (!led->bms_psy)
+ led->bms_psy = power_supply_get_by_name("bms");
+
+ if (!led->bms_psy) {
+ pr_err("no bms psy found\n");
+ return -EINVAL;
+ }
+
+ rc = power_supply_get_property(led->bms_psy, prop, &pval);
+ if (rc) {
+ pr_err("bms psy doesn't support reading prop %d rc = %d\n",
+ prop, rc);
+ return rc;
+ }
+
+ *val = pval.intval;
+
+ return rc;
+}
+
+#define UCONV 1000000LL
+#define MCONV 1000LL
+#define CHGBST_EFFICIENCY 800LL
+#define CHGBST_FLASH_VDIP_MARGIN 10000
+#define VIN_FLASH_UV 5000000
+#define VIN_FLASH_RANGE_1 4250000
+#define VIN_FLASH_RANGE_2 4500000
+#define VIN_FLASH_RANGE_3 4750000
+#define OCV_RANGE_1 3800000
+#define OCV_RANGE_2 4100000
+#define OCV_RANGE_3 4350000
+#define BHARGER_FLASH_LED_MAX_TOTAL_CURRENT_MA 1000
+static int qti_flash_led_calc_bharger_max_current(struct qti_flash_led *led,
+ int *max_current)
+{
+ union power_supply_propval pval = {0, };
+ int ocv_uv, ibat_now, flash_led_max_total_curr_ma, rc;
+ int rbatt_uohm = 0, usb_present, otg_enable;
+ int64_t ibat_flash_ua, avail_flash_ua, avail_flash_power_fw;
+ int64_t ibat_safe_ua, vin_flash_uv, vph_flash_uv, vph_flash_vdip;
+
+ if (led->data->pmic_type != PM2250)
+ return 0;
+
+ rc = is_usb_psy_available(led);
+ if (rc < 0)
+ return rc;
+
+ rc = power_supply_get_property(led->usb_psy, POWER_SUPPLY_PROP_SCOPE,
+ &pval);
+ if (rc < 0) {
+ pr_err("usb psy does not support usb present, rc=%d\n", rc);
+ return rc;
+ }
+ otg_enable = pval.intval;
+
+ /* RESISTANCE = esr_uohm + rslow_uohm */
+ rc = get_property_from_fg(led, POWER_SUPPLY_PROP_RESISTANCE,
+ &rbatt_uohm);
+ if (rc < 0) {
+ pr_err("bms psy does not support resistance, rc=%d\n", rc);
+ return rc;
+ }
+
+ /* If no battery is connected, return max possible flash current */
+ if (!rbatt_uohm) {
+ *max_current = BHARGER_FLASH_LED_MAX_TOTAL_CURRENT_MA;
+ return 0;
+ }
+
+ rc = get_property_from_fg(led, POWER_SUPPLY_PROP_VOLTAGE_OCV, &ocv_uv);
+ if (rc < 0) {
+ pr_err("bms psy does not support OCV, rc=%d\n", rc);
+ return rc;
+ }
+
+ rc = get_property_from_fg(led, POWER_SUPPLY_PROP_CURRENT_NOW,
+ &ibat_now);
+ if (rc < 0) {
+ pr_err("bms psy does not support current, rc=%d\n", rc);
+ return rc;
+ }
+
+ rc = power_supply_get_property(led->usb_psy, POWER_SUPPLY_PROP_PRESENT,
+ &pval);
+ if (rc < 0) {
+ pr_err("usb psy does not support usb present, rc=%d\n", rc);
+ return rc;
+ }
+ usb_present = pval.intval;
+
+ rbatt_uohm += FLASH_LED_RPARA_DEFAULT_UOHM;
+
+ vph_flash_vdip = VPH_DROOP_THRESH_VAL_UV + CHGBST_FLASH_VDIP_MARGIN;
+
+ /*
+ * Calculate the maximum current that can be pulled out of the battery
+ * before the battery voltage dips below a safe threshold.
+ */
+ ibat_safe_ua = div_s64((ocv_uv - vph_flash_vdip) * UCONV,
+ rbatt_uohm);
+
+ if (ibat_safe_ua <= led->ibatt_ocp_threshold_ua) {
+ /*
+ * If the calculated current is below the OCP threshold, then
+ * use it as the possible flash current.
+ */
+ ibat_flash_ua = ibat_safe_ua - ibat_now;
+ vph_flash_uv = vph_flash_vdip;
+ } else {
+ /*
+ * If the calculated current is above the OCP threshold, then
+ * use the ocp threshold instead.
+ *
+ * Any higher current would trip the battery OCP.
+ */
+ ibat_flash_ua = led->ibatt_ocp_threshold_ua - ibat_now;
+ vph_flash_uv = ocv_uv - div64_s64((int64_t)rbatt_uohm
+ * led->ibatt_ocp_threshold_ua, UCONV);
+ }
+
+ /* when USB is present or OTG is enabled, VIN_FLASH is always at 5V */
+ if (usb_present || (otg_enable == POWER_SUPPLY_SCOPE_SYSTEM))
+ vin_flash_uv = VIN_FLASH_UV;
+ else if (ocv_uv <= OCV_RANGE_1)
+ vin_flash_uv = VIN_FLASH_RANGE_1;
+ else if (ocv_uv > OCV_RANGE_1 && ocv_uv <= OCV_RANGE_2)
+ vin_flash_uv = VIN_FLASH_RANGE_2;
+ else if (ocv_uv > OCV_RANGE_2 && ocv_uv <= OCV_RANGE_3)
+ vin_flash_uv = VIN_FLASH_RANGE_3;
+
+ /* Calculate the available power for the flash module. */
+ avail_flash_power_fw = CHGBST_EFFICIENCY * vph_flash_uv * ibat_flash_ua;
+ /*
+ * Calculate the available amount of current the flash module can draw
+ * before collapsing the battery. (available power/ flash input voltage)
+ */
+ avail_flash_ua = div64_s64(avail_flash_power_fw, vin_flash_uv * MCONV);
+
+ flash_led_max_total_curr_ma = BHARGER_FLASH_LED_MAX_TOTAL_CURRENT_MA;
+ *max_current = min(flash_led_max_total_curr_ma,
+ (int)(div64_s64(avail_flash_ua, MCONV)));
+
+ pr_debug("avail_iflash=%lld, ocv=%d, ibat=%d, rbatt=%d,max_current=%lld, usb_present=%d, otg_enable = %d\n",
+ avail_flash_ua, ocv_uv, ibat_now, rbatt_uohm,
+ (*max_current * MCONV), usb_present, otg_enable);
+
+ return 0;
+}
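A worked example of the headroom calculation above, using invented battery figures (no USB present, OTG disabled); this is only a sketch of the arithmetic, not data from the patch.

/*
 * ocv_uv = 4,000,000; ibat_now = 200,000 uA; rbatt_uohm = 120,000,
 * plus FLASH_LED_RPARA_DEFAULT_UOHM (80,000) = 200,000 uohm.
 *
 *   vph_flash_vdip = 3,400,000 + 10,000 = 3,410,000 uV
 *   ibat_safe_ua   = (4,000,000 - 3,410,000) * 10^6 / 200,000
 *                  = 2,950,000 uA  (above the 2,500,000 uA OCP limit)
 *   ibat_flash_ua  = 2,500,000 - 200,000 = 2,300,000 uA
 *   vph_flash_uv   = 4,000,000 - (200,000 * 2,500,000) / 10^6
 *                  = 3,500,000 uV
 *   vin_flash_uv   = 4,500,000 uV   (3.8 V < OCV <= 4.1 V range)
 *   avail_flash_ua = 0.8 * 3,500,000 * 2,300,000 / 4,500,000
 *                  ~ 1,431,000 uA
 *
 * min(1000 mA, ~1431 mA) leaves *max_current = 1000 mA.
 */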
+
static ssize_t qti_flash_led_max_current_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct flash_switch_data *snode;
struct led_classdev *led_cdev = dev_get_drvdata(dev);
+ int rc = 0;
snode = container_of(led_cdev, struct flash_switch_data, cdev);
+ rc = qti_flash_led_calc_bharger_max_current(snode->led,
+ &snode->led->max_current);
+ if (rc < 0)
+ pr_err("Failed to query max avail current, rc=%d\n", rc);
+
return scnprintf(buf, PAGE_SIZE, "%d\n", snode->led->max_current);
}
@@ -586,6 +864,7 @@ int qti_flash_led_prepare(struct led_trigger *trig, int options,
{
struct led_classdev *led_cdev;
struct flash_switch_data *snode;
+ int rc = 0;
if (!trig) {
pr_err("Invalid led_trigger\n");
@@ -601,10 +880,33 @@ int qti_flash_led_prepare(struct led_trigger *trig, int options,
snode = container_of(led_cdev, struct flash_switch_data, cdev);
if (options & QUERY_MAX_AVAIL_CURRENT) {
- *max_current = snode->led->max_current;
+ if (!max_current) {
+ pr_err("Invalid max_current pointer\n");
+ return -EINVAL;
+ }
+
+ if (snode->led->data->pmic_type == PM2250) {
+ rc = qti_flash_led_calc_bharger_max_current(snode->led,
+ max_current);
+ if (rc < 0) {
+ pr_err("Failed to query max avail current, rc=%d\n",
+ rc);
+ *max_current = snode->led->max_current;
+ return rc;
+ }
+ } else {
+ *max_current = snode->led->max_current;
+ }
+
return 0;
}
+ if (options & ENABLE_REGULATOR)
+ return 0;
+
+ if (options & DISABLE_REGULATOR)
+ return 0;
+
return -EINVAL;
}
EXPORT_SYMBOL(qti_flash_led_prepare);
@@ -837,7 +1139,7 @@ static int register_switch_device(struct qti_flash_led *led,
pr_err("Failed to read led mask rc=%d\n", rc);
return rc;
}
- if ((snode->led_mask > ((1 << led->max_channels) - 1))) {
+ if ((snode->led_mask > ((1 << led->data->max_channels) - 1))) {
pr_err("Error, Invalid value for led-mask mask=0x%x\n",
snode->led_mask);
return -EINVAL;
@@ -1035,13 +1337,12 @@ static int qti_flash_led_register_device(struct qti_flash_led *led,
return rc;
}
led->base = val;
-
led->hw_strobe_gpio = devm_kcalloc(&led->pdev->dev,
- led->max_channels, sizeof(u32), GFP_KERNEL);
+ led->data->max_channels, sizeof(u32), GFP_KERNEL);
if (!led->hw_strobe_gpio)
return -ENOMEM;
- for (i = 0; i < led->max_channels; i++) {
+ for (i = 0; i < led->data->max_channels; i++) {
led->hw_strobe_gpio[i] = -EINVAL;
@@ -1153,6 +1454,15 @@ static int qti_flash_led_register_device(struct qti_flash_led *led,
}
}
+ led->ibatt_ocp_threshold_ua = FLASH_LED_IBATT_OCP_THRESH_DEFAULT_UA;
+ rc = of_property_read_u32(node, "qcom,ibatt-ocp-threshold-ua", &val);
+ if (!rc) {
+ led->ibatt_ocp_threshold_ua = val;
+ } else if (rc != -EINVAL) {
+ pr_err("Unable to parse ibatt_ocp threshold, rc=%d\n", rc);
+ return rc;
+ }
+
return 0;
unreg_led:
@@ -1178,9 +1488,9 @@ static int qti_flash_led_probe(struct platform_device *pdev)
return -EINVAL;
}
- led->max_channels = (u8)of_device_get_match_data(&pdev->dev);
- if (!led->max_channels) {
- pr_err("Failed to get max supported led channels\n");
+ led->data = (struct pmic_data *)of_device_get_match_data(&pdev->dev);
+ if (!led->data) {
+ pr_err("Failed to get max match_data\n");
return -EINVAL;
}
@@ -1193,6 +1503,12 @@ static int qti_flash_led_probe(struct platform_device *pdev)
return rc;
}
+ rc = qti_flash_led_read(led, FLASH_PERPH_SUBTYPE, &led->subtype, 1);
+ if (rc < 0) {
+ pr_err("Failed to read flash-perph subtype rc=%d\n", rc);
+ return rc;
+ }
+
rc = qti_flash_led_setup(led);
if (rc < 0) {
pr_err("Failed to initialize flash LED, rc=%d\n", rc);
@@ -1235,9 +1551,29 @@ static int qti_flash_led_remove(struct platform_device *pdev)
return 0;
}
+static const struct pmic_data data[] = {
+ [PM8350C] = {
+ .max_channels = 4,
+ .pmic_type = PM8350C,
+ },
+
+ [PM2250] = {
+ .max_channels = 1,
+ .pmic_type = PM2250,
+ },
+};
+
const static struct of_device_id qti_flash_led_match_table[] = {
- { .compatible = "qcom,pm8350c-flash-led", .data = (void *)4, },
- { .compatible = "qcom,pm2250-flash-led", .data = (void *)1, },
+ {
+ .compatible = "qcom,pm8350c-flash-led",
+ .data = &data[PM8350C],
+ },
+
+ {
+ .compatible = "qcom,pm2250-flash-led",
+ .data = &data[PM2250],
+ },
+
{ },
};
diff --git a/drivers/md/Kconfig b/drivers/md/Kconfig
index 850e669..3b0bdde 100644
--- a/drivers/md/Kconfig
+++ b/drivers/md/Kconfig
@@ -295,21 +295,22 @@
If unsure, say N.
config DM_DEFAULT_KEY
- tristate "Default-key crypt target support"
+ tristate "Default-key target support"
depends on BLK_DEV_DM
- depends on PFK
- ---help---
- This (currently Android-specific) device-mapper target allows you to
- create a device that assigns a default encryption key to bios that
- don't already have one. This can sit between inline cryptographic
- acceleration hardware and filesystems that use it. This ensures a
- default key is used when the filesystem doesn't explicitly specify a
- key, such as for filesystem metadata, leaving no sectors unencrypted.
+ depends on BLK_INLINE_ENCRYPTION
+ help
+ This device-mapper target allows you to create a device that
+ assigns a default encryption key to bios that aren't for the
+ contents of an encrypted file.
- To compile this code as a module, choose M here: the module will be
- called dm-default-key.
+ This ensures that all blocks on-disk will be encrypted with
+ some key, without the performance hit of file contents being
+ encrypted twice when fscrypt (File-Based Encryption) is used.
- If unsure, say N.
+ It is only appropriate to use dm-default-key when key
+ configuration is tightly controlled, like it is in Android,
+ such that all fscrypt keys are at least as hard to compromise
+ as the default key.
config DM_SNAPSHOT
tristate "Snapshot target"
diff --git a/drivers/md/dm-bow.c b/drivers/md/dm-bow.c
index 9323c7c..ee0e2b6 100644
--- a/drivers/md/dm-bow.c
+++ b/drivers/md/dm-bow.c
@@ -725,6 +725,7 @@ static int dm_bow_ctr(struct dm_target *ti, unsigned int argc, char **argv)
rb_insert_color(&br->node, &bc->ranges);
ti->discards_supported = true;
+ ti->may_passthrough_inline_crypto = true;
return 0;
diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 73b321b..62f7004 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -125,8 +125,7 @@ struct iv_tcw_private {
* and encrypts / decrypts at the same time.
*/
enum flags { DM_CRYPT_SUSPENDED, DM_CRYPT_KEY_VALID,
- DM_CRYPT_SAME_CPU, DM_CRYPT_NO_OFFLOAD,
- DM_CRYPT_ENCRYPT_OVERRIDE };
+ DM_CRYPT_SAME_CPU, DM_CRYPT_NO_OFFLOAD };
enum cipher_flags {
CRYPT_MODE_INTEGRITY_AEAD, /* Use authenticated mode for cihper */
@@ -2665,8 +2664,6 @@ static int crypt_ctr_optional(struct dm_target *ti, unsigned int argc, char **ar
cc->sector_shift = __ffs(cc->sector_size) - SECTOR_SHIFT;
} else if (!strcasecmp(opt_string, "iv_large_sectors"))
set_bit(CRYPT_IV_LARGE_SECTORS, &cc->cipher_flags);
- else if (!strcasecmp(opt_string, "allow_encrypt_override"))
- set_bit(DM_CRYPT_ENCRYPT_OVERRIDE, &cc->flags);
else {
ti->error = "Invalid feature arguments";
return -EINVAL;
@@ -2872,15 +2869,12 @@ static int crypt_map(struct dm_target *ti, struct bio *bio)
struct crypt_config *cc = ti->private;
/*
- * If bio is REQ_PREFLUSH, REQ_NOENCRYPT, or REQ_OP_DISCARD,
- * just bypass crypt queues.
+ * If bio is REQ_PREFLUSH or REQ_OP_DISCARD, just bypass crypt queues.
* - for REQ_PREFLUSH device-mapper core ensures that no IO is in-flight
* - for REQ_OP_DISCARD caller must use flush if IO ordering matters
*/
- if (unlikely(bio->bi_opf & REQ_PREFLUSH) ||
- (unlikely(bio->bi_opf & REQ_NOENCRYPT) &&
- test_bit(DM_CRYPT_ENCRYPT_OVERRIDE, &cc->flags)) ||
- bio_op(bio) == REQ_OP_DISCARD) {
+ if (unlikely(bio->bi_opf & REQ_PREFLUSH ||
+ bio_op(bio) == REQ_OP_DISCARD)) {
bio_set_dev(bio, cc->dev->bdev);
if (bio_sectors(bio))
bio->bi_iter.bi_sector = cc->start +
@@ -2967,8 +2961,6 @@ static void crypt_status(struct dm_target *ti, status_type_t type,
num_feature_args += test_bit(DM_CRYPT_NO_OFFLOAD, &cc->flags);
num_feature_args += cc->sector_size != (1 << SECTOR_SHIFT);
num_feature_args += test_bit(CRYPT_IV_LARGE_SECTORS, &cc->cipher_flags);
- num_feature_args += test_bit(DM_CRYPT_ENCRYPT_OVERRIDE,
- &cc->flags);
if (cc->on_disk_tag_size)
num_feature_args++;
if (num_feature_args) {
@@ -2985,8 +2977,6 @@ static void crypt_status(struct dm_target *ti, status_type_t type,
DMEMIT(" sector_size:%d", cc->sector_size);
if (test_bit(CRYPT_IV_LARGE_SECTORS, &cc->cipher_flags))
DMEMIT(" iv_large_sectors");
- if (test_bit(DM_CRYPT_ENCRYPT_OVERRIDE, &cc->flags))
- DMEMIT(" allow_encrypt_override");
}
break;
diff --git a/drivers/md/dm-default-key.c b/drivers/md/dm-default-key.c
index 8812dea..19be201 100644
--- a/drivers/md/dm-default-key.c
+++ b/drivers/md/dm-default-key.c
@@ -1,50 +1,187 @@
+// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2017 Google, Inc.
- *
- * This software is licensed under the terms of the GNU General Public
- * License version 2, as published by the Free Software Foundation, and
- * may be copied, distributed, and modified under those terms.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
*/
+#include <linux/blk-crypto.h>
#include <linux/device-mapper.h>
#include <linux/module.h>
-#include <linux/pfk.h>
-#define DM_MSG_PREFIX "default-key"
+#define DM_MSG_PREFIX "default-key"
+#define DM_DEFAULT_KEY_MAX_WRAPPED_KEY_SIZE 128
+
+static const struct dm_default_key_cipher {
+ const char *name;
+ enum blk_crypto_mode_num mode_num;
+ int key_size;
+} dm_default_key_ciphers[] = {
+ {
+ .name = "aes-xts-plain64",
+ .mode_num = BLK_ENCRYPTION_MODE_AES_256_XTS,
+ .key_size = 64,
+ }, {
+ .name = "xchacha12,aes-adiantum-plain64",
+ .mode_num = BLK_ENCRYPTION_MODE_ADIANTUM,
+ .key_size = 32,
+ },
+};
+
+/**
+ * struct default_key_c - private data of a default-key target
+ * @dev: the underlying device
+ * @start: starting sector of the range of @dev which this target actually maps.
+ * For this purpose a "sector" is 512 bytes.
+ * @cipher_string: the name of the encryption algorithm being used
+ * @iv_offset: starting offset for IVs. IVs are generated as if the target were
+ * preceded by @iv_offset 512-byte sectors.
+ * @sector_size: crypto sector size in bytes (usually 4096)
+ * @sector_bits: log2(sector_size)
+ * @key: the encryption key to use
+ */
struct default_key_c {
struct dm_dev *dev;
sector_t start;
- struct blk_encryption_key key;
+ const char *cipher_string;
+ u64 iv_offset;
+ unsigned int sector_size;
+ unsigned int sector_bits;
+ struct blk_crypto_key key;
+ bool is_hw_wrapped;
};
+static const struct dm_default_key_cipher *
+lookup_cipher(const char *cipher_string)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(dm_default_key_ciphers); i++) {
+ if (strcmp(cipher_string, dm_default_key_ciphers[i].name) == 0)
+ return &dm_default_key_ciphers[i];
+ }
+ return NULL;
+}
+
static void default_key_dtr(struct dm_target *ti)
{
struct default_key_c *dkc = ti->private;
+ int err;
- if (dkc->dev)
+ if (dkc->dev) {
+ err = blk_crypto_evict_key(dkc->dev->bdev->bd_queue, &dkc->key);
+ if (err && err != -ENOKEY)
+ DMWARN("Failed to evict crypto key: %d", err);
dm_put_device(ti, dkc->dev);
+ }
+ kzfree(dkc->cipher_string);
kzfree(dkc);
}
+static int default_key_ctr_optional(struct dm_target *ti,
+ unsigned int argc, char **argv)
+{
+ struct default_key_c *dkc = ti->private;
+ struct dm_arg_set as;
+ static const struct dm_arg _args[] = {
+ {0, 4, "Invalid number of feature args"},
+ };
+ unsigned int opt_params;
+ const char *opt_string;
+ bool iv_large_sectors = false;
+ char dummy;
+ int err;
+
+ as.argc = argc;
+ as.argv = argv;
+
+ err = dm_read_arg_group(_args, &as, &opt_params, &ti->error);
+ if (err)
+ return err;
+
+ while (opt_params--) {
+ opt_string = dm_shift_arg(&as);
+ if (!opt_string) {
+ ti->error = "Not enough feature arguments";
+ return -EINVAL;
+ }
+ if (!strcmp(opt_string, "allow_discards")) {
+ ti->num_discard_bios = 1;
+ } else if (sscanf(opt_string, "sector_size:%u%c",
+ &dkc->sector_size, &dummy) == 1) {
+ if (dkc->sector_size < SECTOR_SIZE ||
+ dkc->sector_size > 4096 ||
+ !is_power_of_2(dkc->sector_size)) {
+ ti->error = "Invalid sector_size";
+ return -EINVAL;
+ }
+ } else if (!strcmp(opt_string, "iv_large_sectors")) {
+ iv_large_sectors = true;
+ } else if (!strcmp(opt_string, "wrappedkey_v0")) {
+ dkc->is_hw_wrapped = true;
+ } else {
+ ti->error = "Invalid feature arguments";
+ return -EINVAL;
+ }
+ }
+
+ /* dm-default-key doesn't implement iv_large_sectors=false. */
+ if (dkc->sector_size != SECTOR_SIZE && !iv_large_sectors) {
+ ti->error = "iv_large_sectors must be specified";
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+void default_key_adjust_sector_size_and_iv(char **argv, struct dm_target *ti,
+ struct default_key_c **dkc, u8 *raw,
+ u32 size)
+{
+ struct dm_dev *dev;
+ int i;
+ union {
+ u8 bytes[BLK_CRYPTO_MAX_WRAPPED_KEY_SIZE];
+ u32 words[BLK_CRYPTO_MAX_WRAPPED_KEY_SIZE / sizeof(u32)];
+ } key_new;
+
+ dev = (*dkc)->dev;
+
+ if (!strcmp(argv[0], "AES-256-XTS")) {
+ memcpy(key_new.bytes, raw, size);
+
+ for (i = 0; i < ARRAY_SIZE(key_new.words); i++)
+ __cpu_to_be32s(&key_new.words[i]);
+
+ memcpy(raw, key_new.bytes, size);
+
+ if (ti->len & (((*dkc)->sector_size >> SECTOR_SHIFT) - 1))
+ (*dkc)->sector_size = SECTOR_SIZE;
+
+ if (dev->bdev->bd_part)
+ (*dkc)->iv_offset += dev->bdev->bd_part->start_sect;
+ }
+}
+
/*
- * Construct a default-key mapping: <mode> <key> <dev_path> <start>
+ * Construct a default-key mapping:
+ * <cipher> <key> <iv_offset> <dev_path> <start>
+ *
+ * This syntax matches dm-crypt's, but lots of unneeded functionality has been
+ * removed. Also, dm-default-key requires that the "iv_large_sectors" option be
+ * given whenever a non-default sector size is used.
*/
static int default_key_ctr(struct dm_target *ti, unsigned int argc, char **argv)
{
struct default_key_c *dkc;
- size_t key_size;
- unsigned long long tmp;
+ const struct dm_default_key_cipher *cipher;
+ u8 raw_key[DM_DEFAULT_KEY_MAX_WRAPPED_KEY_SIZE];
+ unsigned int raw_key_size;
+ unsigned long long tmpll;
char dummy;
int err;
- if (argc != 4) {
- ti->error = "Invalid argument count";
+ if (argc < 5) {
+ ti->error = "Not enough arguments";
return -EINVAL;
}
@@ -55,86 +192,147 @@ static int default_key_ctr(struct dm_target *ti, unsigned int argc, char **argv)
}
ti->private = dkc;
- if (strcmp(argv[0], "AES-256-XTS") != 0) {
- ti->error = "Unsupported encryption mode";
+ /* <cipher> */
+ dkc->cipher_string = kstrdup(argv[0], GFP_KERNEL);
+ if (!dkc->cipher_string) {
+ ti->error = "Out of memory";
+ err = -ENOMEM;
+ goto bad;
+ }
+ cipher = lookup_cipher(dkc->cipher_string);
+ if (!cipher) {
+ ti->error = "Unsupported cipher";
err = -EINVAL;
goto bad;
}
- key_size = strlen(argv[1]);
- if (key_size != 2 * BLK_ENCRYPTION_KEY_SIZE_AES_256_XTS) {
- ti->error = "Unsupported key size";
+ /* <key> */
+ raw_key_size = strlen(argv[1]);
+ if (raw_key_size > 2 * DM_DEFAULT_KEY_MAX_WRAPPED_KEY_SIZE ||
+ raw_key_size % 2) {
+ ti->error = "Invalid keysize";
err = -EINVAL;
goto bad;
}
- key_size /= 2;
-
- if (hex2bin(dkc->key.raw, argv[1], key_size) != 0) {
+ raw_key_size /= 2;
+ if (hex2bin(raw_key, argv[1], raw_key_size) != 0) {
ti->error = "Malformed key string";
err = -EINVAL;
goto bad;
}
- err = dm_get_device(ti, argv[2], dm_table_get_mode(ti->table),
+ /* <iv_offset> */
+ if (sscanf(argv[2], "%llu%c", &dkc->iv_offset, &dummy) != 1) {
+ ti->error = "Invalid iv_offset sector";
+ err = -EINVAL;
+ goto bad;
+ }
+
+ /* <dev_path> */
+ err = dm_get_device(ti, argv[3], dm_table_get_mode(ti->table),
&dkc->dev);
if (err) {
ti->error = "Device lookup failed";
goto bad;
}
- if (sscanf(argv[3], "%llu%c", &tmp, &dummy) != 1) {
+ /* <start> */
+ if (sscanf(argv[4], "%llu%c", &tmpll, &dummy) != 1 ||
+ tmpll != (sector_t)tmpll) {
ti->error = "Invalid start sector";
err = -EINVAL;
goto bad;
}
- dkc->start = tmp;
+ dkc->start = tmpll;
- if (!blk_queue_inlinecrypt(bdev_get_queue(dkc->dev->bdev))) {
- ti->error = "Device does not support inline encryption";
+ /* optional arguments */
+ dkc->sector_size = SECTOR_SIZE;
+ if (argc > 5) {
+ err = default_key_ctr_optional(ti, argc - 5, &argv[5]);
+ if (err)
+ goto bad;
+ }
+
+ default_key_adjust_sector_size_and_iv(argv, ti, &dkc, raw_key,
+ raw_key_size);
+
+ dkc->sector_bits = ilog2(dkc->sector_size);
+ if (ti->len & ((dkc->sector_size >> SECTOR_SHIFT) - 1)) {
+ ti->error = "Device size is not a multiple of sector_size";
err = -EINVAL;
goto bad;
}
- /* Pass flush requests through to the underlying device. */
+ err = blk_crypto_init_key(&dkc->key, raw_key, cipher->key_size,
+ dkc->is_hw_wrapped, cipher->mode_num,
+ dkc->sector_size);
+ if (err) {
+ ti->error = "Error initializing blk-crypto key";
+ goto bad;
+ }
+
+ err = blk_crypto_start_using_mode(cipher->mode_num, dkc->sector_size,
+ dkc->dev->bdev->bd_queue);
+ if (err) {
+ ti->error = "Error starting to use blk-crypto";
+ goto bad;
+ }
+
ti->num_flush_bios = 1;
- /*
- * We pass discard requests through to the underlying device, although
- * the discarded blocks will be zeroed, which leaks information about
- * unused blocks. It's also impossible for dm-default-key to know not
- * to decrypt discarded blocks, so they will not be read back as zeroes
- * and we must set discard_zeroes_data_unsupported.
- */
- ti->num_discard_bios = 1;
+ ti->may_passthrough_inline_crypto = true;
- /*
- * It's unclear whether WRITE_SAME would work with inline encryption; it
- * would depend on whether the hardware duplicates the data before or
- * after encryption. But since the internal storage in some devices
- * (MSM8998-based) doesn't claim to support WRITE_SAME anyway, we don't
- * currently have a way to test it. Leave it disabled it for now.
- */
- /*ti->num_write_same_bios = 1;*/
-
- return 0;
+ err = 0;
+ goto out;
bad:
default_key_dtr(ti);
+out:
+ memzero_explicit(raw_key, sizeof(raw_key));
return err;
}
static int default_key_map(struct dm_target *ti, struct bio *bio)
{
const struct default_key_c *dkc = ti->private;
+ sector_t sector_in_target;
+ u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE] = { 0 };
bio_set_dev(bio, dkc->dev->bdev);
- if (bio_sectors(bio)) {
- bio->bi_iter.bi_sector = dkc->start +
- dm_target_offset(ti, bio->bi_iter.bi_sector);
- }
- if (!bio->bi_crypt_key && !bio->bi_crypt_skip)
- bio->bi_crypt_key = &dkc->key;
+ /*
+ * If the bio is a device-level request which doesn't target a specific
+ * sector, there's nothing more to do.
+ */
+ if (bio_sectors(bio) == 0)
+ return DM_MAPIO_REMAPPED;
+
+ /* Map the bio's sector to the underlying device. (512-byte sectors) */
+ sector_in_target = dm_target_offset(ti, bio->bi_iter.bi_sector);
+ bio->bi_iter.bi_sector = dkc->start + sector_in_target;
+
+ /*
+ * If the bio should skip dm-default-key (i.e. if it's for an encrypted
+ * file's contents), or if it doesn't have any data (e.g. if it's a
+ * DISCARD request), there's nothing more to do.
+ */
+ if (bio_should_skip_dm_default_key(bio) || !bio_has_data(bio))
+ return DM_MAPIO_REMAPPED;
+
+ /*
+ * Else, dm-default-key needs to set this bio's encryption context.
+ * It must not already have one.
+ */
+ if (WARN_ON_ONCE(bio_has_crypt_ctx(bio)))
+ return DM_MAPIO_KILL;
+
+ /* Calculate the DUN and enforce data-unit (crypto sector) alignment. */
+ dun[0] = dkc->iv_offset + sector_in_target; /* 512-byte sectors */
+ if (dun[0] & ((dkc->sector_size >> SECTOR_SHIFT) - 1))
+ return DM_MAPIO_KILL;
+ dun[0] >>= dkc->sector_bits - SECTOR_SHIFT; /* crypto sectors */
+
+ bio_crypt_set_ctx(bio, &dkc->key, dun, GFP_NOIO);
return DM_MAPIO_REMAPPED;
}
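An illustrative data-unit-number calculation for the mapping above (numbers invented):

/*
 * With sector_size = 4096 (sector_bits = 12) and iv_offset = 0, a bio
 * remapped to 512-byte sector 80 of the target gives dun[0] = 80,
 * which is a multiple of 8 (= 4096 >> 9), so the shift by (12 - 9)
 * yields a crypto-sector DUN of 10. An unaligned sector such as 81
 * would be rejected with DM_MAPIO_KILL.
 */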
@@ -145,6 +343,7 @@ static void default_key_status(struct dm_target *ti, status_type_t type,
{
const struct default_key_c *dkc = ti->private;
unsigned int sz = 0;
+ int num_feature_args = 0;
switch (type) {
case STATUSTYPE_INFO:
@@ -152,16 +351,26 @@ static void default_key_status(struct dm_target *ti, status_type_t type,
break;
case STATUSTYPE_TABLE:
+ /* Omit the key for now. */
+ DMEMIT("%s - %llu %s %llu", dkc->cipher_string, dkc->iv_offset,
+ dkc->dev->name, (unsigned long long)dkc->start);
- /* encryption mode */
- DMEMIT("AES-256-XTS");
-
- /* reserved for key; dm-crypt shows it, but we don't for now */
- DMEMIT(" -");
-
- /* name of underlying device, and the start sector in it */
- DMEMIT(" %s %llu", dkc->dev->name,
- (unsigned long long)dkc->start);
+ num_feature_args += !!ti->num_discard_bios;
+ if (dkc->sector_size != SECTOR_SIZE)
+ num_feature_args += 2;
+ if (dkc->is_hw_wrapped)
+ num_feature_args += 1;
+ if (num_feature_args != 0) {
+ DMEMIT(" %d", num_feature_args);
+ if (ti->num_discard_bios)
+ DMEMIT(" allow_discards");
+ if (dkc->sector_size != SECTOR_SIZE) {
+ DMEMIT(" sector_size:%u", dkc->sector_size);
+ DMEMIT(" iv_large_sectors");
+ }
+ if (dkc->is_hw_wrapped)
+ DMEMIT(" wrappedkey_v0");
+ }
break;
}
}
@@ -169,15 +378,13 @@ static void default_key_status(struct dm_target *ti, status_type_t type,
static int default_key_prepare_ioctl(struct dm_target *ti,
struct block_device **bdev)
{
- struct default_key_c *dkc = ti->private;
- struct dm_dev *dev = dkc->dev;
+ const struct default_key_c *dkc = ti->private;
+ const struct dm_dev *dev = dkc->dev;
*bdev = dev->bdev;
- /*
- * Only pass ioctls through if the device sizes match exactly.
- */
- if (dkc->start ||
+ /* Only pass ioctls through if the device sizes match exactly. */
+ if (dkc->start != 0 ||
ti->len != i_size_read(dev->bdev->bd_inode) >> SECTOR_SHIFT)
return 1;
return 0;
@@ -187,21 +394,35 @@ static int default_key_iterate_devices(struct dm_target *ti,
iterate_devices_callout_fn fn,
void *data)
{
- struct default_key_c *dkc = ti->private;
+ const struct default_key_c *dkc = ti->private;
return fn(ti, dkc->dev, dkc->start, ti->len, data);
}
+static void default_key_io_hints(struct dm_target *ti,
+ struct queue_limits *limits)
+{
+ const struct default_key_c *dkc = ti->private;
+ const unsigned int sector_size = dkc->sector_size;
+
+ limits->logical_block_size =
+ max_t(unsigned short, limits->logical_block_size, sector_size);
+ limits->physical_block_size =
+ max_t(unsigned int, limits->physical_block_size, sector_size);
+ limits->io_min = max_t(unsigned int, limits->io_min, sector_size);
+}
+
static struct target_type default_key_target = {
- .name = "default-key",
- .version = {1, 0, 0},
- .module = THIS_MODULE,
- .ctr = default_key_ctr,
- .dtr = default_key_dtr,
- .map = default_key_map,
- .status = default_key_status,
- .prepare_ioctl = default_key_prepare_ioctl,
- .iterate_devices = default_key_iterate_devices,
+ .name = "default-key",
+ .version = {2, 1, 0},
+ .module = THIS_MODULE,
+ .ctr = default_key_ctr,
+ .dtr = default_key_dtr,
+ .map = default_key_map,
+ .status = default_key_status,
+ .prepare_ioctl = default_key_prepare_ioctl,
+ .iterate_devices = default_key_iterate_devices,
+ .io_hints = default_key_io_hints,
};
static int __init dm_default_key_init(void)
@@ -221,4 +442,4 @@ MODULE_AUTHOR("Paul Lawrence <paullawrence@google.com>");
MODULE_AUTHOR("Paul Crowley <paulcrowley@google.com>");
MODULE_AUTHOR("Eric Biggers <ebiggers@google.com>");
MODULE_DESCRIPTION(DM_NAME " target for encrypting filesystem metadata");
-MODULE_LICENSE("GPL v2");
+MODULE_LICENSE("GPL");
diff --git a/drivers/md/dm-linear.c b/drivers/md/dm-linear.c
index f0b088a..6cc231f 100644
--- a/drivers/md/dm-linear.c
+++ b/drivers/md/dm-linear.c
@@ -62,6 +62,7 @@ int dm_linear_ctr(struct dm_target *ti, unsigned int argc, char **argv)
ti->num_secure_erase_bios = 1;
ti->num_write_same_bios = 1;
ti->num_write_zeroes_bios = 1;
+ ti->may_passthrough_inline_crypto = true;
ti->private = lc;
return 0;
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 96343c7..cc2fbb0 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -22,6 +22,8 @@
#include <linux/blk-mq.h>
#include <linux/mount.h>
#include <linux/dax.h>
+#include <linux/bio.h>
+#include <linux/keyslot-manager.h>
#define DM_MSG_PREFIX "table"
@@ -1638,6 +1640,54 @@ static void dm_table_verify_integrity(struct dm_table *t)
}
}
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+static int device_intersect_crypto_modes(struct dm_target *ti,
+ struct dm_dev *dev, sector_t start,
+ sector_t len, void *data)
+{
+ struct keyslot_manager *parent = data;
+ struct keyslot_manager *child = bdev_get_queue(dev->bdev)->ksm;
+
+ keyslot_manager_intersect_modes(parent, child);
+ return 0;
+}
+
+/*
+ * Update the inline crypto modes supported by 'q->ksm' to be the intersection
+ * of the modes supported by all targets in the table.
+ *
+ * For any mode to be supported at all, all targets must have explicitly
+ * declared that they can pass through inline crypto support. For a particular
+ * mode to be supported, all underlying devices must also support it.
+ *
+ * Assume that 'q->ksm' initially declares all modes to be supported.
+ */
+static void dm_calculate_supported_crypto_modes(struct dm_table *t,
+ struct request_queue *q)
+{
+ struct dm_target *ti;
+ unsigned int i;
+
+ for (i = 0; i < dm_table_get_num_targets(t); i++) {
+ ti = dm_table_get_target(t, i);
+
+ if (!ti->may_passthrough_inline_crypto) {
+ keyslot_manager_intersect_modes(q->ksm, NULL);
+ return;
+ }
+ if (!ti->type->iterate_devices)
+ continue;
+ ti->type->iterate_devices(ti, device_intersect_crypto_modes,
+ q->ksm);
+ }
+}
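One illustration of the intersection rule described above (the target mix is invented):

/*
 * A table stacking dm-linear (which sets may_passthrough_inline_crypto)
 * over a queue whose KSM supports AES-256-XTS, together with a dm-crypt
 * target (which does not set the flag), hits the early return and
 * leaves q->ksm advertising no inline crypto modes at all.
 */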
+#else /* CONFIG_BLK_INLINE_ENCRYPTION */
+static inline void dm_calculate_supported_crypto_modes(struct dm_table *t,
+ struct request_queue *q)
+{
+}
+#endif /* !CONFIG_BLK_INLINE_ENCRYPTION */
+
static int device_flush_capable(struct dm_target *ti, struct dm_dev *dev,
sector_t start, sector_t len, void *data)
{
@@ -1730,16 +1780,6 @@ static int queue_supports_sg_merge(struct dm_target *ti, struct dm_dev *dev,
return q && !test_bit(QUEUE_FLAG_NO_SG_MERGE, &q->queue_flags);
}
-static int queue_supports_inline_encryption(struct dm_target *ti,
- struct dm_dev *dev,
- sector_t start, sector_t len,
- void *data)
-{
- struct request_queue *q = bdev_get_queue(dev->bdev);
-
- return q && blk_queue_inlinecrypt(q);
-}
-
static bool dm_table_all_devices_attribute(struct dm_table *t,
iterate_devices_callout_fn func)
{
@@ -1971,13 +2011,10 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
else
blk_queue_flag_set(QUEUE_FLAG_NO_SG_MERGE, q);
- if (dm_table_all_devices_attribute(t, queue_supports_inline_encryption))
- queue_flag_set_unlocked(QUEUE_FLAG_INLINECRYPT, q);
- else
- queue_flag_clear_unlocked(QUEUE_FLAG_INLINECRYPT, q);
-
dm_table_verify_integrity(t);
+ dm_calculate_supported_crypto_modes(t, q);
+
/*
* Some devices don't use blk_integrity but still want stable pages
* because they do their own checksumming.
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index c9860e3..5df0480 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -25,6 +25,8 @@
#include <linux/wait.h>
#include <linux/pr.h>
#include <linux/refcount.h>
+#include <linux/blk-crypto.h>
+#include <linux/keyslot-manager.h>
#define DM_MSG_PREFIX "core"
@@ -1315,9 +1317,10 @@ static int clone_bio(struct dm_target_io *tio, struct bio *bio,
__bio_clone_fast(clone, bio);
+ bio_crypt_clone(clone, bio, GFP_NOIO);
+
if (unlikely(bio_integrity(bio) != NULL)) {
int r;
-
if (unlikely(!dm_target_has_integrity(tio->ti->type) &&
!dm_target_passes_integrity(tio->ti->type))) {
DMWARN("%s: the target %s doesn't support integrity data.",
@@ -1822,6 +1825,8 @@ static void dm_init_normal_md_queue(struct mapped_device *md)
md->queue->backing_dev_info->congested_fn = dm_any_congested;
}
+static void dm_destroy_inline_encryption(struct request_queue *q);
+
static void cleanup_mapped_device(struct mapped_device *md)
{
if (md->wq)
@@ -1845,8 +1850,10 @@ static void cleanup_mapped_device(struct mapped_device *md)
put_disk(md->disk);
}
- if (md->queue)
+ if (md->queue) {
+ dm_destroy_inline_encryption(md->queue);
blk_cleanup_queue(md->queue);
+ }
cleanup_srcu_struct(&md->io_barrier);
@@ -2214,6 +2221,160 @@ struct queue_limits *dm_get_queue_limits(struct mapped_device *md)
}
EXPORT_SYMBOL_GPL(dm_get_queue_limits);
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+struct dm_keyslot_evict_args {
+ const struct blk_crypto_key *key;
+ int err;
+};
+
+static int dm_keyslot_evict_callback(struct dm_target *ti, struct dm_dev *dev,
+ sector_t start, sector_t len, void *data)
+{
+ struct dm_keyslot_evict_args *args = data;
+ int err;
+
+ err = blk_crypto_evict_key(dev->bdev->bd_queue, args->key);
+ if (!args->err)
+ args->err = err;
+ /* Always try to evict the key from all devices. */
+ return 0;
+}
+
+/*
+ * When an inline encryption key is evicted from a device-mapper device, evict
+ * it from all the underlying devices.
+ */
+static int dm_keyslot_evict(struct keyslot_manager *ksm,
+ const struct blk_crypto_key *key, unsigned int slot)
+{
+ struct mapped_device *md = keyslot_manager_private(ksm);
+ struct dm_keyslot_evict_args args = { key };
+ struct dm_table *t;
+ int srcu_idx;
+ int i;
+ struct dm_target *ti;
+
+ t = dm_get_live_table(md, &srcu_idx);
+ if (!t)
+ return 0;
+ for (i = 0; i < dm_table_get_num_targets(t); i++) {
+ ti = dm_table_get_target(t, i);
+ if (!ti->type->iterate_devices)
+ continue;
+ ti->type->iterate_devices(ti, dm_keyslot_evict_callback, &args);
+ }
+ dm_put_live_table(md, srcu_idx);
+ return args.err;
+}
+
+struct dm_derive_raw_secret_args {
+ const u8 *wrapped_key;
+ unsigned int wrapped_key_size;
+ u8 *secret;
+ unsigned int secret_size;
+ int err;
+};
+
+static int dm_derive_raw_secret_callback(struct dm_target *ti,
+ struct dm_dev *dev, sector_t start,
+ sector_t len, void *data)
+{
+ struct dm_derive_raw_secret_args *args = data;
+ struct request_queue *q = dev->bdev->bd_queue;
+
+ if (!args->err)
+ return 0;
+
+ if (!q->ksm) {
+ args->err = -EOPNOTSUPP;
+ return 0;
+ }
+
+ args->err = keyslot_manager_derive_raw_secret(q->ksm, args->wrapped_key,
+ args->wrapped_key_size,
+ args->secret,
+ args->secret_size);
+ /* Try another device in case this fails. */
+ return 0;
+}
+
+/*
+ * Retrieve the raw_secret from the underlying device. Given that
+ * only one raw_secret can exist for a particular wrapped key,
+ * retrieve it only from the first device that supports derive_raw_secret()
+ */
+static int dm_derive_raw_secret(struct keyslot_manager *ksm,
+ const u8 *wrapped_key,
+ unsigned int wrapped_key_size,
+ u8 *secret, unsigned int secret_size)
+{
+ struct mapped_device *md = keyslot_manager_private(ksm);
+ struct dm_derive_raw_secret_args args = {
+ .wrapped_key = wrapped_key,
+ .wrapped_key_size = wrapped_key_size,
+ .secret = secret,
+ .secret_size = secret_size,
+ .err = -EOPNOTSUPP,
+ };
+ struct dm_table *t;
+ int srcu_idx;
+ int i;
+ struct dm_target *ti;
+
+ t = dm_get_live_table(md, &srcu_idx);
+ if (!t)
+ return -EOPNOTSUPP;
+ for (i = 0; i < dm_table_get_num_targets(t); i++) {
+ ti = dm_table_get_target(t, i);
+ if (!ti->type->iterate_devices)
+ continue;
+ ti->type->iterate_devices(ti, dm_derive_raw_secret_callback,
+ &args);
+ if (!args.err)
+ break;
+ }
+ dm_put_live_table(md, srcu_idx);
+ return args.err;
+}
+
+static struct keyslot_mgmt_ll_ops dm_ksm_ll_ops = {
+ .keyslot_evict = dm_keyslot_evict,
+ .derive_raw_secret = dm_derive_raw_secret,
+};
+
+static int dm_init_inline_encryption(struct mapped_device *md)
+{
+ unsigned int mode_masks[BLK_ENCRYPTION_MODE_MAX];
+
+ /*
+ * Start out with all crypto mode support bits set. Any unsupported
+ * bits will be cleared later when calculating the device restrictions.
+ */
+ memset(mode_masks, 0xFF, sizeof(mode_masks));
+
+ md->queue->ksm = keyslot_manager_create_passthrough(&dm_ksm_ll_ops,
+ mode_masks, md);
+ if (!md->queue->ksm)
+ return -ENOMEM;
+ return 0;
+}
+
+static void dm_destroy_inline_encryption(struct request_queue *q)
+{
+ keyslot_manager_destroy(q->ksm);
+ q->ksm = NULL;
+}
+#else /* CONFIG_BLK_INLINE_ENCRYPTION */
+static inline int dm_init_inline_encryption(struct mapped_device *md)
+{
+ return 0;
+}
+
+static inline void dm_destroy_inline_encryption(struct request_queue *q)
+{
+}
+#endif /* !CONFIG_BLK_INLINE_ENCRYPTION */
+
/*
* Setup the DM device's queue based on md's type
*/
@@ -2258,6 +2419,13 @@ int dm_setup_md_queue(struct mapped_device *md, struct dm_table *t)
DMERR("Cannot calculate initial queue limits");
return r;
}
+
+ r = dm_init_inline_encryption(md);
+ if (r) {
+ DMERR("Cannot initialize inline encryption");
+ return r;
+ }
+
dm_table_set_restrictions(t, md->queue, &limits);
blk_register_queue(md->disk);
diff --git a/drivers/media/platform/msm/cvp/msm_cvp.c b/drivers/media/platform/msm/cvp/msm_cvp.c
index 36a1375..e87f392c 100644
--- a/drivers/media/platform/msm/cvp/msm_cvp.c
+++ b/drivers/media/platform/msm/cvp/msm_cvp.c
@@ -1329,6 +1329,7 @@ static int msm_cvp_session_process_hfi_fence(
struct cvp_kmd_hfi_packet *in_pkt;
unsigned int signal, offset, buf_num, in_offset, in_buf_num;
struct msm_cvp_inst *s;
+ unsigned int max_buf_num;
struct msm_cvp_fence_thread_data *fence_thread_data;
dprintk(CVP_DBG, "%s: Enter inst = %#x", __func__, inst);
@@ -1374,6 +1375,16 @@ static int msm_cvp_session_process_hfi_fence(
buf_num = in_buf_num;
}
+ max_buf_num = sizeof(struct cvp_kmd_hfi_packet)
+ / sizeof(struct cvp_buf_type);
+
+ if (buf_num > max_buf_num)
+ return -EINVAL;
+
+ if ((offset + buf_num * sizeof(struct cvp_buf_type)) >
+ sizeof(struct cvp_kmd_hfi_packet))
+ return -EINVAL;
+
rc = msm_cvp_map_buf(inst, in_pkt, offset, buf_num);
if (rc)
goto free_and_exit;
diff --git a/drivers/media/platform/msm/cvp/msm_cvp_debug.c b/drivers/media/platform/msm/cvp/msm_cvp_debug.c
index 9d61d2ac..2beee35 100644
--- a/drivers/media/platform/msm/cvp/msm_cvp_debug.c
+++ b/drivers/media/platform/msm/cvp/msm_cvp_debug.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
- * Copyright (c) 2018-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
*/
#define CREATE_TRACE_POINTS
@@ -277,7 +277,8 @@ struct dentry *msm_cvp_debugfs_init_core(struct msm_cvp_core *core,
snprintf(debugfs_name, MAX_DEBUGFS_NAME, "core%d", core->id);
dir = debugfs_create_dir(debugfs_name, parent);
- if (!dir) {
+ if (IS_ERR_OR_NULL(dir)) {
+ dir = NULL;
dprintk(CVP_ERR, "Failed to create debugfs for msm_cvp\n");
goto failed_create_dir;
}
@@ -423,7 +424,8 @@ struct dentry *msm_cvp_debugfs_init_inst(struct msm_cvp_inst *inst,
idata->inst = inst;
dir = debugfs_create_dir(debugfs_name, parent);
- if (!dir) {
+ if (IS_ERR_OR_NULL(dir)) {
+ dir = NULL;
dprintk(CVP_ERR, "Failed to create debugfs for msm_cvp\n");
goto failed_create_dir;
}
diff --git a/drivers/media/platform/msm/cvp/msm_cvp_dsp.c b/drivers/media/platform/msm/cvp/msm_cvp_dsp.c
index 1eab89a..2e80670 100644
--- a/drivers/media/platform/msm/cvp/msm_cvp_dsp.c
+++ b/drivers/media/platform/msm/cvp/msm_cvp_dsp.c
@@ -1,12 +1,13 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
- * Copyright (c) 2018-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
*/
#include <linux/module.h>
#include <linux/rpmsg.h>
#include <linux/of_platform.h>
#include <linux/of_fdt.h>
#include <soc/qcom/secure_buffer.h>
+#include "cvp_core_hfi.h"
#include "msm_cvp_dsp.h"
#define VMID_CDSP_Q6 (30)
@@ -90,25 +91,81 @@ static int cvp_dsp_send_cmd(void *msg, uint32_t len)
return err;
}
+static void __reset_queue_hdr_defaults(struct cvp_hfi_queue_header *q_hdr)
+{
+ q_hdr->qhdr_status = 0x1;
+ q_hdr->qhdr_type = CVP_IFACEQ_DFLT_QHDR;
+ q_hdr->qhdr_q_size = CVP_IFACEQ_QUEUE_SIZE / 4;
+ q_hdr->qhdr_pkt_size = 0;
+ q_hdr->qhdr_rx_wm = 0x1;
+ q_hdr->qhdr_tx_wm = 0x1;
+ q_hdr->qhdr_rx_req = 0x1;
+ q_hdr->qhdr_tx_req = 0x0;
+ q_hdr->qhdr_rx_irq_status = 0x0;
+ q_hdr->qhdr_tx_irq_status = 0x0;
+ q_hdr->qhdr_read_idx = 0x0;
+ q_hdr->qhdr_write_idx = 0x0;
+}
+
void msm_cvp_cdsp_ssr_handler(struct work_struct *work)
{
struct cvp_dsp_apps *me;
uint64_t msg_ptr;
uint32_t msg_ptr_len;
int err;
+ u32 i;
+ struct iris_hfi_device *dev;
+ struct cvp_hfi_queue_table_header *q_tbl_hdr;
+ struct cvp_hfi_queue_header *q_hdr;
+ struct cvp_iface_q_info *iface_q;
+ dprintk(CVP_WARN, "%s: Entering CDSP-SSR handler\n", __func__);
me = container_of(work, struct cvp_dsp_apps, ssr_work);
if (!me) {
dprintk(CVP_ERR, "%s: Invalid params\n", __func__);
return;
}
+ dev = me->device;
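+ /* Reset every DSP interface queue header and rebuild the queue table header. */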
+ for (i = 0; i < CVP_IFACEQ_NUMQ; i++) {
+ iface_q = &dev->dsp_iface_queues[i];
+ iface_q->q_hdr = CVP_IFACEQ_GET_QHDR_START_ADDR(
+ dev->dsp_iface_q_table.align_virtual_addr, i);
+ __reset_queue_hdr_defaults(iface_q->q_hdr);
+ }
+
+ q_tbl_hdr = (struct cvp_hfi_queue_table_header *)
+ dev->dsp_iface_q_table.align_virtual_addr;
+ q_tbl_hdr->qtbl_version = 0;
+ q_tbl_hdr->device_addr = (void *)dev;
+ strlcpy(q_tbl_hdr->name, "msm_v4l2_cvp", sizeof(q_tbl_hdr->name));
+ q_tbl_hdr->qtbl_size = CVP_IFACEQ_TABLE_SIZE;
+ q_tbl_hdr->qtbl_qhdr0_offset =
+ sizeof(struct cvp_hfi_queue_table_header);
+ q_tbl_hdr->qtbl_qhdr_size = sizeof(struct cvp_hfi_queue_header);
+ q_tbl_hdr->qtbl_num_q = CVP_IFACEQ_NUMQ;
+ q_tbl_hdr->qtbl_num_active_q = CVP_IFACEQ_NUMQ;
+
+ iface_q = &dev->dsp_iface_queues[CVP_IFACEQ_CMDQ_IDX];
+ q_hdr = iface_q->q_hdr;
+ q_hdr->qhdr_type |= HFI_Q_ID_HOST_TO_CTRL_CMD_Q;
+
+ iface_q = &dev->dsp_iface_queues[CVP_IFACEQ_MSGQ_IDX];
+ q_hdr = iface_q->q_hdr;
+ q_hdr->qhdr_type |= HFI_Q_ID_CTRL_TO_HOST_MSG_Q;
+
+ iface_q = &dev->dsp_iface_queues[CVP_IFACEQ_DBGQ_IDX];
+ q_hdr = iface_q->q_hdr;
+ q_hdr->qhdr_type |= HFI_Q_ID_CTRL_TO_HOST_DEBUG_Q;
+ q_hdr->qhdr_rx_req = 0;
+
msg_ptr = cmd_msg.msg_ptr;
msg_ptr_len = cmd_msg.msg_ptr_len;
+ dprintk(CVP_WARN, "%s: HFI queue cmd after CDSP-SSR\n", __func__);
err = cvp_dsp_send_cmd_hfi_queue((phys_addr_t *)msg_ptr,
msg_ptr_len,
- (void *)NULL);
+ (struct iris_hfi_device *)(me->device));
if (err) {
dprintk(CVP_ERR,
"%s: Failed to send HFI Queue address. err=%d\n",
diff --git a/drivers/media/platform/msm/npu/npu_common.h b/drivers/media/platform/msm/npu/npu_common.h
index 30afa4a..7618c66 100644
--- a/drivers/media/platform/msm/npu/npu_common.h
+++ b/drivers/media/platform/msm/npu/npu_common.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
- * Copyright (c) 2017-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2017-2020, The Linux Foundation. All rights reserved.
*/
#ifndef _NPU_COMMON_H
@@ -341,5 +341,7 @@ int load_fw(struct npu_device *npu_dev);
int unload_fw(struct npu_device *npu_dev);
int npu_set_bw(struct npu_device *npu_dev, int new_ib, int new_ab);
int npu_process_kevent(struct npu_client *client, struct npu_kevent *kevt);
-
+int npu_notify_cdsprm_cxlimit_activity(struct npu_device *npu_dev, bool enable);
+int npu_bridge_mbox_send_data(struct npu_host_ctx *host_ctx,
+ struct npu_mbox *mbox, void *data);
#endif /* _NPU_COMMON_H */
diff --git a/drivers/media/platform/msm/npu/npu_dev.c b/drivers/media/platform/msm/npu/npu_dev.c
index beb17951..bcfd230 100644
--- a/drivers/media/platform/msm/npu/npu_dev.c
+++ b/drivers/media/platform/msm/npu/npu_dev.c
@@ -17,6 +17,7 @@
#include <linux/regulator/consumer.h>
#include <linux/thermal.h>
#include <linux/soc/qcom/llcc-qcom.h>
+#include <linux/soc/qcom/cdsprm_cxlimit.h>
#include <soc/qcom/devfreq_devbw.h>
#include "npu_common.h"
@@ -111,7 +112,8 @@ static int npu_pm_suspend(struct device *dev);
static int npu_pm_resume(struct device *dev);
static int __init npu_init(void);
static void __exit npu_exit(void);
-
+static uint32_t npu_notify_cdsprm_cxlimit_corner(struct npu_device *npu_dev,
+ uint32_t pwr_lvl);
/* -------------------------------------------------------------------------
* File Scope Variables
* -------------------------------------------------------------------------
@@ -387,6 +389,168 @@ static ssize_t boot_store(struct device *dev,
* Power Related
* -------------------------------------------------------------------------
*/
+static enum npu_power_level cdsprm_corner_to_npu_power_level(
+ enum cdsprm_npu_corner corner)
+{
+ enum npu_power_level pwr_lvl = NPU_PWRLEVEL_TURBO_L1;
+
+ switch (corner) {
+ case CDSPRM_NPU_CLK_OFF:
+ pwr_lvl = NPU_PWRLEVEL_OFF;
+ break;
+ case CDSPRM_NPU_MIN_SVS:
+ pwr_lvl = NPU_PWRLEVEL_MINSVS;
+ break;
+ case CDSPRM_NPU_LOW_SVS:
+ pwr_lvl = NPU_PWRLEVEL_LOWSVS;
+ break;
+ case CDSPRM_NPU_SVS:
+ pwr_lvl = NPU_PWRLEVEL_SVS;
+ break;
+ case CDSPRM_NPU_SVS_L1:
+ pwr_lvl = NPU_PWRLEVEL_SVS_L1;
+ break;
+ case CDSPRM_NPU_NOM:
+ pwr_lvl = NPU_PWRLEVEL_NOM;
+ break;
+ case CDSPRM_NPU_NOM_L1:
+ pwr_lvl = NPU_PWRLEVEL_NOM_L1;
+ break;
+ case CDSPRM_NPU_TURBO:
+ pwr_lvl = NPU_PWRLEVEL_TURBO;
+ break;
+ case CDSPRM_NPU_TURBO_L1:
+ default:
+ pwr_lvl = NPU_PWRLEVEL_TURBO_L1;
+ break;
+ }
+
+ return pwr_lvl;
+}
+
+static enum cdsprm_npu_corner npu_power_level_to_cdsprm_corner(
+ enum npu_power_level pwr_lvl)
+{
+ enum cdsprm_npu_corner corner = CDSPRM_NPU_MIN_SVS;
+
+ switch (pwr_lvl) {
+ case NPU_PWRLEVEL_OFF:
+ corner = CDSPRM_NPU_CLK_OFF;
+ break;
+ case NPU_PWRLEVEL_MINSVS:
+ corner = CDSPRM_NPU_MIN_SVS;
+ break;
+ case NPU_PWRLEVEL_LOWSVS:
+ corner = CDSPRM_NPU_LOW_SVS;
+ break;
+ case NPU_PWRLEVEL_SVS:
+ corner = CDSPRM_NPU_SVS;
+ break;
+ case NPU_PWRLEVEL_SVS_L1:
+ corner = CDSPRM_NPU_SVS_L1;
+ break;
+ case NPU_PWRLEVEL_NOM:
+ corner = CDSPRM_NPU_NOM;
+ break;
+ case NPU_PWRLEVEL_NOM_L1:
+ corner = CDSPRM_NPU_NOM_L1;
+ break;
+ case NPU_PWRLEVEL_TURBO:
+ corner = CDSPRM_NPU_TURBO;
+ break;
+ case NPU_PWRLEVEL_TURBO_L1:
+ default:
+ corner = CDSPRM_NPU_TURBO_L1;
+ break;
+ }
+
+ return corner;
+}
+
+static int npu_set_cdsprm_corner_limit(enum cdsprm_npu_corner corner)
+{
+ struct npu_pwrctrl *pwr;
+ enum npu_power_level pwr_lvl;
+
+ if (!g_npu_dev)
+ return 0;
+
+ pwr = &g_npu_dev->pwrctrl;
+ pwr_lvl = cdsprm_corner_to_npu_power_level(corner);
+ pwr->cdsprm_pwrlevel = pwr_lvl;
+ NPU_DBG("power level from cdsp %d\n", pwr_lvl);
+
+ return npu_set_power_level(g_npu_dev, false);
+}
+
+const struct cdsprm_npu_limit_cbs cdsprm_npu_limit_cbs = {
+ .set_corner_limit = npu_set_cdsprm_corner_limit,
+};
+
+int npu_notify_cdsprm_cxlimit_activity(struct npu_device *npu_dev, bool enable)
+{
+ if (!npu_dev->cxlimit_registered)
+ return 0;
+
+ NPU_DBG("notify cxlimit %s activity\n", enable ? "enable" : "disable");
+
+ return cdsprm_cxlimit_npu_activity_notify(enable ? 1 : 0);
+}
+
+static uint32_t npu_notify_cdsprm_cxlimit_corner(
+ struct npu_device *npu_dev, uint32_t pwr_lvl)
+{
+ uint32_t corner, pwr_lvl_to_set;
+
+ if (!npu_dev->cxlimit_registered)
+ return pwr_lvl;
+
+ corner = npu_power_level_to_cdsprm_corner(pwr_lvl);
+ corner = cdsprm_cxlimit_npu_corner_notify(corner);
+ pwr_lvl_to_set = cdsprm_corner_to_npu_power_level(corner);
+ NPU_DBG("Notify cdsprm %d:%d\n", pwr_lvl,
+ pwr_lvl_to_set);
+
+ return pwr_lvl_to_set;
+}
+
+int npu_cdsprm_cxlimit_init(struct npu_device *npu_dev)
+{
+ bool enabled;
+ int ret = 0;
+
+ enabled = of_property_read_bool(npu_dev->pdev->dev.of_node,
+ "qcom,npu-cxlimit-enable");
+ NPU_DBG("qcom,npu-xclimit-enable is %s\n", enabled ? "true" : "false");
+
+ npu_dev->cxlimit_registered = false;
+ if (enabled) {
+ ret = cdsprm_cxlimit_npu_limit_register(&cdsprm_npu_limit_cbs);
+ if (ret) {
+ NPU_ERR("register cxlimit npu limit failed\n");
+ } else {
+ NPU_DBG("register cxlimit npu limit succeeds\n");
+ npu_dev->cxlimit_registered = true;
+ }
+ }
+
+ return ret;
+}
+
+int npu_cdsprm_cxlimit_deinit(struct npu_device *npu_dev)
+{
+ int ret = 0;
+
+ if (npu_dev->cxlimit_registered) {
+ ret = cdsprm_cxlimit_npu_limit_deregister();
+ if (ret)
+ NPU_ERR("deregister cxlimit npu limit failed\n");
+ npu_dev->cxlimit_registered = false;
+ }
+
+ return ret;
+}
+
int npu_enable_core_power(struct npu_device *npu_dev)
{
struct npu_pwrctrl *pwr = &npu_dev->pwrctrl;
@@ -530,6 +694,11 @@ int npu_set_power_level(struct npu_device *npu_dev, bool notify_cxlimit)
return 0;
}
+ /* notify cxlimit to get allowed power level */
+ if ((pwr_level_to_set > pwr->active_pwrlevel) && notify_cxlimit)
+ pwr_level_to_set = npu_notify_cdsprm_cxlimit_corner(
+ npu_dev, pwr_level_to_cdsprm);
+
pwr_level_to_set = min(pwr_level_to_set,
npu_dev->pwrctrl.cdsprm_pwrlevel);
@@ -596,6 +765,12 @@ int npu_set_power_level(struct npu_device *npu_dev, bool notify_cxlimit)
ret = 0;
}
+ if ((pwr_level_to_cdsprm < pwr->active_pwrlevel) && notify_cxlimit) {
+ npu_notify_cdsprm_cxlimit_corner(npu_dev,
+ pwr_level_to_cdsprm);
+ NPU_DBG("Notify cdsprm(post) %d\n", pwr_level_to_cdsprm);
+ }
+
pwr->active_pwrlevel = pwr_level_to_set;
return ret;
}
@@ -708,6 +883,13 @@ static int npu_enable_clocks(struct npu_device *npu_dev, bool post_pil)
uint32_t pwrlevel_to_set, pwrlevel_idx;
pwrlevel_to_set = pwr->active_pwrlevel;
+ if (!post_pil) {
+ pwrlevel_to_set = npu_notify_cdsprm_cxlimit_corner(
+ npu_dev, pwrlevel_to_set);
+ NPU_DBG("Notify cdsprm %d\n", pwrlevel_to_set);
+ pwr->active_pwrlevel = pwrlevel_to_set;
+ }
+
pwrlevel_idx = npu_power_level_to_index(npu_dev, pwrlevel_to_set);
pwrlevel = &pwr->pwrlevels[pwrlevel_idx];
for (i = 0; i < npu_dev->core_clk_num; i++) {
@@ -775,6 +957,11 @@ static void npu_disable_clocks(struct npu_device *npu_dev, bool post_pil)
int i, rc = 0;
struct npu_clk *core_clks = npu_dev->core_clks;
+ if (!post_pil) {
+ npu_notify_cdsprm_cxlimit_corner(npu_dev, NPU_PWRLEVEL_OFF);
+ NPU_DBG("Notify cdsprm clock off\n");
+ }
+
for (i = npu_dev->core_clk_num - 1; i >= 0 ; i--) {
if (post_pil) {
if (!npu_is_post_clock(core_clks[i].clk_name))
@@ -1355,12 +1542,6 @@ static int npu_set_fw_state(struct npu_client *client, uint32_t enable)
struct npu_host_ctx *host_ctx = &npu_dev->host_ctx;
int rc = 0;
- if (host_ctx->network_num > 0) {
- NPU_ERR("Need to unload network first\n");
- mutex_unlock(&npu_dev->dev_lock);
- return -EINVAL;
- }
-
if (enable) {
NPU_DBG("enable fw\n");
rc = enable_fw(npu_dev);
@@ -1370,9 +1551,6 @@ static int npu_set_fw_state(struct npu_client *client, uint32_t enable)
host_ctx->npu_init_cnt++;
NPU_DBG("npu_init_cnt %d\n",
host_ctx->npu_init_cnt);
- /* set npu to lowest power level */
- if (npu_set_uc_power_level(npu_dev, 1))
- NPU_WARN("Failed to set uc power level\n");
}
} else if (host_ctx->npu_init_cnt > 0) {
NPU_DBG("disable fw\n");
@@ -1469,7 +1647,7 @@ static int npu_get_property(struct npu_client *client,
default:
ret = npu_host_get_fw_property(client->npu_dev, &prop);
if (ret) {
- NPU_ERR("npu_host_set_fw_property failed\n");
+ NPU_ERR("npu_host_get_fw_property failed\n");
return ret;
}
break;
@@ -2021,6 +2199,10 @@ static int npu_ipcc_bridge_mbox_send_data(struct mbox_chan *chan, void *data)
queue_work(host_ctx->wq, &host_ctx->bridge_mbox_work);
spin_unlock_irqrestore(&host_ctx->bridge_mbox_lock, flags);
+ if (host_ctx->app_crashed)
+ npu_bridge_mbox_send_data(host_ctx,
+ ipcc_mbox_chan->npu_mbox, NULL);
+
return 0;
}
@@ -2444,9 +2626,7 @@ static int npu_probe(struct platform_device *pdev)
goto error_res_init;
}
- rc = npu_debugfs_init(npu_dev);
- if (rc)
- goto error_driver_init;
+ npu_debugfs_init(npu_dev);
rc = npu_host_init(npu_dev);
if (rc) {
@@ -2468,10 +2648,15 @@ static int npu_probe(struct platform_device *pdev)
thermal_cdev_update(tcdev);
}
+ rc = npu_cdsprm_cxlimit_init(npu_dev);
+ if (rc)
+ goto error_driver_init;
+
g_npu_dev = npu_dev;
return rc;
error_driver_init:
+ npu_cdsprm_cxlimit_deinit(npu_dev);
if (npu_dev->tcdev)
thermal_cooling_device_unregister(npu_dev->tcdev);
sysfs_remove_group(&npu_dev->device->kobj, &npu_fs_attr_group);
@@ -2496,6 +2681,7 @@ static int npu_remove(struct platform_device *pdev)
npu_dev = platform_get_drvdata(pdev);
npu_host_deinit(npu_dev);
npu_debugfs_deinit(npu_dev);
+ npu_cdsprm_cxlimit_deinit(npu_dev);
if (npu_dev->tcdev)
thermal_cooling_device_unregister(npu_dev->tcdev);
sysfs_remove_group(&npu_dev->device->kobj, &npu_fs_attr_group);
diff --git a/drivers/media/platform/msm/npu/npu_mgr.c b/drivers/media/platform/msm/npu/npu_mgr.c
index 5381e2c..d497a7f 100644
--- a/drivers/media/platform/msm/npu/npu_mgr.c
+++ b/drivers/media/platform/msm/npu/npu_mgr.c
@@ -49,7 +49,7 @@ static void free_network(struct npu_host_ctx *ctx, struct npu_client *client,
int64_t id);
static int network_get(struct npu_network *network);
static int network_put(struct npu_network *network);
-static void app_msg_proc(struct npu_host_ctx *host_ctx, uint32_t *msg);
+static int app_msg_proc(struct npu_host_ctx *host_ctx, uint32_t *msg);
static void log_msg_proc(struct npu_device *npu_dev, uint32_t *msg);
static void host_session_msg_hdlr(struct npu_device *npu_dev);
static void host_session_log_hdlr(struct npu_device *npu_dev);
@@ -85,6 +85,7 @@ static void npu_dequeue_misc_cmd(struct npu_host_ctx *ctx,
struct npu_misc_cmd *cmd);
static struct npu_misc_cmd *npu_find_misc_cmd(struct npu_host_ctx *ctx,
uint32_t trans_id);
+static int npu_get_fw_caps(struct npu_device *npu_dev);
/* -------------------------------------------------------------------------
* Function Definitions - Init / Deinit
@@ -211,6 +212,37 @@ static int load_fw_nolock(struct npu_device *npu_dev, bool enable)
return ret;
}
+static int npu_get_fw_caps(struct npu_device *npu_dev)
+{
+ int ret = 0, i;
+ struct npu_host_ctx *host_ctx = &npu_dev->host_ctx;
+
+ if (host_ctx->fw_caps_valid) {
+ NPU_DBG("cached fw caps available\n");
+ return ret;
+ }
+
+ memset(&host_ctx->fw_caps, 0, sizeof(host_ctx->fw_caps));
+ host_ctx->fw_caps.prop_id = MSM_NPU_PROP_ID_FW_GETCAPS;
+ host_ctx->fw_caps.num_of_params = PROP_PARAM_MAX_SIZE;
+
+ ret = npu_host_get_fw_property(npu_dev, &host_ctx->fw_caps);
+ if (!ret) {
+ NPU_DBG("Get fw caps successfully\n");
+ host_ctx->fw_caps_valid = true;
+
+ for (i = 0; i < host_ctx->fw_caps.num_of_params; i++)
+ NPU_INFO("fw caps %d:%x\n", i,
+ host_ctx->fw_caps.prop_param[i]);
+ } else {
+ /* save the return code */
+ host_ctx->fw_caps_err_code = ret;
+ NPU_ERR("get fw caps failed %d\n", ret);
+ }
+
+ return ret;
+}
+
static void npu_load_fw_work(struct work_struct *work)
{
int ret;
@@ -224,8 +256,12 @@ static void npu_load_fw_work(struct work_struct *work)
ret = load_fw_nolock(npu_dev, false);
mutex_unlock(&host_ctx->lock);
- if (ret)
+ if (ret) {
NPU_ERR("load fw failed %d\n", ret);
+ return;
+ }
+
+ npu_get_fw_caps(npu_dev);
}
int load_fw(struct npu_device *npu_dev)
@@ -265,6 +301,8 @@ int unload_fw(struct npu_device *npu_dev)
subsystem_put_local(host_ctx->subsystem_handle);
host_ctx->fw_state = FW_UNLOADED;
+ host_ctx->fw_caps_valid = false;
+ host_ctx->fw_caps_err_code = 0;
NPU_DBG("fw is unloaded\n");
mutex_unlock(&host_ctx->lock);
@@ -634,6 +672,25 @@ static int npu_notifier_cb(struct notifier_block *this, unsigned long code,
return ret;
}
+static int npu_panic_handler(struct notifier_block *this,
+ unsigned long event, void *ptr)
+{
+ int i;
+ struct npu_host_ctx *host_ctx =
+ container_of(this, struct npu_host_ctx, panic_nb);
+ struct npu_device *npu_dev = host_ctx->npu_dev;
+
+ NPU_INFO("Apps crashed\n");
+
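+ /* Flush any pending bridge mailbox data before marking the apps processor as crashed. */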
+ for (i = 0; i < NPU_MAX_MBOX_NUM; i++)
+ if (npu_dev->mbox[i].send_data_pending)
+ npu_bridge_mbox_send_data(host_ctx,
+ &npu_dev->mbox[i], NULL);
+
+ host_ctx->app_crashed = true;
+ return NOTIFY_DONE;
+}
+
static void npu_update_pwr_work(struct work_struct *work)
{
int ret;
@@ -686,6 +743,14 @@ int npu_host_init(struct npu_device *npu_dev)
goto fail;
}
+ host_ctx->panic_nb.notifier_call = npu_panic_handler;
+ ret = atomic_notifier_chain_register(&panic_notifier_list,
+ &host_ctx->panic_nb);
+ if (ret) {
+ NPU_ERR("register panic notifier failed\n");
+ goto fail;
+ }
+
host_ctx->wq = create_workqueue("npu_general_wq");
host_ctx->wq_pri =
alloc_workqueue("npu_ipc_wq", WQ_HIGHPRI | WQ_UNBOUND, 0);
@@ -736,6 +801,8 @@ int npu_host_init(struct npu_device *npu_dev)
INIT_LIST_HEAD(&host_ctx->misc_cmd_list);
host_ctx->auto_pil_disable = false;
+ host_ctx->fw_caps_valid = false;
+ host_ctx->fw_caps_err_code = 0;
return 0;
@@ -1064,7 +1131,7 @@ static void npu_disable_fw_work(struct work_struct *work)
NPU_DBG("Exit disable fw work\n");
}
-static int npu_bridge_mbox_send_data(struct npu_host_ctx *host_ctx,
+int npu_bridge_mbox_send_data(struct npu_host_ctx *host_ctx,
struct npu_mbox *mbox, void *data)
{
NPU_DBG("Generating IRQ for client_id: %u; signal_id: %u\n",
@@ -1563,7 +1630,7 @@ int npu_process_kevent(struct npu_client *client, struct npu_kevent *kevt)
return ret;
}
-static void app_msg_proc(struct npu_host_ctx *host_ctx, uint32_t *msg)
+static int app_msg_proc(struct npu_host_ctx *host_ctx, uint32_t *msg)
{
uint32_t msg_id;
struct npu_network *network = NULL;
@@ -1571,6 +1638,7 @@ static void app_msg_proc(struct npu_host_ctx *host_ctx, uint32_t *msg)
struct npu_device *npu_dev = host_ctx->npu_dev;
struct npu_network_cmd *network_cmd = NULL;
struct npu_misc_cmd *misc_cmd = NULL;
+ int need_ctx_switch = 0;
msg_id = msg[1];
switch (msg_id) {
@@ -1615,7 +1683,7 @@ static void app_msg_proc(struct npu_host_ctx *host_ctx, uint32_t *msg)
NPU_ERR("queue npu event failed\n");
}
network_put(network);
-
+ need_ctx_switch = 1;
break;
}
case NPU_IPC_MSG_EXECUTE_V2_DONE:
@@ -1675,6 +1743,7 @@ static void app_msg_proc(struct npu_host_ctx *host_ctx, uint32_t *msg)
complete(&network_cmd->cmd_done);
}
network_put(network);
+ need_ctx_switch = 1;
break;
}
case NPU_IPC_MSG_LOAD_DONE:
@@ -1713,6 +1782,7 @@ static void app_msg_proc(struct npu_host_ctx *host_ctx, uint32_t *msg)
complete(&network_cmd->cmd_done);
network_put(network);
+ need_ctx_switch = 1;
break;
}
case NPU_IPC_MSG_UNLOAD_DONE:
@@ -1745,6 +1815,7 @@ static void app_msg_proc(struct npu_host_ctx *host_ctx, uint32_t *msg)
complete(&network_cmd->cmd_done);
network_put(network);
+ need_ctx_switch = 1;
break;
}
case NPU_IPC_MSG_LOOPBACK_DONE:
@@ -1765,6 +1836,7 @@ static void app_msg_proc(struct npu_host_ctx *host_ctx, uint32_t *msg)
misc_cmd->ret_status = lb_rsp_pkt->header.status;
complete_all(&misc_cmd->cmd_done);
+ need_ctx_switch = 1;
break;
}
case NPU_IPC_MSG_SET_PROPERTY_DONE:
@@ -1788,6 +1860,7 @@ static void app_msg_proc(struct npu_host_ctx *host_ctx, uint32_t *msg)
misc_cmd->ret_status = prop_rsp_pkt->header.status;
complete(&misc_cmd->cmd_done);
+ need_ctx_switch = 1;
break;
}
case NPU_IPC_MSG_GET_PROPERTY_DONE:
@@ -1826,6 +1899,7 @@ static void app_msg_proc(struct npu_host_ctx *host_ctx, uint32_t *msg)
}
complete_all(&misc_cmd->cmd_done);
+ need_ctx_switch = 1;
break;
}
case NPU_IPC_MSG_GENERAL_NOTIFY:
@@ -1856,12 +1930,15 @@ static void app_msg_proc(struct npu_host_ctx *host_ctx, uint32_t *msg)
msg_id);
break;
}
+
+ return need_ctx_switch;
}
static void host_session_msg_hdlr(struct npu_device *npu_dev)
{
struct npu_host_ctx *host_ctx = &npu_dev->host_ctx;
+retry:
mutex_lock(&host_ctx->lock);
if (host_ctx->fw_state != FW_ENABLED) {
NPU_WARN("handle npu session msg when FW is disabled\n");
@@ -1871,7 +1948,15 @@ static void host_session_msg_hdlr(struct npu_device *npu_dev)
while (npu_host_ipc_read_msg(npu_dev, IPC_QUEUE_APPS_RSP,
host_ctx->ipc_msg_buf) == 0) {
NPU_DBG("received from msg queue\n");
- app_msg_proc(host_ctx, host_ctx->ipc_msg_buf);
+ if (app_msg_proc(host_ctx, host_ctx->ipc_msg_buf)) {
+ /*
+ * Force a context switch so the user
+ * process gets a chance to run.
+ */
+ mutex_unlock(&host_ctx->lock);
+ usleep_range(500, 501);
+ goto retry;
+ }
}
skip_read_msg:
@@ -2125,7 +2210,13 @@ int32_t npu_host_set_fw_property(struct npu_device *npu_dev,
break;
default:
NPU_ERR("unsupported property %d\n", property->prop_id);
- goto set_prop_exit;
+ goto free_prop_packet;
+ }
+
+ ret = enable_fw(npu_dev);
+ if (ret) {
+ NPU_ERR("failed to enable fw\n");
+ goto free_prop_packet;
}
prop_packet->header.cmd_type = NPU_IPC_CMD_SET_PROPERTY;
@@ -2140,16 +2231,17 @@ int32_t npu_host_set_fw_property(struct npu_device *npu_dev,
for (i = 0; i < num_of_params; i++)
prop_packet->prop_param[i] = property->prop_param[i];
- mutex_lock(&host_ctx->lock);
misc_cmd = npu_alloc_misc_cmd(host_ctx);
if (!misc_cmd) {
NPU_ERR("Can't allocate misc_cmd\n");
ret = -ENOMEM;
- goto set_prop_exit;
+ goto disable_fw;
}
misc_cmd->cmd_type = NPU_IPC_CMD_SET_PROPERTY;
misc_cmd->trans_id = prop_packet->header.trans_id;
+
+ mutex_lock(&host_ctx->lock);
npu_queue_misc_cmd(host_ctx, misc_cmd);
ret = npu_send_misc_cmd(npu_dev, IPC_QUEUE_APPS_EXEC,
@@ -2183,10 +2275,13 @@ int32_t npu_host_set_fw_property(struct npu_device *npu_dev,
free_misc_cmd:
npu_dequeue_misc_cmd(host_ctx, misc_cmd);
- npu_free_misc_cmd(host_ctx, misc_cmd);
-set_prop_exit:
mutex_unlock(&host_ctx->lock);
+ npu_free_misc_cmd(host_ctx, misc_cmd);
+disable_fw:
+ disable_fw(npu_dev);
+free_prop_packet:
kfree(prop_packet);
+
return ret;
}
@@ -2204,6 +2299,15 @@ int32_t npu_host_get_fw_property(struct npu_device *npu_dev,
NPU_ERR("Not supproted fw property id %x\n",
property->prop_id);
return -EINVAL;
+ } else if (property->prop_id == MSM_NPU_PROP_ID_FW_GETCAPS) {
+ if (host_ctx->fw_caps_valid) {
+ NPU_DBG("return cached fw_caps\n");
+ memcpy(property, &host_ctx->fw_caps, sizeof(*property));
+ return 0;
+ } else if (host_ctx->fw_caps_err_code) {
+ NPU_DBG("return cached error code\n");
+ return host_ctx->fw_caps_err_code;
+ }
}
num_of_params = min_t(uint32_t, property->num_of_params,
@@ -2214,6 +2318,12 @@ int32_t npu_host_get_fw_property(struct npu_device *npu_dev,
if (!prop_packet)
return -ENOMEM;
+ ret = enable_fw(npu_dev);
+ if (ret) {
+ NPU_ERR("failed to enable fw\n");
+ goto free_prop_packet;
+ }
+
prop_packet->header.cmd_type = NPU_IPC_CMD_GET_PROPERTY;
prop_packet->header.size = pkt_size;
prop_packet->header.trans_id =
@@ -2226,16 +2336,17 @@ int32_t npu_host_get_fw_property(struct npu_device *npu_dev,
for (i = 0; i < num_of_params; i++)
prop_packet->prop_param[i] = property->prop_param[i];
- mutex_lock(&host_ctx->lock);
misc_cmd = npu_alloc_misc_cmd(host_ctx);
if (!misc_cmd) {
NPU_ERR("Can't allocate misc_cmd\n");
ret = -ENOMEM;
- goto get_prop_exit;
+ goto disable_fw;
}
misc_cmd->cmd_type = NPU_IPC_CMD_GET_PROPERTY;
misc_cmd->trans_id = prop_packet->header.trans_id;
+
+ mutex_lock(&host_ctx->lock);
npu_queue_misc_cmd(host_ctx, misc_cmd);
ret = npu_send_misc_cmd(npu_dev, IPC_QUEUE_APPS_EXEC,
@@ -2264,26 +2375,43 @@ int32_t npu_host_get_fw_property(struct npu_device *npu_dev,
}
ret = misc_cmd->ret_status;
+ prop_from_fw = &misc_cmd->u.prop;
if (!ret) {
/* Return prop data retrieved from fw to user */
- prop_from_fw = &misc_cmd->u.prop;
if (property->prop_id == prop_from_fw->prop_id &&
property->network_hdl == prop_from_fw->network_hdl) {
+ num_of_params = min_t(uint32_t,
+ prop_from_fw->num_of_params,
+ (uint32_t)PROP_PARAM_MAX_SIZE);
property->num_of_params = num_of_params;
for (i = 0; i < num_of_params; i++)
property->prop_param[i] =
prop_from_fw->prop_param[i];
+ } else {
+ NPU_WARN("Not Match: id %x:%x hdl %x:%x\n",
+ property->prop_id, prop_from_fw->prop_id,
+ property->network_hdl,
+ prop_from_fw->network_hdl);
+ property->num_of_params = 0;
}
} else {
NPU_ERR("get fw property failed %d\n", ret);
+ NPU_ERR("prop_id: %x\n", prop_from_fw->prop_id);
+ NPU_ERR("network_hdl: %x\n", prop_from_fw->network_hdl);
+ NPU_ERR("param_num: %x\n", prop_from_fw->num_of_params);
+ for (i = 0; i < prop_from_fw->num_of_params; i++)
+ NPU_ERR("%x\n", prop_from_fw->prop_param[i]);
}
free_misc_cmd:
npu_dequeue_misc_cmd(host_ctx, misc_cmd);
- npu_free_misc_cmd(host_ctx, misc_cmd);
-get_prop_exit:
mutex_unlock(&host_ctx->lock);
+ npu_free_misc_cmd(host_ctx, misc_cmd);
+disable_fw:
+ disable_fw(npu_dev);
+free_prop_packet:
kfree(prop_packet);
+
return ret;
}
@@ -2614,6 +2742,9 @@ int32_t npu_host_exec_network_v2(struct npu_client *client,
return -EINVAL;
}
+ if (atomic_inc_return(&host_ctx->network_execute_cnt) == 1)
+ npu_notify_cdsprm_cxlimit_activity(npu_dev, true);
+
if (!network->is_active) {
NPU_ERR("network is not active\n");
ret = -EINVAL;
@@ -2746,6 +2877,8 @@ int32_t npu_host_exec_network_v2(struct npu_client *client,
exec_ioctl->stats_buf_size = 0;
}
+
+ NPU_DBG("Execute done %x\n", ret);
free_exec_cmd:
npu_dequeue_network_cmd(network, exec_cmd);
npu_free_network_cmd(host_ctx, exec_cmd);
@@ -2764,6 +2897,9 @@ int32_t npu_host_exec_network_v2(struct npu_client *client,
host_error_hdlr(npu_dev, true);
}
+ if (atomic_dec_return(&host_ctx->network_execute_cnt) == 0)
+ npu_notify_cdsprm_cxlimit_activity(npu_dev, false);
+
return ret;
}
diff --git a/drivers/media/platform/msm/npu/npu_mgr.h b/drivers/media/platform/msm/npu/npu_mgr.h
index 397d450..e44fb38 100644
--- a/drivers/media/platform/msm/npu/npu_mgr.h
+++ b/drivers/media/platform/msm/npu/npu_mgr.h
@@ -126,18 +126,24 @@ struct npu_host_ctx {
uint32_t fw_dbg_mode;
uint32_t exec_flags_override;
atomic_t ipc_trans_id;
- atomic_t network_exeute_cnt;
+ atomic_t network_execute_cnt;
uint32_t err_irq_sts;
uint32_t wdg_irq_sts;
bool fw_error;
bool cancel_work;
+ bool app_crashed;
struct notifier_block nb;
+ struct notifier_block panic_nb;
void *notif_hdle;
spinlock_t bridge_mbox_lock;
bool bridge_mbox_pwr_on;
void *ipc_msg_buf;
struct list_head misc_cmd_list;
+
+ struct msm_npu_property fw_caps;
+ bool fw_caps_valid;
+ uint32_t fw_caps_err_code;
};
struct npu_device;
diff --git a/drivers/media/v4l2-core/v4l2-ctrls.c b/drivers/media/v4l2-core/v4l2-ctrls.c
index dddf0f5..fcda5c4 100644
--- a/drivers/media/v4l2-core/v4l2-ctrls.c
+++ b/drivers/media/v4l2-core/v4l2-ctrls.c
@@ -836,6 +836,8 @@ const char *v4l2_ctrl_get_name(u32 id)
case V4L2_CID_MPEG_VIDEO_H264_HIERARCHICAL_CODING_LAYER:return "H264 Number of HC Layers";
case V4L2_CID_MPEG_VIDEO_H264_HIERARCHICAL_CODING_LAYER_QP:
return "H264 Set QP Value for HC Layers";
+ case V4L2_CID_MPEG_VIDEO_H264_CHROMA_QP_INDEX_OFFSET:
+ return "H264 Chroma QP Index Offset";
case V4L2_CID_MPEG_VIDEO_MPEG4_I_FRAME_QP: return "MPEG4 I-Frame QP Value";
case V4L2_CID_MPEG_VIDEO_MPEG4_P_FRAME_QP: return "MPEG4 P-Frame QP Value";
case V4L2_CID_MPEG_VIDEO_MPEG4_B_FRAME_QP: return "MPEG4 B-Frame QP Value";
diff --git a/drivers/mfd/qcom-i2c-pmic.c b/drivers/mfd/qcom-i2c-pmic.c
index 8ea90e8..8c9c249 100644
--- a/drivers/mfd/qcom-i2c-pmic.c
+++ b/drivers/mfd/qcom-i2c-pmic.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
- * Copyright (c) 2016-2017, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2016-2017, 2020, The Linux Foundation. All rights reserved.
*/
#define pr_fmt(fmt) "I2C PMIC: %s: " fmt, __func__
@@ -61,6 +61,7 @@ struct i2c_pmic {
int summary_irq;
bool resume_completed;
bool irq_waiting;
+ bool toggle_stat;
};
static void i2c_pmic_irq_bus_lock(struct irq_data *d)
@@ -473,6 +474,9 @@ static int i2c_pmic_parse_dt(struct i2c_pmic *chip)
of_property_read_string(node, "pinctrl-names", &chip->pinctrl_name);
+ chip->toggle_stat = of_property_read_bool(node,
+ "qcom,enable-toggle-stat");
+
return rc;
}
@@ -513,6 +517,69 @@ static int i2c_pmic_determine_initial_status(struct i2c_pmic *chip)
return 0;
}
+#define INT_TEST_OFFSET 0xE0
+#define INT_TEST_MODE_EN_BIT BIT(7)
+#define INT_TEST_VAL_OFFSET 0xE1
+#define INT_0_BIT BIT(0)
+static int i2c_pmic_toggle_stat(struct i2c_pmic *chip)
+{
+ int rc = 0, i;
+
+ if (!chip->toggle_stat || !chip->num_periphs)
+ return 0;
+
+ rc = regmap_write(chip->regmap,
+ chip->periph[0].addr | INT_EN_SET_OFFSET,
+ INT_0_BIT);
+ if (rc < 0) {
+ pr_err("Couldn't write to int_en_set rc=%d\n", rc);
+ return rc;
+ }
+
+ rc = regmap_write(chip->regmap, chip->periph[0].addr | INT_TEST_OFFSET,
+ INT_TEST_MODE_EN_BIT);
+ if (rc < 0) {
+ pr_err("Couldn't write to int_test rc=%d\n", rc);
+ return rc;
+ }
+
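+ /* Pulse interrupt 0 through the test registers a few times to toggle the STAT line. */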
+ for (i = 0; i < 5; i++) {
+ rc = regmap_write(chip->regmap,
+ chip->periph[0].addr | INT_TEST_VAL_OFFSET,
+ INT_0_BIT);
+ if (rc < 0) {
+ pr_err("Couldn't write to int_test_val rc=%d\n", rc);
+ goto exit;
+ }
+
+ usleep_range(5000, 5500);
+
+ rc = regmap_write(chip->regmap,
+ chip->periph[0].addr | INT_TEST_VAL_OFFSET,
+ 0);
+ if (rc < 0) {
+ pr_err("Couldn't write to int_test_val rc=%d\n", rc);
+ goto exit;
+ }
+
+ rc = regmap_write(chip->regmap,
+ chip->periph[0].addr | INT_LATCHED_CLR_OFFSET,
+ INT_0_BIT);
+ if (rc < 0) {
+ pr_err("Couldn't write to int_latched_clr rc=%d\n", rc);
+ goto exit;
+ }
+
+ usleep_range(5000, 5500);
+ }
+exit:
+ regmap_write(chip->regmap, chip->periph[0].addr | INT_TEST_OFFSET, 0);
+ regmap_write(chip->regmap, chip->periph[0].addr | INT_EN_CLR_OFFSET,
+ INT_0_BIT);
+
+ return rc;
+}
+
static struct regmap_config i2c_pmic_regmap_config = {
.reg_bits = 16,
.val_bits = 8,
@@ -571,6 +638,12 @@ static int i2c_pmic_probe(struct i2c_client *client,
chip->resume_completed = true;
mutex_init(&chip->irq_complete);
+ rc = i2c_pmic_toggle_stat(chip);
+ if (rc < 0) {
+ pr_err("Couldn't toggle stat rc=%d\n", rc);
+ goto cleanup;
+ }
+
rc = devm_request_threaded_irq(&client->dev, client->irq, NULL,
i2c_pmic_irq_handler,
IRQF_ONESHOT | IRQF_SHARED,
diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig
index 96d64a8..1679b66 100644
--- a/drivers/misc/Kconfig
+++ b/drivers/misc/Kconfig
@@ -604,6 +604,15 @@
driver initializes gpios, enables/disables LDOs that are part of
XR standalone reference device.
+config QTI_MAXIM_FAN_CONTROLLER
+ tristate "QTI MAXIM fan controller driver support"
+ help
+ This driver supports the Maxim (MAX31760) fan controller.
+ It exposes I2C access to the control registers for configuring
+ PWM frequency, temperature thresholds and other settings.
+ The driver also initializes the power for the fan controller
+ and exposes a sysfs node to control the fan speed.
+
source "drivers/misc/c2port/Kconfig"
source "drivers/misc/eeprom/Kconfig"
source "drivers/misc/cb710/Kconfig"
diff --git a/drivers/misc/Makefile b/drivers/misc/Makefile
index 8cf0060..b79e632 100644
--- a/drivers/misc/Makefile
+++ b/drivers/misc/Makefile
@@ -72,5 +72,6 @@
obj-$(CONFIG_OKL4_GUEST) += okl4-panic.o
obj-$(CONFIG_OKL4_LINK_SHBUF) += okl4-link-shbuf.o
obj-$(CONFIG_WIGIG_SENSING_SPI) += wigig_sensing.o
+obj-$(CONFIG_QTI_MAXIM_FAN_CONTROLLER) += max31760.o
obj-$(CONFIG_QTI_XR_SMRTVWR_MISC) += qxr-stdalonevwr.o
obj-$(CONFIG_FPR_FPC) += fpr_FingerprintCard/
diff --git a/drivers/misc/max31760.c b/drivers/misc/max31760.c
new file mode 100644
index 0000000..2479583
--- /dev/null
+++ b/drivers/misc/max31760.c
@@ -0,0 +1,374 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+/*
+ * Copyright (c) 2020, The Linux Foundation. All rights reserved.
+ */
+
+#include <linux/device.h>
+#include <linux/i2c.h>
+#include <linux/slab.h>
+#include <linux/platform_device.h>
+#include <linux/input.h>
+#include <linux/types.h>
+#include <linux/module.h>
+#include <linux/fs.h>
+#include <linux/of.h>
+#include <linux/of_graph.h>
+#include <linux/kernel.h>
+#include <linux/of_gpio.h>
+#include <linux/gpio.h>
+#include <linux/delay.h>
+#include <linux/regulator/consumer.h>
+#include <linux/rwlock.h>
+#include <linux/uaccess.h>
+#include <linux/regmap.h>
+
+struct max31760 {
+ struct device *dev;
+ u8 i2c_addr;
+ struct regmap *regmap;
+ u32 fan_pwr_en;
+ u32 fan_pwr_bp;
+ struct i2c_client *i2c_client;
+ int pwm;
+ bool fan_off;
+};
+
+static void turn_gpio(struct max31760 *pdata, bool on)
+{
+ if (on) {
+ gpio_direction_output(pdata->fan_pwr_en, 0);
+ gpio_set_value(pdata->fan_pwr_en, 1);
+ pr_debug("%s gpio:%d set to high\n", __func__,
+ pdata->fan_pwr_en);
+ msleep(20);
+ gpio_direction_output(pdata->fan_pwr_bp, 0);
+ gpio_set_value(pdata->fan_pwr_bp, 1);
+ pr_debug("%s gpio:%d set to high\n", __func__,
+ pdata->fan_pwr_bp);
+ msleep(20);
+ } else {
+ gpio_direction_output(pdata->fan_pwr_en, 1);
+ gpio_set_value(pdata->fan_pwr_en, 0);
+ pr_debug("%s gpio:%d set to low\n", __func__,
+ pdata->fan_pwr_en);
+ msleep(20);
+ gpio_direction_output(pdata->fan_pwr_bp, 1);
+ gpio_set_value(pdata->fan_pwr_bp, 0);
+ pr_debug("%s gpio:%d set to low\n", __func__,
+ pdata->fan_pwr_bp);
+ msleep(20);
+ }
+}
+
+static int max31760_i2c_reg_get(struct max31760 *pdata,
+ u8 reg)
+{
+ int ret;
+ u32 val1;
+
+ pr_debug("%s, reg:%x\n", __func__, reg);
+ ret = regmap_read(pdata->regmap, (unsigned int)reg, &val1);
+ if (ret < 0) {
+ pr_err("%s failed reading reg 0x%02x failure\n", __func__, reg);
+ return ret;
+ }
+
+ pr_debug("%s success reading reg 0x%x=0x%x, val1=%x\n",
+ __func__, reg, val1, val1);
+
+ return 0;
+}
+
+static int max31760_i2c_reg_set(struct max31760 *pdata,
+ u8 reg, u8 val)
+{
+ int ret;
+ int i;
+
+ for (i = 0; i < 10; i++) {
+ ret = regmap_write(pdata->regmap, reg, val);
+ if (ret >= 0)
+ return ret;
+ msleep(20);
+ }
+ if (ret < 0)
+ pr_err("%s loop:%d failed to write reg 0x%02x=0x%02x\n",
+ __func__, i, reg, val);
+ return ret;
+}
+
+static ssize_t fan_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct max31760 *pdata;
+ int ret;
+
+ pdata = dev_get_drvdata(dev);
+ if (!pdata) {
+ pr_err("invalid driver pointer\n");
+ return -ENODEV;
+ }
+
+ if (pdata->fan_off)
+ ret = scnprintf(buf, PAGE_SIZE, "off\n");
+ else
+ ret = scnprintf(buf, PAGE_SIZE, "0x%x\n", pdata->pwm);
+
+ return ret;
+}
+
+static ssize_t fan_store(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ long val;
+ struct max31760 *pdata;
+
+ pdata = dev_get_drvdata(dev);
+ if (!pdata) {
+ pr_err("invalid driver pointer\n");
+ return -ENODEV;
+ }
+
+ kstrtol(buf, 0, &val);
+ pr_debug("%s, count:%d val:%lx, buf:%s\n",
+ __func__, count, val, buf);
+
+ if (val == 0xff) {
+ turn_gpio(pdata, false);
+ pdata->fan_off = true;
+ } else if (val == 0xfe) {
+ pdata->fan_off = false;
+ turn_gpio(pdata, true);
+ max31760_i2c_reg_set(pdata, 0x00, pdata->pwm);
+ } else {
+ max31760_i2c_reg_set(pdata, 0x00, (int)val);
+ pdata->pwm = (int)val;
+ }
+
+ return count;
+}
+
+static DEVICE_ATTR_RW(fan);
+
+static struct attribute *max31760_fs_attrs[] = {
+ &dev_attr_fan.attr,
+ NULL
+};
+
+static struct attribute_group max31760_fs_attr_group = {
+ .attrs = max31760_fs_attrs,
+};
+
+static int max31760_parse_dt(struct device *dev,
+ struct max31760 *pdata)
+{
+ struct device_node *np = dev->of_node;
+ int ret;
+
+ pdata->fan_pwr_en =
+ of_get_named_gpio(np, "qcom,fan-pwr-en", 0);
+ if (!gpio_is_valid(pdata->fan_pwr_en)) {
+ pr_err("%s fan_pwr_en gpio not specified\n", __func__);
+ ret = -EINVAL;
+ } else {
+ ret = gpio_request(pdata->fan_pwr_en, "fan_pwr_en");
+ if (ret) {
+ pr_err("max31760 fan_pwr_en gpio request failed\n");
+ goto error1;
+ }
+ }
+
+ pdata->fan_pwr_bp =
+ of_get_named_gpio(np, "qcom,fan-pwr-bp", 0);
+ if (!gpio_is_valid(pdata->fan_pwr_bp)) {
+ pr_err("%s fan_pwr_bp gpio not specified\n", __func__);
+ ret = -EINVAL;
+ } else {
+ ret = gpio_request(pdata->fan_pwr_bp, "fan_pwr_bp");
+ }
+ if (ret) {
+ pr_err("max31760 fan_pwr_bp gpio request failed\n");
+ goto error2;
+ }
+ turn_gpio(pdata, true);
+
+ return ret;
+
+error2:
+ gpio_free(pdata->fan_pwr_bp);
+error1:
+ gpio_free(pdata->fan_pwr_en);
+ return ret;
+}
+
+static int max31760_fan_pwr_enable_vregs(struct device *dev,
+ struct max31760 *pdata)
+{
+ int ret;
+ struct regulator *reg;
+
+ /* Fan Control LDO L10A */
+ reg = devm_regulator_get(dev, "pm8150_l10");
+ if (!IS_ERR(reg)) {
+ regulator_set_load(reg, 600000);
+ ret = regulator_enable(reg);
+ if (ret < 0) {
+ pr_err("%s pm8150_l10 failed\n", __func__);
+ return -EINVAL;
+ }
+ }
+
+ /* Fan Control LDO S4 */
+ reg = devm_regulator_get(dev, "pm8150_s4");
+ if (!IS_ERR(reg)) {
+ regulator_set_load(reg, 600000);
+ ret = regulator_enable(reg);
+ if (ret < 0) {
+ pr_err("%s pm8150_s4 failed\n", __func__);
+ return -EINVAL;
+ }
+ }
+
+ return ret;
+}
+
+static const struct regmap_config max31760_regmap = {
+ .reg_bits = 8,
+ .val_bits = 8,
+ .max_register = 0xFF,
+};
+
+static int max31760_probe(struct i2c_client *client,
+ const struct i2c_device_id *id)
+{
+ int ret;
+ struct max31760 *pdata;
+
+ if (!client || !client->dev.of_node) {
+ pr_err("%s invalid input\n", __func__);
+ return -EINVAL;
+ }
+
+ if (!i2c_check_functionality(client->adapter, I2C_FUNC_I2C)) {
+ pr_err("%s device doesn't support I2C\n", __func__);
+ return -ENODEV;
+ }
+
+ pdata = devm_kzalloc(&client->dev,
+ sizeof(struct max31760), GFP_KERNEL);
+ if (!pdata)
+ return -ENOMEM;
+
+ pdata->regmap = devm_regmap_init_i2c(client, &max31760_regmap);
+ if (IS_ERR(pdata->regmap)) {
+ ret = PTR_ERR(pdata->regmap);
+ pr_err("%s Failed to allocate regmap: %d\n", __func__, ret);
+ return -EINVAL;
+ }
+
+ ret = max31760_parse_dt(&client->dev, pdata);
+ if (ret) {
+ pr_err("%s failed to parse device tree\n", __func__);
+ return -EINVAL;
+ }
+
+ ret = max31760_fan_pwr_enable_vregs(&client->dev, pdata);
+ if (ret) {
+ pr_err("%s failed to pwr regulators\n", __func__);
+ return -EINVAL;
+ }
+
+ pdata->dev = &client->dev;
+ i2c_set_clientdata(client, pdata);
+
+ pdata->i2c_client = client;
+
+ dev_set_drvdata(&client->dev, pdata);
+
+ ret = sysfs_create_group(&pdata->dev->kobj, &max31760_fs_attr_group);
+ if (ret)
+ pr_err("%s unable to register max31760 sysfs nodes\n");
+
+ /* 00 - 0x01 -- 33 Hz */
+ /* 01 - 0x09 -- 150 Hz */
+ /* 10 - 0x11 -- 1500 Hz */
+ /* 11 - 0x19 -- 25 kHz */
+ pdata->pwm = 0x19;
+ max31760_i2c_reg_set(pdata, 0x00, pdata->pwm);
+ max31760_i2c_reg_set(pdata, 0x01, 0x11);
+ max31760_i2c_reg_set(pdata, 0x02, 0x31);
+ max31760_i2c_reg_set(pdata, 0x03, 0x45);
+ max31760_i2c_reg_set(pdata, 0x04, 0xff);
+ max31760_i2c_reg_set(pdata, 0x50, 0xcf);
+ max31760_i2c_reg_set(pdata, 0x01, 0x11);
+ max31760_i2c_reg_set(pdata, 0x00, pdata->pwm);
+ max31760_i2c_reg_get(pdata, 0x00);
+
+ return ret;
+}
+
+static int max31760_remove(struct i2c_client *client)
+{
+ struct max31760 *pdata = i2c_get_clientdata(client);
+
+ if (!pdata)
+ goto end;
+
+ sysfs_remove_group(&pdata->dev->kobj, &max31760_fs_attr_group);
+ turn_gpio(pdata, false);
+end:
+ return 0;
+}
+
+
+static void max31760_shutdown(struct i2c_client *client)
+{
+}
+
+static int max31760_suspend(struct device *dev, pm_message_t state)
+{
+ struct max31760 *pdata = dev_get_drvdata(dev);
+
+ dev_dbg(dev, "suspend\n");
+ if (pdata)
+ turn_gpio(pdata, false);
+ return 0;
+}
+
+static int max31760_resume(struct device *dev)
+{
+ struct max31760 *pdata = dev_get_drvdata(dev);
+
+ dev_dbg(dev, "resume\n");
+ if (pdata) {
+ turn_gpio(pdata, true);
+ max31760_i2c_reg_set(pdata, 0x00, pdata->pwm);
+ }
+ return 0;
+}
+
+static const struct of_device_id max31760_id_table[] = {
+ { .compatible = "maxim,xrfancontroller",},
+ { },
+};
+static const struct i2c_device_id max31760_i2c_table[] = {
+ { "xrfancontroller", 0 },
+ { },
+};
+
+static struct i2c_driver max31760_i2c_driver = {
+ .probe = max31760_probe,
+ .remove = max31760_remove,
+ .shutdown = max31760_shutdown,
+ .driver = {
+ .name = "maxim xrfancontroller",
+ .of_match_table = max31760_id_table,
+ .suspend = max31760_suspend,
+ .resume = max31760_resume,
+ },
+ .id_table = max31760_i2c_table,
+};
+module_i2c_driver(max31760_i2c_driver);
+MODULE_DEVICE_TABLE(i2c, max31760_i2c_table);
+MODULE_DESCRIPTION("Maxim 31760 Fan Controller");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/misc/qseecom.c b/drivers/misc/qseecom.c
index 6c2080d..80ce22b 100644
--- a/drivers/misc/qseecom.c
+++ b/drivers/misc/qseecom.c
@@ -2252,10 +2252,6 @@ static int __qseecom_process_incomplete_cmd(struct qseecom_dev_handle *data,
goto exit;
}
- ret = qseecom_dmabuf_cache_operations(ptr_svc->dmabuf,
- QSEECOM_CACHE_INVALIDATE);
- if (ret)
- goto exit;
} else {
ret = qseecom_scm_call(SCM_SVC_TZSCHEDULER, 1,
cmd_buf, cmd_len, resp, sizeof(*resp));
@@ -2587,10 +2583,6 @@ static int __qseecom_reentrancy_process_incomplete_cmd(
ret, data->client.app_id);
goto exit;
}
- ret = qseecom_dmabuf_cache_operations(ptr_svc->dmabuf,
- QSEECOM_CACHE_INVALIDATE);
- if (ret)
- goto exit;
} else {
ret = qseecom_scm_call(SCM_SVC_TZSCHEDULER, 1,
cmd_buf, cmd_len, resp, sizeof(*resp));
@@ -3761,14 +3753,6 @@ static int __qseecom_send_cmd(struct qseecom_dev_handle *data,
ret, data->client.app_id);
goto exit;
}
- if (data->client.dmabuf) {
- ret = qseecom_dmabuf_cache_operations(data->client.dmabuf,
- QSEECOM_CACHE_INVALIDATE);
- if (ret) {
- pr_err("cache operation failed %d\n", ret);
- goto exit;
- }
- }
if (qseecom.qsee_reentrancy_support) {
ret = __qseecom_process_reentrancy(&resp, ptr_app, data);
@@ -3791,6 +3775,15 @@ static int __qseecom_send_cmd(struct qseecom_dev_handle *data,
}
}
}
+
+ if (data->client.dmabuf) {
+ ret = qseecom_dmabuf_cache_operations(data->client.dmabuf,
+ QSEECOM_CACHE_INVALIDATE);
+ if (ret) {
+ pr_err("cache operation failed %d\n", ret);
+ goto exit;
+ }
+ }
exit:
return ret;
}
@@ -9298,6 +9291,15 @@ static int qseecom_init_dev(struct platform_device *pdev)
goto exit_del_cdev;
}
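+ /* Allocate dma_parms so the maximum DMA segment size can be set below. */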
+ if (!qseecom.dev->dma_parms) {
+ qseecom.dev->dma_parms =
+ kzalloc(sizeof(*qseecom.dev->dma_parms), GFP_KERNEL);
+ if (!qseecom.dev->dma_parms) {
+ rc = -ENOMEM;
+ goto exit_del_cdev;
+ }
+ }
+ dma_set_max_seg_size(qseecom.dev, DMA_BIT_MASK(32));
return 0;
exit_del_cdev:
@@ -9314,6 +9316,8 @@ static int qseecom_init_dev(struct platform_device *pdev)
static void qseecom_deinit_dev(void)
{
+ kfree(qseecom.dev->dma_parms);
+ qseecom.dev->dma_parms = NULL;
cdev_del(&qseecom.cdev);
device_destroy(qseecom.driver_class, qseecom.qseecom_device_no);
class_destroy(qseecom.driver_class);
diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index b013b84..a3c4862 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -1569,16 +1569,35 @@ static int mmc_blk_cqe_issue_flush(struct mmc_queue *mq, struct request *req)
static int mmc_blk_cqe_issue_rw_rq(struct mmc_queue *mq, struct request *req)
{
struct mmc_queue_req *mqrq = req_to_mmc_queue_req(req);
+ struct mmc_card *card = mq->card;
+ struct mmc_host *host = card->host;
int err = 0;
mmc_blk_data_prep(mq, mqrq, 0, NULL, NULL);
- mqrq->brq.mrq.req = req;
mmc_deferred_scaling(mq->card->host);
mmc_cqe_clk_scaling_start_busy(mq, mq->card->host, true);
+ /*
+ * When the voltage corner is at LSVS under low load and a sudden
+ * burst of requests arrives, all device queue slots get filled and
+ * the frequency cannot be scaled up until every request completes,
+ * which delays scaling and hurts performance. Fix this by allowing
+ * only one request in the request queue while the device is running
+ * in a lower speed mode.
+ */
+ if (host->clk_scaling.state == MMC_LOAD_LOW) {
+ err = host->cqe_ops->cqe_wait_for_idle(host);
+ if (err) {
+ pr_err("%s: %s: CQE went in recovery path.\n",
+ mmc_hostname(host), __func__);
+ goto stop_scaling;
+ }
+ }
err = mmc_blk_cqe_start_req(mq->card->host, &mqrq->brq.mrq);
+stop_scaling:
if (err)
mmc_cqe_clk_scaling_stop_busy(mq->card->host, true, false);
@@ -2189,7 +2208,6 @@ static int mmc_blk_mq_issue_rw_rq(struct mmc_queue *mq,
mmc_blk_rw_rq_prep(mqrq, mq->card, 0, mq);
mqrq->brq.mrq.done = mmc_blk_mq_req_done;
- mqrq->brq.mrq.req = req;
mmc_pre_req(host, &mqrq->brq.mrq);
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index b920ba7..f483926 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -384,8 +384,6 @@ static void mmc_setup_queue(struct mmc_queue *mq, struct mmc_card *card)
blk_queue_max_hw_sectors(mq->queue,
min(host->max_blk_count, host->max_req_size / 512));
blk_queue_max_segments(mq->queue, host->max_segs);
- if (host->inlinecrypt_support)
- queue_flag_set_unlocked(QUEUE_FLAG_INLINECRYPT, mq->queue);
if (host->ops->init)
host->ops->init(host);
@@ -403,6 +401,9 @@ static void mmc_setup_queue(struct mmc_queue *mq, struct mmc_card *card)
mutex_init(&mq->complete_lock);
init_waitqueue_head(&mq->wait);
+
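+ /* Give the CQE host driver a chance to set up inline crypto on this request queue. */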
+ if (host->cqe_ops->cqe_crypto_update_queue)
+ host->cqe_ops->cqe_crypto_update_queue(host, mq->queue);
}
static int mmc_mq_init_queue(struct mmc_queue *mq, int q_depth,
diff --git a/drivers/mmc/core/sdio_cis.c b/drivers/mmc/core/sdio_cis.c
index db031f1..1a36d97 100644
--- a/drivers/mmc/core/sdio_cis.c
+++ b/drivers/mmc/core/sdio_cis.c
@@ -54,8 +54,9 @@ static int cistpl_vers_1(struct mmc_card *card, struct sdio_func *func,
string = (char*)(buffer + nr_strings);
for (i = 0; i < nr_strings; i++) {
+ size_t buf_len = strlen(buf);
buffer[i] = string;
- strlcpy(string, buf, strlen(buf) + 1);
+ strlcpy(string, buf, buf_len + 1);
string += strlen(string) + 1;
buf += strlen(buf) + 1;
}
diff --git a/drivers/mmc/host/Kconfig b/drivers/mmc/host/Kconfig
index 25a0a48..e9dcda1 100644
--- a/drivers/mmc/host/Kconfig
+++ b/drivers/mmc/host/Kconfig
@@ -151,17 +151,6 @@
help
This selects the Atmel SDMMC driver
-config MMC_SDHCI_MSM_ICE
- bool "Qualcomm Technologies, Inc Inline Crypto Engine for SDHCI core"
- depends on MMC_SDHCI_MSM && CRYPTO_DEV_QCOM_ICE
- help
- This selects the QTI specific additions to support Inline Crypto
- Engine (ICE). ICE accelerates the crypto operations and maintains
- the high SDHCI performance.
-
- Select this if you have ICE supported for SDHCI on QTI chipset.
- If unsure, say N.
-
config MMC_SDHCI_OF_ESDHC
tristate "SDHCI OF support for the Freescale eSDHC controller"
depends on MMC_SDHCI_PLTFM
@@ -957,3 +946,20 @@
If you have a controller with this interface, say Y or M here.
If unsure, say N.
+
+config MMC_CQHCI_CRYPTO
+ bool "CQHCI Crypto Engine Support"
+ depends on MMC_CQHCI && BLK_INLINE_ENCRYPTION
+ help
+ Enable Crypto Engine Support in CQHCI.
+ Enabling this makes it possible for the kernel to use the crypto
+ capabilities of the CQHCI device (if present) to perform crypto
+ operations on data being transferred to/from the device.
+
+config MMC_CQHCI_CRYPTO_QTI
+ bool "Vendor specific CQHCI Crypto Engine Support"
+ depends on MMC_CQHCI_CRYPTO
+ help
+ Enable vendor-specific Crypto Engine support in CQHCI.
+ Enabling this allows the kernel to use the CQHCI crypto operations
+ defined and implemented by QTI.
diff --git a/drivers/mmc/host/Makefile b/drivers/mmc/host/Makefile
index 72af5d8..d550cb2 100644
--- a/drivers/mmc/host/Makefile
+++ b/drivers/mmc/host/Makefile
@@ -87,12 +87,13 @@
obj-$(CONFIG_MMC_SDHCI_BCM_KONA) += sdhci-bcm-kona.o
obj-$(CONFIG_MMC_SDHCI_IPROC) += sdhci-iproc.o
obj-$(CONFIG_MMC_SDHCI_MSM) += sdhci-msm.o
-obj-$(CONFIG_MMC_SDHCI_MSM_ICE) += sdhci-msm-ice.o
obj-$(CONFIG_MMC_SDHCI_ST) += sdhci-st.o
obj-$(CONFIG_MMC_SDHCI_MICROCHIP_PIC32) += sdhci-pic32.o
obj-$(CONFIG_MMC_SDHCI_BRCMSTB) += sdhci-brcmstb.o
obj-$(CONFIG_MMC_SDHCI_OMAP) += sdhci-omap.o
obj-$(CONFIG_MMC_CQHCI) += cqhci.o
+obj-$(CONFIG_MMC_CQHCI_CRYPTO) += cqhci-crypto.o
+obj-$(CONFIG_MMC_CQHCI_CRYPTO_QTI) += cqhci-crypto-qti.o
ifeq ($(CONFIG_CB710_DEBUG),y)
CFLAGS-cb710-mmc += -DDEBUG
diff --git a/drivers/mmc/host/cqhci-crypto-qti.c b/drivers/mmc/host/cqhci-crypto-qti.c
new file mode 100644
index 0000000..7be5335
--- /dev/null
+++ b/drivers/mmc/host/cqhci-crypto-qti.c
@@ -0,0 +1,300 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2020, Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <crypto/algapi.h>
+#include "sdhci.h"
+#include "sdhci-pltfm.h"
+#include "sdhci-msm.h"
+#include "cqhci-crypto-qti.h"
+#include <linux/crypto-qti-common.h>
+
+#define RAW_SECRET_SIZE 32
+#define MINIMUM_DUN_SIZE 512
+#define MAXIMUM_DUN_SIZE 65536
+
+static struct cqhci_host_crypto_variant_ops cqhci_crypto_qti_variant_ops = {
+ .host_init_crypto = cqhci_crypto_qti_init_crypto,
+ .enable = cqhci_crypto_qti_enable,
+ .disable = cqhci_crypto_qti_disable,
+ .resume = cqhci_crypto_qti_resume,
+ .debug = cqhci_crypto_qti_debug,
+};
+
+static bool ice_cap_idx_valid(struct cqhci_host *host,
+ unsigned int cap_idx)
+{
+ return cap_idx < host->crypto_capabilities.num_crypto_cap;
+}
+
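+/*
+ * Convert a power-of-two data unit size to its single-bit mask in
+ * 512-byte units; returns 0 for unsupported sizes.
+ */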
+static uint8_t get_data_unit_size_mask(unsigned int data_unit_size)
+{
+ if (data_unit_size < MINIMUM_DUN_SIZE ||
+ data_unit_size > MAXIMUM_DUN_SIZE ||
+ !is_power_of_2(data_unit_size))
+ return 0;
+
+ return data_unit_size / MINIMUM_DUN_SIZE;
+}
+
+
+void cqhci_crypto_qti_enable(struct cqhci_host *host)
+{
+ int err = 0;
+
+ if (!cqhci_host_is_crypto_supported(host))
+ return;
+
+ host->caps |= CQHCI_CAP_CRYPTO_SUPPORT;
+
+ err = crypto_qti_enable(host->crypto_vops->priv);
+ if (err) {
+ pr_err("%s: Error enabling crypto, err %d\n",
+ __func__, err);
+ cqhci_crypto_qti_disable(host);
+ }
+}
+
+void cqhci_crypto_qti_disable(struct cqhci_host *host)
+{
+ cqhci_crypto_disable_spec(host);
+ crypto_qti_disable(host->crypto_vops->priv);
+}
+
+static int cqhci_crypto_qti_keyslot_program(struct keyslot_manager *ksm,
+ const struct blk_crypto_key *key,
+ unsigned int slot)
+{
+ struct cqhci_host *host = keyslot_manager_private(ksm);
+ int err = 0;
+ u8 data_unit_mask;
+ int crypto_alg_id;
+
+ crypto_alg_id = cqhci_crypto_cap_find(host, key->crypto_mode,
+ key->data_unit_size);
+
+ if (!cqhci_is_crypto_enabled(host) ||
+ !cqhci_keyslot_valid(host, slot) ||
+ !ice_cap_idx_valid(host, crypto_alg_id)) {
+ return -EINVAL;
+ }
+
+ data_unit_mask = get_data_unit_size_mask(key->data_unit_size);
+
+ if (!(data_unit_mask &
+ host->crypto_cap_array[crypto_alg_id].sdus_mask)) {
+ return -EINVAL;
+ }
+
+ err = crypto_qti_keyslot_program(host->crypto_vops->priv, key,
+ slot, data_unit_mask, crypto_alg_id);
+ if (err)
+ pr_err("%s: failed with error %d\n", __func__, err);
+
+ return err;
+}
+
+static int cqhci_crypto_qti_keyslot_evict(struct keyslot_manager *ksm,
+ const struct blk_crypto_key *key,
+ unsigned int slot)
+{
+ int err = 0;
+ struct cqhci_host *host = keyslot_manager_private(ksm);
+
+ if (!cqhci_is_crypto_enabled(host) ||
+ !cqhci_keyslot_valid(host, slot))
+ return -EINVAL;
+
+ err = crypto_qti_keyslot_evict(host->crypto_vops->priv, slot);
+ if (err)
+ pr_err("%s: failed with error %d\n", __func__, err);
+
+ return err;
+}
+
+static int cqhci_crypto_qti_derive_raw_secret(struct keyslot_manager *ksm,
+ const u8 *wrapped_key, unsigned int wrapped_key_size,
+ u8 *secret, unsigned int secret_size)
+{
+ int err = 0;
+
+ if (wrapped_key_size <= RAW_SECRET_SIZE) {
+ pr_err("%s: Invalid wrapped_key_size: %u\n", __func__,
+ wrapped_key_size);
+ err = -EINVAL;
+ return err;
+ }
+ if (secret_size != RAW_SECRET_SIZE) {
+ pr_err("%s: Invalid secret size: %u\n", __func__, secret_size);
+ err = -EINVAL;
+ return err;
+ }
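+ /* The raw secret is taken from the leading bytes of the wrapped key. */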
+ memcpy(secret, wrapped_key, secret_size);
+ return 0;
+}
+
+static const struct keyslot_mgmt_ll_ops cqhci_crypto_qti_ksm_ops = {
+ .keyslot_program = cqhci_crypto_qti_keyslot_program,
+ .keyslot_evict = cqhci_crypto_qti_keyslot_evict,
+ .derive_raw_secret = cqhci_crypto_qti_derive_raw_secret
+};
+
+enum blk_crypto_mode_num cqhci_blk_crypto_qti_mode_num_for_alg_dusize(
+ enum cqhci_crypto_alg cqhci_crypto_alg,
+ enum cqhci_crypto_key_size key_size)
+{
+ /*
+ * Currently the only mode that eMMC and blk-crypto both support.
+ */
+ if (cqhci_crypto_alg == CQHCI_CRYPTO_ALG_AES_XTS &&
+ key_size == CQHCI_CRYPTO_KEY_SIZE_256)
+ return BLK_ENCRYPTION_MODE_AES_256_XTS;
+
+ return BLK_ENCRYPTION_MODE_INVALID;
+}
+
+int cqhci_host_init_crypto_qti_spec(struct cqhci_host *host,
+ const struct keyslot_mgmt_ll_ops *ksm_ops)
+{
+ int cap_idx = 0;
+ int err = 0;
+ unsigned int crypto_modes_supported[BLK_ENCRYPTION_MODE_MAX];
+ enum blk_crypto_mode_num blk_mode_num;
+
+ /* Default to disabling crypto */
+ host->caps &= ~CQHCI_CAP_CRYPTO_SUPPORT;
+
+ if (!(cqhci_readl(host, CQHCI_CAP) & CQHCI_CAP_CS)) {
+ pr_debug("%s no crypto capability\n", __func__);
+ err = -ENODEV;
+ goto out;
+ }
+
+ /*
+ * Crypto Capabilities should never be 0, because the
+ * config_array_ptr > 04h. So we use a 0 value to indicate that
+ * crypto init failed, and can't be enabled.
+ */
+ host->crypto_capabilities.reg_val = cqhci_readl(host, CQHCI_CCAP);
+ host->crypto_cfg_register =
+ (u32)host->crypto_capabilities.config_array_ptr * 0x100;
+ host->crypto_cap_array =
+ devm_kcalloc(mmc_dev(host->mmc),
+ host->crypto_capabilities.num_crypto_cap,
+ sizeof(host->crypto_cap_array[0]), GFP_KERNEL);
+ if (!host->crypto_cap_array) {
+ err = -ENOMEM;
+ pr_err("%s failed to allocate memory\n", __func__);
+ goto out;
+ }
+
+ memset(crypto_modes_supported, 0, sizeof(crypto_modes_supported));
+
+ /*
+ * Store all the capabilities now so that we don't need to repeatedly
+ * access the device each time we want to know its capabilities
+ */
+ for (cap_idx = 0; cap_idx < host->crypto_capabilities.num_crypto_cap;
+ cap_idx++) {
+ host->crypto_cap_array[cap_idx].reg_val =
+ cpu_to_le32(cqhci_readl(host,
+ CQHCI_CRYPTOCAP +
+ cap_idx * sizeof(__le32)));
+ blk_mode_num = cqhci_blk_crypto_qti_mode_num_for_alg_dusize(
+ host->crypto_cap_array[cap_idx].algorithm_id,
+ host->crypto_cap_array[cap_idx].key_size);
+ if (blk_mode_num == BLK_ENCRYPTION_MODE_INVALID)
+ continue;
+ crypto_modes_supported[blk_mode_num] |=
+ host->crypto_cap_array[cap_idx].sdus_mask * 512;
+ }
+
+ host->ksm = keyslot_manager_create(cqhci_num_keyslots(host), ksm_ops,
+ crypto_modes_supported, host);
+
+ if (!host->ksm) {
+ err = -ENOMEM;
+ goto out;
+ }
+ /*
+ * If the host controller supports cryptographic operations, it uses
+ * 128-bit task descriptors; the upper 64 bits of each descriptor
+ * carry the crypto-specific information.
+ */
+ host->caps |= CQHCI_TASK_DESC_SZ_128;
+
+ return 0;
+
+out:
+ /* Indicate that init failed by setting crypto_capabilities to 0 */
+ host->crypto_capabilities.reg_val = 0;
+ return err;
+}
+
+int cqhci_crypto_qti_init_crypto(struct cqhci_host *host,
+ const struct keyslot_mgmt_ll_ops *ksm_ops)
+{
+ int err = 0;
+ struct sdhci_host *sdhci = mmc_priv(host->mmc);
+ struct sdhci_pltfm_host *pltfm_host = sdhci_priv(sdhci);
+ struct sdhci_msm_host *msm_host = pltfm_host->priv;
+ struct resource *cqhci_ice_memres = NULL;
+
+ cqhci_ice_memres = platform_get_resource_byname(msm_host->pdev,
+ IORESOURCE_MEM,
+ "cqhci_ice");
+ if (!cqhci_ice_memres) {
+ pr_debug("%s ICE not supported\n", __func__);
+ host->icemmio = NULL;
+ return PTR_ERR(cqhci_ice_memres);
+ }
+
+ host->icemmio = devm_ioremap(&msm_host->pdev->dev,
+ cqhci_ice_memres->start,
+ resource_size(cqhci_ice_memres));
+ if (!host->icemmio) {
+ pr_err("%s failed to remap ice regs\n", __func__);
+ return -ENOMEM;
+ }
+
+ err = cqhci_host_init_crypto_qti_spec(host, &cqhci_crypto_qti_ksm_ops);
+ if (err) {
+ pr_err("%s: Error initiating crypto capabilities, err %d\n",
+ __func__, err);
+ return err;
+ }
+
+ err = crypto_qti_init_crypto(&msm_host->pdev->dev,
+ host->icemmio, (void **)&host->crypto_vops->priv);
+ if (err) {
+ pr_err("%s: Error initiating crypto, err %d\n",
+ __func__, err);
+ }
+ return err;
+}
+
+int cqhci_crypto_qti_debug(struct cqhci_host *host)
+{
+ return crypto_qti_debug(host->crypto_vops->priv);
+}
+
+void cqhci_crypto_qti_set_vops(struct cqhci_host *host)
+{
+ return cqhci_crypto_set_vops(host, &cqhci_crypto_qti_variant_ops);
+}
+
+int cqhci_crypto_qti_resume(struct cqhci_host *host)
+{
+ return crypto_qti_resume(host->crypto_vops->priv);
+}
diff --git a/drivers/mmc/host/cqhci-crypto-qti.h b/drivers/mmc/host/cqhci-crypto-qti.h
new file mode 100644
index 0000000..2788e96b
--- /dev/null
+++ b/drivers/mmc/host/cqhci-crypto-qti.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2020, The Linux Foundation. All rights reserved.
+ */
+
+#ifndef _CQHCI_CRYPTO_QTI_H
+#define _CQHCI_CRYPTO_QTI_H
+
+#include "cqhci-crypto.h"
+
+void cqhci_crypto_qti_enable(struct cqhci_host *host);
+
+void cqhci_crypto_qti_disable(struct cqhci_host *host);
+
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+int cqhci_crypto_qti_init_crypto(struct cqhci_host *host,
+ const struct keyslot_mgmt_ll_ops *ksm_ops);
+#endif
+
+int cqhci_crypto_qti_debug(struct cqhci_host *host);
+
+void cqhci_crypto_qti_set_vops(struct cqhci_host *host);
+
+int cqhci_crypto_qti_resume(struct cqhci_host *host);
+
+#endif /* _CQHCI_CRYPTO_QTI_H */
diff --git a/drivers/mmc/host/cqhci-crypto.c b/drivers/mmc/host/cqhci-crypto.c
new file mode 100644
index 0000000..5b06a6b
--- /dev/null
+++ b/drivers/mmc/host/cqhci-crypto.c
@@ -0,0 +1,528 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2020 Google LLC
+ *
+ * Copyright (c) 2020 The Linux Foundation. All rights reserved.
+ *
+ * drivers/mmc/host/cqhci-crypto.c - Qualcomm Technologies, Inc.
+ *
+ * Original source is taken from:
+ * https://android.googlesource.com/kernel/common/+/4bac1109a10c55d49c0aa4f7ebdc4bc53cc368e8
+ * which provides crypto engine support for UFS controllers. The crypto
+ * engine programming sequence, HW functionality and register offsets are
+ * almost the same in UFS and eMMC controllers.
+ */
+
+#include <crypto/algapi.h>
+#include "cqhci-crypto.h"
+#include "../core/queue.h"
+
+static bool cqhci_cap_idx_valid(struct cqhci_host *host, unsigned int cap_idx)
+{
+ return cap_idx < host->crypto_capabilities.num_crypto_cap;
+}
+
+static u8 get_data_unit_size_mask(unsigned int data_unit_size)
+{
+ if (data_unit_size < 512 || data_unit_size > 65536 ||
+ !is_power_of_2(data_unit_size))
+ return 0;
+
+ return data_unit_size / 512;
+}
+
+static size_t get_keysize_bytes(enum cqhci_crypto_key_size size)
+{
+ switch (size) {
+ case CQHCI_CRYPTO_KEY_SIZE_128:
+ return 16;
+ case CQHCI_CRYPTO_KEY_SIZE_192:
+ return 24;
+ case CQHCI_CRYPTO_KEY_SIZE_256:
+ return 32;
+ case CQHCI_CRYPTO_KEY_SIZE_512:
+ return 64;
+ default:
+ return 0;
+ }
+}
+
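+/*
+ * Find the crypto capability entry that supports the given blk-crypto mode
+ * and data unit size. Returns the index into host->crypto_cap_array, or
+ * -EINVAL if no matching capability exists.
+ */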
+int cqhci_crypto_cap_find(void *host_p, enum blk_crypto_mode_num crypto_mode,
+ unsigned int data_unit_size)
+{
+ struct cqhci_host *host = host_p;
+ enum cqhci_crypto_alg cqhci_alg;
+ u8 data_unit_mask;
+ int cap_idx;
+ enum cqhci_crypto_key_size cqhci_key_size;
+ union cqhci_crypto_cap_entry *ccap_array = host->crypto_cap_array;
+
+ if (!cqhci_host_is_crypto_supported(host))
+ return -EINVAL;
+
+ switch (crypto_mode) {
+ case BLK_ENCRYPTION_MODE_AES_256_XTS:
+ cqhci_alg = CQHCI_CRYPTO_ALG_AES_XTS;
+ cqhci_key_size = CQHCI_CRYPTO_KEY_SIZE_256;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ data_unit_mask = get_data_unit_size_mask(data_unit_size);
+
+ for (cap_idx = 0; cap_idx < host->crypto_capabilities.num_crypto_cap;
+ cap_idx++) {
+ if (ccap_array[cap_idx].algorithm_id == cqhci_alg &&
+ (ccap_array[cap_idx].sdus_mask & data_unit_mask) &&
+ ccap_array[cap_idx].key_size == cqhci_key_size)
+ return cap_idx;
+ }
+
+ return -EINVAL;
+}
+EXPORT_SYMBOL(cqhci_crypto_cap_find);
+
+/**
+ * cqhci_crypto_cfg_entry_write_key - Write a key into a crypto_cfg_entry
+ *
+ * Writes the key with the appropriate format - for AES_XTS,
+ * the first half of the key is copied as is, the second half is
+ * copied with an offset halfway into the cfg->crypto_key array.
+ * For the other supported crypto algs, the key is just copied.
+ *
+ * @cfg: The crypto config to write to
+ * @key: The key to write
+ * @cap: The crypto capability (which specifies the crypto alg and key size)
+ *
+ * Returns 0 on success, or -EINVAL
+ */
+static int cqhci_crypto_cfg_entry_write_key(union cqhci_crypto_cfg_entry *cfg,
+ const u8 *key,
+ union cqhci_crypto_cap_entry cap)
+{
+ size_t key_size_bytes = get_keysize_bytes(cap.key_size);
+
+ if (key_size_bytes == 0)
+ return -EINVAL;
+
+ switch (cap.algorithm_id) {
+ case CQHCI_CRYPTO_ALG_AES_XTS:
+ key_size_bytes *= 2;
+ if (key_size_bytes > CQHCI_CRYPTO_KEY_MAX_SIZE)
+ return -EINVAL;
+
+ memcpy(cfg->crypto_key, key, key_size_bytes/2);
+ memcpy(cfg->crypto_key + CQHCI_CRYPTO_KEY_MAX_SIZE/2,
+ key + key_size_bytes/2, key_size_bytes/2);
+ return 0;
+ case CQHCI_CRYPTO_ALG_BITLOCKER_AES_CBC:
+ /* fall through */
+ case CQHCI_CRYPTO_ALG_AES_ECB:
+ /* fall through */
+ case CQHCI_CRYPTO_ALG_ESSIV_AES_CBC:
+ memcpy(cfg->crypto_key, key, key_size_bytes);
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
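+/*
+ * Write a crypto configuration entry into the given keyslot, clearing CFGE
+ * (dword 16) first and writing it back last so that the configuration only
+ * takes effect once the key material is fully programmed.
+ */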
+static void cqhci_program_key(struct cqhci_host *host,
+ const union cqhci_crypto_cfg_entry *cfg,
+ int slot)
+{
+ int i;
+ u32 slot_offset = host->crypto_cfg_register + slot * sizeof(*cfg);
+
+ if (host->crypto_vops && host->crypto_vops->program_key)
+ host->crypto_vops->program_key(host, cfg, slot);
+
+ /* Clear the dword 16 */
+ cqhci_writel(host, 0, slot_offset + 16 * sizeof(cfg->reg_val[0]));
+ /* Ensure that CFGE is cleared before programming the key */
+ wmb();
+ for (i = 0; i < 16; i++) {
+ cqhci_writel(host, le32_to_cpu(cfg->reg_val[i]),
+ slot_offset + i * sizeof(cfg->reg_val[0]));
+ /* Spec says each dword in key must be written sequentially */
+ wmb();
+ }
+ /* Write dword 17 */
+ cqhci_writel(host, le32_to_cpu(cfg->reg_val[17]),
+ slot_offset + 17 * sizeof(cfg->reg_val[0]));
+ /* Dword 16 must be written last */
+ wmb();
+ /* Write dword 16 */
+ cqhci_writel(host, le32_to_cpu(cfg->reg_val[16]),
+ slot_offset + 16 * sizeof(cfg->reg_val[0]));
+ /* Ensure that dword 16 is written */
+ wmb();
+}
+
+static void cqhci_crypto_clear_keyslot(struct cqhci_host *host, int slot)
+{
+ union cqhci_crypto_cfg_entry cfg = { {0} };
+
+ cqhci_program_key(host, &cfg, slot);
+}
+
+static void cqhci_crypto_clear_all_keyslots(struct cqhci_host *host)
+{
+ int slot;
+
+ for (slot = 0; slot < cqhci_num_keyslots(host); slot++)
+ cqhci_crypto_clear_keyslot(host, slot);
+}
+
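+/*
+ * Keyslot manager callback: program @key into keyslot @slot by building a
+ * crypto configuration entry (data unit size, capability index, CFGE) and
+ * writing it to the host.
+ */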
+static int cqhci_crypto_keyslot_program(struct keyslot_manager *ksm,
+ const struct blk_crypto_key *key,
+ unsigned int slot)
+{
+ struct cqhci_host *host = keyslot_manager_private(ksm);
+ int err = 0;
+ u8 data_unit_mask;
+ union cqhci_crypto_cfg_entry cfg;
+ int cap_idx;
+
+ cap_idx = cqhci_crypto_cap_find(host, key->crypto_mode,
+ key->data_unit_size);
+
+ if (!cqhci_is_crypto_enabled(host) ||
+ !cqhci_keyslot_valid(host, slot) ||
+ !cqhci_cap_idx_valid(host, cap_idx))
+ return -EINVAL;
+
+ data_unit_mask = get_data_unit_size_mask(key->data_unit_size);
+
+ if (!(data_unit_mask & host->crypto_cap_array[cap_idx].sdus_mask))
+ return -EINVAL;
+
+ memset(&cfg, 0, sizeof(cfg));
+ cfg.data_unit_size = data_unit_mask;
+ cfg.crypto_cap_idx = cap_idx;
+ cfg.config_enable |= CQHCI_CRYPTO_CONFIGURATION_ENABLE;
+
+ err = cqhci_crypto_cfg_entry_write_key(&cfg, key->raw,
+ host->crypto_cap_array[cap_idx]);
+ if (err)
+ return err;
+
+ cqhci_program_key(host, &cfg, slot);
+
+ memzero_explicit(&cfg, sizeof(cfg));
+
+ return 0;
+}
+
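+/*
+ * Keyslot manager callback: evict the key in @slot by clearing the entire
+ * crypto configuration entry for that slot.
+ */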
+static int cqhci_crypto_keyslot_evict(struct keyslot_manager *ksm,
+ const struct blk_crypto_key *key,
+ unsigned int slot)
+{
+ struct cqhci_host *host = keyslot_manager_private(ksm);
+
+ if (!cqhci_is_crypto_enabled(host) ||
+ !cqhci_keyslot_valid(host, slot))
+ return -EINVAL;
+
+ /*
+ * Clear the crypto cfg on the device. Clearing CFGE
+ * might not be sufficient, so just clear the entire cfg.
+ */
+ cqhci_crypto_clear_keyslot(host, slot);
+
+ return 0;
+}
+
+/* Functions implementing eMMC v5.2 specification behaviour */
+void cqhci_crypto_enable_spec(struct cqhci_host *host)
+{
+ if (!cqhci_host_is_crypto_supported(host))
+ return;
+
+ host->caps |= CQHCI_CAP_CRYPTO_SUPPORT;
+}
+EXPORT_SYMBOL(cqhci_crypto_enable_spec);
+
+void cqhci_crypto_disable_spec(struct cqhci_host *host)
+{
+ host->caps &= ~CQHCI_CAP_CRYPTO_SUPPORT;
+}
+EXPORT_SYMBOL(cqhci_crypto_disable_spec);
+
+static const struct keyslot_mgmt_ll_ops cqhci_ksm_ops = {
+ .keyslot_program = cqhci_crypto_keyslot_program,
+ .keyslot_evict = cqhci_crypto_keyslot_evict,
+};
+
+enum blk_crypto_mode_num cqhci_crypto_blk_crypto_mode_num_for_alg_dusize(
+ enum cqhci_crypto_alg cqhci_crypto_alg,
+ enum cqhci_crypto_key_size key_size)
+{
+ /*
+ * Currently the only mode that eMMC and blk-crypto both support.
+ */
+ if (cqhci_crypto_alg == CQHCI_CRYPTO_ALG_AES_XTS &&
+ key_size == CQHCI_CRYPTO_KEY_SIZE_256)
+ return BLK_ENCRYPTION_MODE_AES_256_XTS;
+
+ return BLK_ENCRYPTION_MODE_INVALID;
+}
+
+/**
+ * cqhci_host_init_crypto_spec - Read crypto capabilities, init crypto fields in host
+ * @host: Per adapter instance
+ * @ksm_ops: Keyslot manager operations to register for this host
+ *
+ * Returns 0 on success. Returns -ENODEV if such capabilities don't exist, and
+ * -ENOMEM upon OOM.
+ */
+int cqhci_host_init_crypto_spec(struct cqhci_host *host,
+ const struct keyslot_mgmt_ll_ops *ksm_ops)
+{
+ int cap_idx = 0;
+ int err = 0;
+ unsigned int crypto_modes_supported[BLK_ENCRYPTION_MODE_MAX];
+ enum blk_crypto_mode_num blk_mode_num;
+
+ /* Default to disabling crypto */
+ host->caps &= ~CQHCI_CAP_CRYPTO_SUPPORT;
+
+ if (!(cqhci_readl(host, CQHCI_CAP) & CQHCI_CAP_CS)) {
+ pr_err("%s no crypto capability\n", __func__);
+ err = -ENODEV;
+ goto out;
+ }
+
+ /*
+ * The crypto capabilities register should never read 0, because
+ * config_array_ptr is always greater than 04h. A value of 0 is
+ * therefore used to indicate that crypto init failed and crypto
+ * cannot be enabled.
+ */
+ host->crypto_capabilities.reg_val = cqhci_readl(host, CQHCI_CCAP);
+ host->crypto_cfg_register =
+ (u32)host->crypto_capabilities.config_array_ptr * 0x100;
+ host->crypto_cap_array =
+ devm_kcalloc(mmc_dev(host->mmc),
+ host->crypto_capabilities.num_crypto_cap,
+ sizeof(host->crypto_cap_array[0]), GFP_KERNEL);
+ if (!host->crypto_cap_array) {
+ err = -ENOMEM;
+ pr_err("%s no memory cap\n", __func__);
+ goto out;
+ }
+
+ memset(crypto_modes_supported, 0, sizeof(crypto_modes_supported));
+
+ /*
+ * Store all the capabilities now so that we don't need to repeatedly
+ * access the device each time we want to know its capabilities
+ */
+ for (cap_idx = 0; cap_idx < host->crypto_capabilities.num_crypto_cap;
+ cap_idx++) {
+ host->crypto_cap_array[cap_idx].reg_val =
+ cpu_to_le32(cqhci_readl(host,
+ CQHCI_CRYPTOCAP +
+ cap_idx * sizeof(__le32)));
+ blk_mode_num = cqhci_crypto_blk_crypto_mode_num_for_alg_dusize(
+ host->crypto_cap_array[cap_idx].algorithm_id,
+ host->crypto_cap_array[cap_idx].key_size);
+ if (blk_mode_num == BLK_ENCRYPTION_MODE_INVALID)
+ continue;
+ crypto_modes_supported[blk_mode_num] |=
+ host->crypto_cap_array[cap_idx].sdus_mask * 512;
+ }
+
+ cqhci_crypto_clear_all_keyslots(host);
+
+ host->ksm = keyslot_manager_create(cqhci_num_keyslots(host), ksm_ops,
+ crypto_modes_supported, host);
+
+ if (!host->ksm) {
+ err = -ENOMEM;
+ goto out_free_caps;
+ }
+ /*
+ * If the host controller supports cryptographic operations it uses
+ * 128-bit task descriptors; the upper 64 bits of each descriptor
+ * carry the crypto-specific information.
+ */
+ host->caps |= CQHCI_TASK_DESC_SZ_128;
+
+ return 0;
+out_free_caps:
+ devm_kfree(mmc_dev(host->mmc), host->crypto_cap_array);
+out:
+ /* Indicate that init failed by setting crypto_capabilities to 0 */
+ host->crypto_capabilities.reg_val = 0;
+ return err;
+}
+EXPORT_SYMBOL(cqhci_host_init_crypto_spec);
+
+void cqhci_crypto_setup_rq_keyslot_manager_spec(struct cqhci_host *host,
+ struct request_queue *q)
+{
+ if (!cqhci_host_is_crypto_supported(host) || !q)
+ return;
+
+ q->ksm = host->ksm;
+}
+EXPORT_SYMBOL(cqhci_crypto_setup_rq_keyslot_manager_spec);
+
+void cqhci_crypto_destroy_rq_keyslot_manager_spec(struct cqhci_host *host,
+ struct request_queue *q)
+{
+ keyslot_manager_destroy(host->ksm);
+}
+EXPORT_SYMBOL(cqhci_crypto_destroy_rq_keyslot_manager_spec);
+
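+/*
+ * Build the 64-bit ICE context for a request: DUN in bits [31:0], the crypto
+ * configuration index (keyslot) in bits [39:32] and the crypto enable bit at
+ * bit 47, as defined by the DATA_UNIT_NUM/CRYPTO_CONFIG_INDEX/CRYPTO_ENABLE
+ * macros. *ice_ctx is left 0 for requests without an encryption context.
+ */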
+int cqhci_prepare_crypto_desc_spec(struct cqhci_host *host,
+ struct mmc_request *mrq,
+ u64 *ice_ctx)
+{
+ struct bio_crypt_ctx *bc;
+ struct mmc_queue_req *mqrq = container_of(mrq, struct mmc_queue_req,
+ brq.mrq);
+ struct request *req = mmc_queue_req_to_req(mqrq);
+
+ if (!req->bio ||
+ !bio_crypt_should_process(req)) {
+ *ice_ctx = 0;
+ return 0;
+ }
+ if (WARN_ON(!cqhci_is_crypto_enabled(host))) {
+ /*
+ * Upper layer asked us to do inline encryption
+ * but that isn't enabled, so we fail this request.
+ */
+ return -EINVAL;
+ }
+
+ bc = req->bio->bi_crypt_context;
+
+ if (!cqhci_keyslot_valid(host, bc->bc_keyslot))
+ return -EINVAL;
+
+ if (ice_ctx) {
+ *ice_ctx = DATA_UNIT_NUM(bc->bc_dun[0]) |
+ CRYPTO_CONFIG_INDEX(bc->bc_keyslot) |
+ CRYPTO_ENABLE(true);
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(cqhci_prepare_crypto_desc_spec);
+
+/* Crypto Variant Ops Support */
+
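+/*
+ * Each wrapper below dispatches to the vendor hook in host->crypto_vops when
+ * one is registered, and otherwise falls back to the spec-compliant
+ * implementation above (or a no-op where none exists).
+ */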
+void cqhci_crypto_enable(struct cqhci_host *host)
+{
+ if (host->crypto_vops && host->crypto_vops->enable)
+ return host->crypto_vops->enable(host);
+
+ return cqhci_crypto_enable_spec(host);
+}
+
+void cqhci_crypto_disable(struct cqhci_host *host)
+{
+ if (host->crypto_vops && host->crypto_vops->disable)
+ return host->crypto_vops->disable(host);
+
+ return cqhci_crypto_disable_spec(host);
+}
+
+int cqhci_host_init_crypto(struct cqhci_host *host)
+{
+ if (host->crypto_vops && host->crypto_vops->host_init_crypto)
+ return host->crypto_vops->host_init_crypto(host,
+ &cqhci_ksm_ops);
+
+ return cqhci_host_init_crypto_spec(host, &cqhci_ksm_ops);
+}
+
+void cqhci_crypto_setup_rq_keyslot_manager(struct cqhci_host *host,
+ struct request_queue *q)
+{
+ if (host->crypto_vops && host->crypto_vops->setup_rq_keyslot_manager)
+ return host->crypto_vops->setup_rq_keyslot_manager(host, q);
+
+ return cqhci_crypto_setup_rq_keyslot_manager_spec(host, q);
+}
+
+void cqhci_crypto_destroy_rq_keyslot_manager(struct cqhci_host *host,
+ struct request_queue *q)
+{
+ if (host->crypto_vops && host->crypto_vops->destroy_rq_keyslot_manager)
+ return host->crypto_vops->destroy_rq_keyslot_manager(host, q);
+
+ return cqhci_crypto_destroy_rq_keyslot_manager_spec(host, q);
+}
+
+int cqhci_crypto_get_ctx(struct cqhci_host *host,
+ struct mmc_request *mrq,
+ u64 *ice_ctx)
+{
+ if (host->crypto_vops && host->crypto_vops->prepare_crypto_desc)
+ return host->crypto_vops->prepare_crypto_desc(host, mrq,
+ ice_ctx);
+
+ return cqhci_prepare_crypto_desc_spec(host, mrq, ice_ctx);
+}
+
+int cqhci_complete_crypto_desc(struct cqhci_host *host,
+ struct mmc_request *mrq,
+ u64 *ice_ctx)
+{
+ if (host->crypto_vops && host->crypto_vops->complete_crypto_desc)
+ return host->crypto_vops->complete_crypto_desc(host, mrq,
+ ice_ctx);
+
+ return 0;
+}
+
+void cqhci_crypto_debug(struct cqhci_host *host)
+{
+ if (host->crypto_vops && host->crypto_vops->debug)
+ host->crypto_vops->debug(host);
+}
+
+void cqhci_crypto_set_vops(struct cqhci_host *host,
+ struct cqhci_host_crypto_variant_ops *crypto_vops)
+{
+ host->crypto_vops = crypto_vops;
+}
+
+int cqhci_crypto_suspend(struct cqhci_host *host)
+{
+ if (host->crypto_vops && host->crypto_vops->suspend)
+ return host->crypto_vops->suspend(host);
+
+ return 0;
+}
+
+int cqhci_crypto_resume(struct cqhci_host *host)
+{
+ if (host->crypto_vops && host->crypto_vops->resume)
+ return host->crypto_vops->resume(host);
+
+ return 0;
+}
+
+int cqhci_crypto_reset(struct cqhci_host *host)
+{
+ if (host->crypto_vops && host->crypto_vops->reset)
+ return host->crypto_vops->reset(host);
+
+ return 0;
+}
+
+int cqhci_crypto_recovery_finish(struct cqhci_host *host)
+{
+ if (host->crypto_vops && host->crypto_vops->recovery_finish)
+ return host->crypto_vops->recovery_finish(host);
+
+ /* Reset/Recovery might clear all keys, so reprogram all the keys. */
+ keyslot_manager_reprogram_all_keys(host->ksm);
+
+ return 0;
+}
diff --git a/drivers/mmc/host/cqhci-crypto.h b/drivers/mmc/host/cqhci-crypto.h
new file mode 100644
index 0000000..fefad90
--- /dev/null
+++ b/drivers/mmc/host/cqhci-crypto.h
@@ -0,0 +1,182 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2019 Google LLC
+ *
+ * Copyright (c) 2020 The Linux Foundation. All rights reserved.
+ *
+ */
+
+#ifndef _CQHCI_CRYPTO_H
+#define _CQHCI_CRYPTO_H
+
+#ifdef CONFIG_MMC_CQHCI_CRYPTO
+#include <linux/mmc/host.h>
+#include "cqhci.h"
+
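+/*
+ * The crypto config count field (CFGC) is zero-based, so the number of
+ * keyslots is config_count + 1.
+ */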
+static inline int cqhci_num_keyslots(struct cqhci_host *host)
+{
+ return host->crypto_capabilities.config_count + 1;
+}
+
+static inline bool cqhci_keyslot_valid(struct cqhci_host *host,
+ unsigned int slot)
+{
+ /*
+ * The actual number of configurations supported is (CFGC+1), so slot
+ * numbers range from 0 to config_count inclusive.
+ */
+ return slot < cqhci_num_keyslots(host);
+}
+
+static inline bool cqhci_host_is_crypto_supported(struct cqhci_host *host)
+{
+ return host->crypto_capabilities.reg_val != 0;
+}
+
+static inline bool cqhci_is_crypto_enabled(struct cqhci_host *host)
+{
+ return host->caps & CQHCI_CAP_CRYPTO_SUPPORT;
+}
+
+/* Functions implementing eMMC v5.2 specification behaviour */
+int cqhci_prepare_crypto_desc_spec(struct cqhci_host *host,
+ struct mmc_request *mrq,
+ u64 *ice_ctx);
+
+void cqhci_crypto_enable_spec(struct cqhci_host *host);
+
+void cqhci_crypto_disable_spec(struct cqhci_host *host);
+
+int cqhci_host_init_crypto_spec(struct cqhci_host *host,
+ const struct keyslot_mgmt_ll_ops *ksm_ops);
+
+void cqhci_crypto_setup_rq_keyslot_manager_spec(struct cqhci_host *host,
+ struct request_queue *q);
+
+void cqhci_crypto_destroy_rq_keyslot_manager_spec(struct cqhci_host *host,
+ struct request_queue *q);
+
+void cqhci_crypto_set_vops(struct cqhci_host *host,
+ struct cqhci_host_crypto_variant_ops *crypto_vops);
+
+/* Crypto Variant Ops Support */
+
+void cqhci_crypto_enable(struct cqhci_host *host);
+
+void cqhci_crypto_disable(struct cqhci_host *host);
+
+int cqhci_host_init_crypto(struct cqhci_host *host);
+
+void cqhci_crypto_setup_rq_keyslot_manager(struct cqhci_host *host,
+ struct request_queue *q);
+
+void cqhci_crypto_destroy_rq_keyslot_manager(struct cqhci_host *host,
+ struct request_queue *q);
+
+int cqhci_crypto_get_ctx(struct cqhci_host *host,
+ struct mmc_request *mrq,
+ u64 *ice_ctx);
+
+int cqhci_complete_crypto_desc(struct cqhci_host *host,
+ struct mmc_request *mrq,
+ u64 *ice_ctx);
+
+void cqhci_crypto_debug(struct cqhci_host *host);
+
+int cqhci_crypto_suspend(struct cqhci_host *host);
+
+int cqhci_crypto_resume(struct cqhci_host *host);
+
+int cqhci_crypto_reset(struct cqhci_host *host);
+
+int cqhci_crypto_recovery_finish(struct cqhci_host *host);
+
+int cqhci_crypto_cap_find(void *host_p, enum blk_crypto_mode_num crypto_mode,
+ unsigned int data_unit_size);
+
+#else /* CONFIG_MMC_CQHCI_CRYPTO */
+
+static inline bool cqhci_keyslot_valid(struct cqhci_host *host,
+ unsigned int slot)
+{
+ return false;
+}
+
+static inline bool cqhci_host_is_crypto_supported(struct cqhci_host *host)
+{
+ return false;
+}
+
+static inline bool cqhci_is_crypto_enabled(struct cqhci_host *host)
+{
+ return false;
+}
+
+static inline void cqhci_crypto_enable(struct cqhci_host *host) { }
+
+static inline int cqhci_crypto_cap_find(void *host_p,
+ enum blk_crypto_mode_num crypto_mode,
+ unsigned int data_unit_size)
+{
+ return 0;
+}
+
+static inline void cqhci_crypto_disable(struct cqhci_host *host) { }
+
+static inline int cqhci_host_init_crypto(struct cqhci_host *host)
+{
+ return 0;
+}
+
+static inline void cqhci_crypto_setup_rq_keyslot_manager(
+ struct cqhci_host *host,
+ struct request_queue *q) { }
+
+static inline void
+cqhci_crypto_destroy_rq_keyslot_manager(struct cqhci_host *host,
+ struct request_queue *q) { }
+
+static inline int cqhci_crypto_get_ctx(struct cqhci_host *host,
+ struct mmc_request *mrq,
+ u64 *ice_ctx)
+{
+ *ice_ctx = 0;
+ return 0;
+}
+
+static inline int cqhci_complete_crypto_desc(struct cqhci_host *host,
+ struct mmc_request *mrq,
+ u64 *ice_ctx)
+{
+ return 0;
+}
+
+static inline void cqhci_crypto_debug(struct cqhci_host *host) { }
+
+static inline void cqhci_crypto_set_vops(struct cqhci_host *host,
+ struct cqhci_host_crypto_variant_ops *crypto_vops) { }
+
+static inline int cqhci_crypto_suspend(struct cqhci_host *host)
+{
+ return 0;
+}
+
+static inline int cqhci_crypto_resume(struct cqhci_host *host)
+{
+ return 0;
+}
+
+static inline int cqhci_crypto_reset(struct cqhci_host *host)
+{
+ return 0;
+}
+
+static inline int cqhci_crypto_recovery_finish(struct cqhci_host *host)
+{
+ return 0;
+}
+
+#endif /* CONFIG_MMC_CQHCI_CRYPTO */
+#endif /* _CQHCI_CRYPTO_H */
diff --git a/drivers/mmc/host/cqhci.c b/drivers/mmc/host/cqhci.c
index b759421..55bfcfd 100644
--- a/drivers/mmc/host/cqhci.c
+++ b/drivers/mmc/host/cqhci.c
@@ -17,7 +17,10 @@
#include <linux/mmc/host.h>
#include <linux/mmc/card.h>
+#include "../core/queue.h"
#include "cqhci.h"
+#include "cqhci-crypto.h"
+
#include "sdhci-msm.h"
#define DCMD_SLOT 31
@@ -154,6 +157,8 @@ static void cqhci_dumpregs(struct cqhci_host *cq_host)
CQHCI_DUMP("Vendor cfg 0x%08x\n",
cqhci_readl(cq_host, CQHCI_VENDOR_CFG + offset));
+ cqhci_crypto_debug(cq_host);
+
if (cq_host->ops->dumpregs)
cq_host->ops->dumpregs(mmc);
else
@@ -257,7 +262,6 @@ static void __cqhci_enable(struct cqhci_host *cq_host)
{
struct mmc_host *mmc = cq_host->mmc;
u32 cqcfg;
- u32 cqcap = 0;
cqcfg = cqhci_readl(cq_host, CQHCI_CFG);
@@ -275,16 +279,10 @@ static void __cqhci_enable(struct cqhci_host *cq_host)
if (cq_host->caps & CQHCI_TASK_DESC_SZ_128)
cqcfg |= CQHCI_TASK_DESC_SZ;
- cqcap = cqhci_readl(cq_host, CQHCI_CAP);
- if (cqcap & CQHCI_CAP_CS) {
- /*
- * In case host controller supports cryptographic operations
- * then, enable crypro support.
- */
- cq_host->caps |= CQHCI_CAP_CRYPTO_SUPPORT;
+ if (cqhci_host_is_crypto_supported(cq_host)) {
+ cqhci_crypto_enable(cq_host);
cqcfg |= CQHCI_ICE_ENABLE;
- /*
- * For SDHC v5.0 onwards, ICE 3.0 specific registers are added
+ /* For SDHC v5.0 onwards, ICE 3.0 specific registers are added
* in CQ register space, due to which few CQ registers are
* shifted. Set offset_changed boolean to use updated address.
*/
@@ -326,6 +324,9 @@ static void __cqhci_disable(struct cqhci_host *cq_host)
{
u32 cqcfg;
+ if (cqhci_host_is_crypto_supported(cq_host))
+ cqhci_crypto_disable(cq_host);
+
cqcfg = cqhci_readl(cq_host, CQHCI_CFG);
cqcfg &= ~CQHCI_ENABLE;
cqhci_writel(cq_host, cqcfg, CQHCI_CFG);
@@ -333,6 +334,7 @@ static void __cqhci_disable(struct cqhci_host *cq_host)
cq_host->mmc->cqe_on = false;
cq_host->activated = false;
+
mmc_log_string(cq_host->mmc, "CQ disabled\n");
}
@@ -340,6 +342,8 @@ int cqhci_suspend(struct mmc_host *mmc)
{
struct cqhci_host *cq_host = mmc->cqe_private;
+ cqhci_crypto_suspend(cq_host);
+
if (cq_host->enabled)
__cqhci_disable(cq_host);
@@ -584,16 +588,23 @@ static void cqhci_pm_qos_vote(struct sdhci_host *host, struct mmc_request *mrq)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
+ struct mmc_queue_req *mqrq = container_of(mrq, struct mmc_queue_req,
+ brq.mrq);
+ struct request *req = mmc_queue_req_to_req(mqrq);
sdhci_msm_pm_qos_cpu_vote(host,
- msm_host->pdata->pm_qos_data.cmdq_latency, mrq->req->cpu);
+ msm_host->pdata->pm_qos_data.cmdq_latency, req->cpu);
}
static void cqhci_pm_qos_unvote(struct sdhci_host *host,
struct mmc_request *mrq)
{
+ struct mmc_queue_req *mqrq = container_of(mrq, struct mmc_queue_req,
+ brq.mrq);
+ struct request *req = mmc_queue_req_to_req(mqrq);
+
/* use async as we're inside an atomic context (soft-irq) */
- sdhci_msm_pm_qos_cpu_unvote(host, mrq->req->cpu, true);
+ sdhci_msm_pm_qos_cpu_unvote(host, req->cpu, true);
}
static void cqhci_post_req(struct mmc_host *host, struct mmc_request *mrq)
@@ -618,7 +629,7 @@ static inline int cqhci_tag(struct mmc_request *mrq)
}
static inline
-void cqe_prep_crypto_desc(struct cqhci_host *cq_host, u64 *task_desc,
+void cqhci_prep_crypto_desc(struct cqhci_host *cq_host, u64 *task_desc,
u64 ice_ctx)
{
u64 *ice_desc = NULL;
@@ -629,8 +640,8 @@ void cqe_prep_crypto_desc(struct cqhci_host *cq_host, u64 *task_desc,
* ice context is present in the upper 64bits of task descriptor
* ice_conext_base_address = task_desc + 8-bytes
*/
- ice_desc = (__le64 __force *)((u8 *)task_desc +
- CQHCI_TASK_DESC_TASK_PARAMS_SIZE);
+ ice_desc = (u64 *)((u8 *)task_desc +
+ CQHCI_TASK_DESC_ICE_PARAM_OFFSET);
memset(ice_desc, 0, CQHCI_TASK_DESC_ICE_PARAMS_SIZE);
/*
@@ -675,25 +686,23 @@ static int cqhci_request(struct mmc_host *mmc, struct mmc_request *mrq)
}
if (mrq->data) {
- if (cq_host->ops->crypto_cfg) {
- err = cq_host->ops->crypto_cfg(mmc, mrq, tag, &ice_ctx);
- if (err) {
- mmc->err_stats[MMC_ERR_ICE_CFG]++;
- pr_err("%s: failed to configure crypto: err %d tag %d\n",
- mmc_hostname(mmc), err, tag);
- goto out;
- }
+ err = cqhci_crypto_get_ctx(cq_host, mrq, &ice_ctx);
+ if (err) {
+ mmc->err_stats[MMC_ERR_ICE_CFG]++;
+ pr_err("%s: failed to retrieve crypto ctx for tag %d\n",
+ mmc_hostname(mmc), tag);
+ goto out;
}
task_desc = (__le64 __force *)get_desc(cq_host, tag);
cqhci_prep_task_desc(mrq, &data, 1);
*task_desc = cpu_to_le64(data);
- cqe_prep_crypto_desc(cq_host, task_desc, ice_ctx);
+ cqhci_prep_crypto_desc(cq_host, task_desc, ice_ctx);
err = cqhci_prep_tran_desc(mrq, cq_host, tag);
if (err) {
pr_err("%s: cqhci: failed to setup tx desc: %d\n",
mmc_hostname(mmc), err);
- goto end_crypto;
+ goto out;
}
/* PM QoS */
sdhci_msm_pm_qos_irq_vote(host);
@@ -735,23 +744,26 @@ static int cqhci_request(struct mmc_host *mmc, struct mmc_request *mrq)
if (err)
cqhci_post_req(mmc, mrq);
- goto out;
-
-end_crypto:
- if (cq_host->ops->crypto_cfg_end && mrq->data) {
- err = cq_host->ops->crypto_cfg_end(mmc, mrq);
- if (err)
- pr_err("%s: failed to end ice config: err %d tag %d\n",
- mmc_hostname(mmc), err, tag);
- }
- if (!(cq_host->caps & CQHCI_CAP_CRYPTO_SUPPORT) &&
- cq_host->ops->crypto_cfg_reset && mrq->data)
- cq_host->ops->crypto_cfg_reset(mmc, tag);
-
+ if (mrq->data)
+ cqhci_complete_crypto_desc(cq_host, mrq, NULL);
out:
return err;
}
+static void cqhci_crypto_update_queue(struct mmc_host *mmc,
+ struct request_queue *queue)
+{
+ struct cqhci_host *cq_host = mmc->cqe_private;
+
+ if (cq_host->caps & CQHCI_CAP_CRYPTO_SUPPORT) {
+ if (queue)
+ cqhci_crypto_setup_rq_keyslot_manager(cq_host, queue);
+ else
+ pr_err("%s can not register keyslot manager\n",
+ mmc_hostname(mmc));
+ }
+}
+
static void cqhci_recovery_needed(struct mmc_host *mmc, struct mmc_request *mrq,
bool notify)
{
@@ -851,7 +863,7 @@ static void cqhci_finish_mrq(struct mmc_host *mmc, unsigned int tag)
struct cqhci_slot *slot = &cq_host->slot[tag];
struct mmc_request *mrq = slot->mrq;
struct mmc_data *data;
- int err = 0, offset = 0;
+ int offset = 0;
if (cq_host->offset_changed)
offset = CQE_V5_VENDOR_CFG;
@@ -873,13 +885,8 @@ static void cqhci_finish_mrq(struct mmc_host *mmc, unsigned int tag)
data = mrq->data;
if (data) {
- if (cq_host->ops->crypto_cfg_end) {
- err = cq_host->ops->crypto_cfg_end(mmc, mrq);
- if (err) {
- pr_err("%s: failed to end ice config: err %d tag %d\n",
- mmc_hostname(mmc), err, tag);
- }
- }
+ cqhci_complete_crypto_desc(cq_host, mrq, NULL);
+
if (data->error)
data->bytes_xfered = 0;
else
@@ -891,9 +898,6 @@ static void cqhci_finish_mrq(struct mmc_host *mmc, unsigned int tag)
CQHCI_VENDOR_CFG + offset);
}
- if (!(cq_host->caps & CQHCI_CAP_CRYPTO_SUPPORT) &&
- cq_host->ops->crypto_cfg_reset)
- cq_host->ops->crypto_cfg_reset(mmc, tag);
mmc_cqe_request_done(mmc, mrq);
}
@@ -1090,6 +1094,8 @@ static void cqhci_recovery_start(struct mmc_host *mmc)
pr_debug("%s: cqhci: %s\n", mmc_hostname(mmc), __func__);
+ cqhci_crypto_reset(cq_host);
+
WARN_ON(!cq_host->recovery_halt);
cqhci_halt(mmc, CQHCI_START_HALT_TIMEOUT);
@@ -1210,6 +1216,8 @@ static void cqhci_recovery_finish(struct mmc_host *mmc)
cqhci_set_irqs(cq_host, CQHCI_IS_MASK);
+ cqhci_crypto_recovery_finish(cq_host);
+
pr_debug("%s: cqhci: recovery done\n", mmc_hostname(mmc));
mmc_log_string(mmc, "recovery done\n");
}
@@ -1224,6 +1232,7 @@ static const struct mmc_cqe_ops cqhci_cqe_ops = {
.cqe_timeout = cqhci_timeout,
.cqe_recovery_start = cqhci_recovery_start,
.cqe_recovery_finish = cqhci_recovery_finish,
+ .cqe_crypto_update_queue = cqhci_crypto_update_queue,
};
struct cqhci_host *cqhci_pltfm_init(struct platform_device *pdev)
@@ -1287,14 +1296,6 @@ int cqhci_init(struct cqhci_host *cq_host, struct mmc_host *mmc,
mmc->cqe_qdepth -= 1;
cqcap = cqhci_readl(cq_host, CQHCI_CAP);
- if (cqcap & CQHCI_CAP_CS) {
- /*
- * In case host controller supports cryptographic operations
- * then, it uses 128bit task descriptor. Upper 64 bits of task
- * descriptor would be used to pass crypto specific informaton.
- */
- cq_host->caps |= CQHCI_TASK_DESC_SZ_128;
- }
cq_host->slot = devm_kcalloc(mmc_dev(mmc), cq_host->num_slots,
sizeof(*cq_host->slot), GFP_KERNEL);
@@ -1305,6 +1306,13 @@ int cqhci_init(struct cqhci_host *cq_host, struct mmc_host *mmc,
spin_lock_init(&cq_host->lock);
+ err = cqhci_host_init_crypto(cq_host);
+ if (err) {
+ pr_err("%s: CQHCI version %u.%02u Crypto init failed err %d\n",
+ mmc_hostname(mmc), cqhci_ver_major(cq_host),
+ cqhci_ver_minor(cq_host), err);
+ }
+
init_completion(&cq_host->halt_comp);
init_waitqueue_head(&cq_host->wait_queue);
diff --git a/drivers/mmc/host/cqhci.h b/drivers/mmc/host/cqhci.h
index 024c81b..8c54e97 100644
--- a/drivers/mmc/host/cqhci.h
+++ b/drivers/mmc/host/cqhci.h
@@ -21,6 +21,7 @@
#include <linux/wait.h>
#include <linux/irqreturn.h>
#include <asm/io.h>
+#include <linux/keyslot-manager.h>
/* registers */
/* version */
@@ -32,6 +33,9 @@
/* capabilities */
#define CQHCI_CAP 0x04
#define CQHCI_CAP_CS (1 << 28)
+#define CQHCI_CCAP 0x100
+#define CQHCI_CRYPTOCAP 0x104
+
/* configuration */
#define CQHCI_CFG 0x08
#define CQHCI_DCMD 0x00001000
@@ -164,17 +168,107 @@
#define CQHCI_DAT_LENGTH(x) (((x) & 0xFFFF) << 16)
#define CQHCI_DAT_ADDR_LO(x) (((x) & 0xFFFFFFFF) << 32)
#define CQHCI_DAT_ADDR_HI(x) (((x) & 0xFFFFFFFF) << 0)
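+/* Crypto context fields in the upper 64 bits of the task descriptor */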
+#define DATA_UNIT_NUM(x) (((u64)(x) & 0xFFFFFFFF) << 0)
+#define CRYPTO_CONFIG_INDEX(x) (((u64)(x) & 0xFF) << 32)
+#define CRYPTO_ENABLE(x) (((u64)(x) & 0x1) << 47)
-#define CQHCI_TASK_DESC_TASK_PARAMS_SIZE 8
-#define CQHCI_TASK_DESC_ICE_PARAMS_SIZE 8
+/* ICE context is present in the upper 64 bits of the task descriptor */
+#define CQHCI_TASK_DESC_ICE_PARAM_OFFSET 8
+/* ICE descriptor size */
+#define CQHCI_TASK_DESC_ICE_PARAMS_SIZE 8
struct cqhci_host_ops;
struct mmc_host;
struct cqhci_slot;
+struct cqhci_host;
+
+/* CCAP - Crypto Capability 100h */
+union cqhci_crypto_capabilities {
+ __le32 reg_val;
+ struct {
+ u8 num_crypto_cap;
+ u8 config_count;
+ u8 reserved;
+ u8 config_array_ptr;
+ };
+};
+
+enum cqhci_crypto_key_size {
+ CQHCI_CRYPTO_KEY_SIZE_INVALID = 0x0,
+ CQHCI_CRYPTO_KEY_SIZE_128 = 0x1,
+ CQHCI_CRYPTO_KEY_SIZE_192 = 0x2,
+ CQHCI_CRYPTO_KEY_SIZE_256 = 0x3,
+ CQHCI_CRYPTO_KEY_SIZE_512 = 0x4,
+};
+
+enum cqhci_crypto_alg {
+ CQHCI_CRYPTO_ALG_AES_XTS = 0x0,
+ CQHCI_CRYPTO_ALG_BITLOCKER_AES_CBC = 0x1,
+ CQHCI_CRYPTO_ALG_AES_ECB = 0x2,
+ CQHCI_CRYPTO_ALG_ESSIV_AES_CBC = 0x3,
+};
+
+/* x-CRYPTOCAP - Crypto Capability X */
+union cqhci_crypto_cap_entry {
+ __le32 reg_val;
+ struct {
+ u8 algorithm_id;
+ u8 sdus_mask; /* Supported data unit size mask */
+ u8 key_size;
+ u8 reserved;
+ };
+};
+
+#define CQHCI_CRYPTO_CONFIGURATION_ENABLE (1 << 7)
+#define CQHCI_CRYPTO_KEY_MAX_SIZE 64
+/* x-CRYPTOCFG - Crypto Configuration X */
+union cqhci_crypto_cfg_entry {
+ __le32 reg_val[32];
+ struct {
+ u8 crypto_key[CQHCI_CRYPTO_KEY_MAX_SIZE];
+ u8 data_unit_size;
+ u8 crypto_cap_idx;
+ u8 reserved_1;
+ u8 config_enable;
+ u8 reserved_multi_host;
+ u8 reserved_2;
+ u8 vsb[2];
+ u8 reserved_3[56];
+ };
+};
+
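+/*
+ * Vendor hooks that let a controller variant (e.g. QTI ICE) override the
+ * spec-compliant crypto behaviour; any hook left NULL falls back to the
+ * default implementation in cqhci-crypto.c.
+ */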
+struct cqhci_host_crypto_variant_ops {
+ void (*setup_rq_keyslot_manager)(struct cqhci_host *host,
+ struct request_queue *q);
+ void (*destroy_rq_keyslot_manager)(struct cqhci_host *host,
+ struct request_queue *q);
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+ int (*host_init_crypto)(struct cqhci_host *host,
+ const struct keyslot_mgmt_ll_ops *ksm_ops);
+#endif
+ void (*enable)(struct cqhci_host *host);
+ void (*disable)(struct cqhci_host *host);
+ int (*suspend)(struct cqhci_host *host);
+ int (*resume)(struct cqhci_host *host);
+ int (*debug)(struct cqhci_host *host);
+ int (*prepare_crypto_desc)(struct cqhci_host *host,
+ struct mmc_request *mrq,
+ u64 *ice_ctx);
+ int (*complete_crypto_desc)(struct cqhci_host *host,
+ struct mmc_request *mrq,
+ u64 *ice_ctx);
+ int (*reset)(struct cqhci_host *host);
+ int (*recovery_finish)(struct cqhci_host *host);
+ int (*program_key)(struct cqhci_host *host,
+ const union cqhci_crypto_cfg_entry *cfg,
+ int slot);
+ void *priv;
+};
struct cqhci_host {
const struct cqhci_host_ops *ops;
void __iomem *mmio;
+ void __iomem *icemmio;
struct mmc_host *mmc;
spinlock_t lock;
@@ -227,6 +321,16 @@ struct cqhci_host {
struct completion halt_comp;
wait_queue_head_t wait_queue;
struct cqhci_slot *slot;
+ const struct cqhci_host_crypto_variant_ops *crypto_vops;
+
+#ifdef CONFIG_MMC_CQHCI_CRYPTO
+ union cqhci_crypto_capabilities crypto_capabilities;
+ union cqhci_crypto_cap_entry *crypto_cap_array;
+ u32 crypto_cfg_register;
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+ struct keyslot_manager *ksm;
+#endif /* CONFIG_BLK_INLINE_ENCRYPTION */
+#endif /* CONFIG_MMC_CQHCI_CRYPTO */
};
struct cqhci_host_ops {
@@ -235,10 +339,6 @@ struct cqhci_host_ops {
u32 (*read_l)(struct cqhci_host *host, int reg);
void (*enable)(struct mmc_host *mmc);
void (*disable)(struct mmc_host *mmc, bool recovery);
- int (*crypto_cfg)(struct mmc_host *mmc, struct mmc_request *mrq,
- u32 slot, u64 *ice_ctx);
- int (*crypto_cfg_end)(struct mmc_host *mmc, struct mmc_request *mrq);
- void (*crypto_cfg_reset)(struct mmc_host *mmc, unsigned int slot);
};
static inline void cqhci_writel(struct cqhci_host *host, u32 val, int reg)
diff --git a/drivers/mmc/host/sdhci-msm-ice.c b/drivers/mmc/host/sdhci-msm-ice.c
deleted file mode 100644
index 3bbb5b3..0000000
--- a/drivers/mmc/host/sdhci-msm-ice.c
+++ /dev/null
@@ -1,581 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * Copyright (c) 2015, 2017-2019, The Linux Foundation. All rights reserved.
- */
-
-#include "sdhci-msm-ice.h"
-
-static void sdhci_msm_ice_error_cb(void *host_ctrl, u32 error)
-{
- struct sdhci_msm_host *msm_host = (struct sdhci_msm_host *)host_ctrl;
-
- dev_err(&msm_host->pdev->dev, "%s: Error in ice operation 0x%x\n",
- __func__, error);
-
- if (msm_host->ice.state == SDHCI_MSM_ICE_STATE_ACTIVE)
- msm_host->ice.state = SDHCI_MSM_ICE_STATE_DISABLED;
-}
-
-static struct platform_device *sdhci_msm_ice_get_pdevice(struct device *dev)
-{
- struct device_node *node;
- struct platform_device *ice_pdev = NULL;
-
- node = of_parse_phandle(dev->of_node, SDHC_MSM_CRYPTO_LABEL, 0);
- if (!node) {
- dev_dbg(dev, "%s: sdhc-msm-crypto property not specified\n",
- __func__);
- goto out;
- }
- ice_pdev = qcom_ice_get_pdevice(node);
-out:
- return ice_pdev;
-}
-
-static
-struct qcom_ice_variant_ops *sdhci_msm_ice_get_vops(struct device *dev)
-{
- struct qcom_ice_variant_ops *ice_vops = NULL;
- struct device_node *node;
-
- node = of_parse_phandle(dev->of_node, SDHC_MSM_CRYPTO_LABEL, 0);
- if (!node) {
- dev_dbg(dev, "%s: sdhc-msm-crypto property not specified\n",
- __func__);
- goto out;
- }
- ice_vops = qcom_ice_get_variant_ops(node);
- of_node_put(node);
-out:
- return ice_vops;
-}
-
-static
-void sdhci_msm_enable_ice_hci(struct sdhci_host *host, bool enable)
-{
- struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
- struct sdhci_msm_host *msm_host = pltfm_host->priv;
- u32 config = 0;
- u32 ice_cap = 0;
-
- /*
- * Enable the cryptographic support inside SDHC.
- * This is a global config which needs to be enabled
- * all the time.
- * Only when it it is enabled, the ICE_HCI capability
- * will get reflected in CQCAP register.
- */
- config = readl_relaxed(host->ioaddr + HC_VENDOR_SPECIFIC_FUNC4);
-
- if (enable)
- config &= ~DISABLE_CRYPTO;
- else
- config |= DISABLE_CRYPTO;
- writel_relaxed(config, host->ioaddr + HC_VENDOR_SPECIFIC_FUNC4);
-
- /*
- * CQCAP register is in different register space from above
- * ice global enable register. So a mb() is required to ensure
- * above write gets completed before reading the CQCAP register.
- */
- mb();
-
- /*
- * Check if ICE HCI capability support is present
- * If present, enable it.
- */
- ice_cap = readl_relaxed(msm_host->cryptoio + ICE_CQ_CAPABILITIES);
- if (ice_cap & ICE_HCI_SUPPORT) {
- config = readl_relaxed(msm_host->cryptoio + ICE_CQ_CONFIG);
-
- if (enable)
- config |= CRYPTO_GENERAL_ENABLE;
- else
- config &= ~CRYPTO_GENERAL_ENABLE;
- writel_relaxed(config, msm_host->cryptoio + ICE_CQ_CONFIG);
- }
-}
-
-int sdhci_msm_ice_get_dev(struct sdhci_host *host)
-{
- struct device *sdhc_dev;
- struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
- struct sdhci_msm_host *msm_host = pltfm_host->priv;
-
- if (!msm_host || !msm_host->pdev) {
- pr_err("%s: invalid msm_host %p or msm_host->pdev\n",
- __func__, msm_host);
- return -EINVAL;
- }
-
- sdhc_dev = &msm_host->pdev->dev;
- msm_host->ice.vops = sdhci_msm_ice_get_vops(sdhc_dev);
- msm_host->ice.pdev = sdhci_msm_ice_get_pdevice(sdhc_dev);
-
- if (msm_host->ice.pdev == ERR_PTR(-EPROBE_DEFER)) {
- dev_err(sdhc_dev, "%s: ICE device not probed yet\n",
- __func__);
- msm_host->ice.pdev = NULL;
- msm_host->ice.vops = NULL;
- return -EPROBE_DEFER;
- }
-
- if (!msm_host->ice.pdev) {
- dev_dbg(sdhc_dev, "%s: invalid platform device\n", __func__);
- msm_host->ice.vops = NULL;
- return -ENODEV;
- }
- if (!msm_host->ice.vops) {
- dev_dbg(sdhc_dev, "%s: invalid ice vops\n", __func__);
- msm_host->ice.pdev = NULL;
- return -ENODEV;
- }
- msm_host->ice.state = SDHCI_MSM_ICE_STATE_DISABLED;
- return 0;
-}
-
-static
-int sdhci_msm_ice_pltfm_init(struct sdhci_msm_host *msm_host)
-{
- struct resource *ice_memres = NULL;
- struct platform_device *pdev = msm_host->pdev;
- int err = 0;
-
- if (!msm_host->ice_hci_support)
- goto out;
- /*
- * ICE HCI registers are present in cmdq register space.
- * So map the cmdq mem for accessing ICE HCI registers.
- */
- ice_memres = platform_get_resource_byname(pdev,
- IORESOURCE_MEM, "cqhci_mem");
- if (!ice_memres) {
- dev_err(&pdev->dev, "Failed to get iomem resource for ice\n");
- err = -EINVAL;
- goto out;
- }
- msm_host->cryptoio = devm_ioremap(&pdev->dev,
- ice_memres->start,
- resource_size(ice_memres));
- if (!msm_host->cryptoio) {
- dev_err(&pdev->dev, "Failed to remap registers\n");
- err = -ENOMEM;
- }
-out:
- return err;
-}
-
-int sdhci_msm_ice_init(struct sdhci_host *host)
-{
- struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
- struct sdhci_msm_host *msm_host = pltfm_host->priv;
- int err = 0;
-
- if (msm_host->ice.vops->init) {
- err = sdhci_msm_ice_pltfm_init(msm_host);
- if (err)
- goto out;
-
- if (msm_host->ice_hci_support)
- sdhci_msm_enable_ice_hci(host, true);
-
- err = msm_host->ice.vops->init(msm_host->ice.pdev,
- msm_host,
- sdhci_msm_ice_error_cb);
- if (err) {
- pr_err("%s: ice init err %d\n",
- mmc_hostname(host->mmc), err);
- sdhci_msm_ice_print_regs(host);
- if (msm_host->ice_hci_support)
- sdhci_msm_enable_ice_hci(host, false);
- goto out;
- }
- msm_host->ice.state = SDHCI_MSM_ICE_STATE_ACTIVE;
- }
-
-out:
- return err;
-}
-
-void sdhci_msm_ice_cfg_reset(struct sdhci_host *host, u32 slot)
-{
- writel_relaxed(SDHCI_MSM_ICE_ENABLE_BYPASS,
- host->ioaddr + CORE_VENDOR_SPEC_ICE_CTRL_INFO_3_n + 16 * slot);
-}
-
-static
-int sdhci_msm_ice_get_cfg(struct sdhci_msm_host *msm_host, struct request *req,
- unsigned int *bypass, short *key_index)
-{
- int err = 0;
- struct ice_data_setting ice_set;
-
- memset(&ice_set, 0, sizeof(struct ice_data_setting));
- if (msm_host->ice.vops->config_start) {
- err = msm_host->ice.vops->config_start(
- msm_host->ice.pdev,
- req, &ice_set, false);
- if (err) {
- pr_err("%s: ice config failed %d\n",
- mmc_hostname(msm_host->mmc), err);
- return err;
- }
- }
- /* if writing data command */
- if (rq_data_dir(req) == WRITE)
- *bypass = ice_set.encr_bypass ?
- SDHCI_MSM_ICE_ENABLE_BYPASS :
- SDHCI_MSM_ICE_DISABLE_BYPASS;
- /* if reading data command */
- else if (rq_data_dir(req) == READ)
- *bypass = ice_set.decr_bypass ?
- SDHCI_MSM_ICE_ENABLE_BYPASS :
- SDHCI_MSM_ICE_DISABLE_BYPASS;
- *key_index = ice_set.crypto_data.key_index;
- return err;
-}
-
-static
-void sdhci_msm_ice_update_cfg(struct sdhci_host *host, u64 lba, u32 slot,
- unsigned int bypass, short key_index, u32 cdu_sz)
-{
- unsigned int ctrl_info_val = 0;
-
- /* Configure ICE index */
- ctrl_info_val =
- (key_index &
- MASK_SDHCI_MSM_ICE_CTRL_INFO_KEY_INDEX)
- << OFFSET_SDHCI_MSM_ICE_CTRL_INFO_KEY_INDEX;
-
- /* Configure data unit size of transfer request */
- ctrl_info_val |=
- (cdu_sz &
- MASK_SDHCI_MSM_ICE_CTRL_INFO_CDU)
- << OFFSET_SDHCI_MSM_ICE_CTRL_INFO_CDU;
-
- /* Configure ICE bypass mode */
- ctrl_info_val |=
- (bypass & MASK_SDHCI_MSM_ICE_CTRL_INFO_BYPASS)
- << OFFSET_SDHCI_MSM_ICE_CTRL_INFO_BYPASS;
-
- writel_relaxed((lba & 0xFFFFFFFF),
- host->ioaddr + CORE_VENDOR_SPEC_ICE_CTRL_INFO_1_n + 16 * slot);
- writel_relaxed(((lba >> 32) & 0xFFFFFFFF),
- host->ioaddr + CORE_VENDOR_SPEC_ICE_CTRL_INFO_2_n + 16 * slot);
- writel_relaxed(ctrl_info_val,
- host->ioaddr + CORE_VENDOR_SPEC_ICE_CTRL_INFO_3_n + 16 * slot);
- /* Ensure ICE registers are configured before issuing SDHCI request */
- mb();
-}
-
-static inline
-void sdhci_msm_ice_hci_update_cqe_cfg(u64 dun, unsigned int bypass,
- short key_index, u64 *ice_ctx)
-{
- /*
- *
- * registers fields. Below is the equivalent names for
- * ICE3.0 Vs ICE2.0:
- * Data Unit Number(DUN) == Logical Base address(LBA)
- * Crypto Configuration index (CCI) == Key Index
- * Crypto Enable (CE) == !BYPASS
- */
- if (ice_ctx)
- *ice_ctx = DATA_UNIT_NUM(dun) |
- CRYPTO_CONFIG_INDEX(key_index) |
- CRYPTO_ENABLE(!bypass);
-}
-
-static
-void sdhci_msm_ice_hci_update_noncq_cfg(struct sdhci_host *host,
- u64 dun, unsigned int bypass, short key_index)
-{
- struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
- struct sdhci_msm_host *msm_host = pltfm_host->priv;
- unsigned int crypto_params = 0;
- /*
- * The naming convention got changed between ICE2.0 and ICE3.0
- * registers fields. Below is the equivalent names for
- * ICE3.0 Vs ICE2.0:
- * Data Unit Number(DUN) == Logical Base address(LBA)
- * Crypto Configuration index (CCI) == Key Index
- * Crypto Enable (CE) == !BYPASS
- */
- /* Configure ICE bypass mode */
- crypto_params |=
- ((!bypass) & MASK_SDHCI_MSM_ICE_HCI_PARAM_CE)
- << OFFSET_SDHCI_MSM_ICE_HCI_PARAM_CE;
- /* Configure Crypto Configure Index (CCI) */
- crypto_params |= (key_index &
- MASK_SDHCI_MSM_ICE_HCI_PARAM_CCI)
- << OFFSET_SDHCI_MSM_ICE_HCI_PARAM_CCI;
-
- writel_relaxed((crypto_params & 0xFFFFFFFF),
- msm_host->cryptoio + ICE_NONCQ_CRYPTO_PARAMS);
-
- /* Update DUN */
- writel_relaxed((dun & 0xFFFFFFFF),
- msm_host->cryptoio + ICE_NONCQ_CRYPTO_DUN);
- /* Ensure ICE registers are configured before issuing SDHCI request */
- mb();
-}
-
-int sdhci_msm_ice_cfg(struct sdhci_host *host, struct mmc_request *mrq,
- u32 slot)
-{
- struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
- struct sdhci_msm_host *msm_host = pltfm_host->priv;
- int err = 0;
- short key_index = 0;
- u64 dun = 0;
- unsigned int bypass = SDHCI_MSM_ICE_ENABLE_BYPASS;
- u32 cdu_sz = SDHCI_MSM_ICE_TR_DATA_UNIT_512_B;
- struct request *req;
-
- if (msm_host->ice.state != SDHCI_MSM_ICE_STATE_ACTIVE) {
- pr_err("%s: ice is in invalid state %d\n",
- mmc_hostname(host->mmc), msm_host->ice.state);
- return -EINVAL;
- }
-
- WARN_ON(!mrq);
- if (!mrq)
- return -EINVAL;
- req = mrq->req;
- if (req && req->bio) {
-#ifdef CONFIG_PFK
- if (bio_dun(req->bio)) {
- dun = bio_dun(req->bio);
- cdu_sz = SDHCI_MSM_ICE_TR_DATA_UNIT_4_KB;
- } else {
- dun = req->__sector;
- }
-#else
- dun = req->__sector;
-#endif
- err = sdhci_msm_ice_get_cfg(msm_host, req, &bypass, &key_index);
- if (err)
- return err;
- pr_debug("%s: %s: slot %d bypass %d key_index %d\n",
- mmc_hostname(host->mmc),
- (rq_data_dir(req) == WRITE) ? "WRITE" : "READ",
- slot, bypass, key_index);
- }
-
- if (msm_host->ice_hci_support) {
- /* For ICE HCI / ICE3.0 */
- sdhci_msm_ice_hci_update_noncq_cfg(host, dun, bypass,
- key_index);
- } else {
- /* For ICE versions earlier to ICE3.0 */
- sdhci_msm_ice_update_cfg(host, dun, slot, bypass, key_index,
- cdu_sz);
- }
- return 0;
-}
-
-int sdhci_msm_ice_cqe_cfg(struct sdhci_host *host,
- struct mmc_request *mrq, u32 slot, u64 *ice_ctx)
-{
- struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
- struct sdhci_msm_host *msm_host = pltfm_host->priv;
- int err = 0;
- short key_index = 0;
- u64 dun = 0;
- unsigned int bypass = SDHCI_MSM_ICE_ENABLE_BYPASS;
- struct request *req;
- u32 cdu_sz = SDHCI_MSM_ICE_TR_DATA_UNIT_512_B;
-
- if (msm_host->ice.state != SDHCI_MSM_ICE_STATE_ACTIVE) {
- pr_err("%s: ice is in invalid state %d\n",
- mmc_hostname(host->mmc), msm_host->ice.state);
- return -EINVAL;
- }
-
- WARN_ON(!mrq);
- if (!mrq)
- return -EINVAL;
- req = mrq->req;
- if (req && req->bio) {
-#ifdef CONFIG_PFK
- if (bio_dun(req->bio)) {
- dun = bio_dun(req->bio);
- cdu_sz = SDHCI_MSM_ICE_TR_DATA_UNIT_4_KB;
- } else {
- dun = req->__sector;
- }
-#else
- dun = req->__sector;
-#endif
- err = sdhci_msm_ice_get_cfg(msm_host, req, &bypass, &key_index);
- if (err)
- return err;
- pr_debug("%s: %s: slot %d bypass %d key_index %d\n",
- mmc_hostname(host->mmc),
- (rq_data_dir(req) == WRITE) ? "WRITE" : "READ",
- slot, bypass, key_index);
- }
-
- if (msm_host->ice_hci_support) {
- /* For ICE HCI / ICE3.0 */
- sdhci_msm_ice_hci_update_cqe_cfg(dun, bypass, key_index,
- ice_ctx);
- } else {
- /* For ICE versions earlier to ICE3.0 */
- sdhci_msm_ice_update_cfg(host, dun, slot, bypass, key_index,
- cdu_sz);
- }
-
- return 0;
-}
-
-int sdhci_msm_ice_cfg_end(struct sdhci_host *host, struct mmc_request *mrq)
-{
- struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
- struct sdhci_msm_host *msm_host = pltfm_host->priv;
- int err = 0;
- struct request *req;
-
- if (!host->is_crypto_en)
- return 0;
-
- if (msm_host->ice.state != SDHCI_MSM_ICE_STATE_ACTIVE) {
- pr_err("%s: ice is in invalid state %d\n",
- mmc_hostname(host->mmc), msm_host->ice.state);
- return -EINVAL;
- }
-
- req = mrq->req;
- if (req) {
- if (msm_host->ice.vops->config_end) {
- err = msm_host->ice.vops->config_end(
- msm_host->ice.pdev, req);
- if (err) {
- pr_err("%s: ice config end failed %d\n",
- mmc_hostname(host->mmc), err);
- return err;
- }
- }
- }
-
- return 0;
-}
-
-int sdhci_msm_ice_reset(struct sdhci_host *host)
-{
- struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
- struct sdhci_msm_host *msm_host = pltfm_host->priv;
- int err = 0;
-
- if (msm_host->ice.state != SDHCI_MSM_ICE_STATE_ACTIVE) {
- pr_err("%s: ice is in invalid state before reset %d\n",
- mmc_hostname(host->mmc), msm_host->ice.state);
- return -EINVAL;
- }
-
- if (msm_host->ice.vops->reset) {
- err = msm_host->ice.vops->reset(msm_host->ice.pdev);
- if (err) {
- pr_err("%s: ice reset failed %d\n",
- mmc_hostname(host->mmc), err);
- sdhci_msm_ice_print_regs(host);
- return err;
- }
- }
-
- /* If ICE HCI support is present then re-enable it */
- if (msm_host->ice_hci_support)
- sdhci_msm_enable_ice_hci(host, true);
-
- if (msm_host->ice.state != SDHCI_MSM_ICE_STATE_ACTIVE) {
- pr_err("%s: ice is in invalid state after reset %d\n",
- mmc_hostname(host->mmc), msm_host->ice.state);
- return -EINVAL;
- }
- return 0;
-}
-
-int sdhci_msm_ice_resume(struct sdhci_host *host)
-{
- struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
- struct sdhci_msm_host *msm_host = pltfm_host->priv;
- int err = 0;
-
- if (msm_host->ice.state !=
- SDHCI_MSM_ICE_STATE_SUSPENDED) {
- pr_err("%s: ice is in invalid state before resume %d\n",
- mmc_hostname(host->mmc), msm_host->ice.state);
- return -EINVAL;
- }
-
- if (msm_host->ice.vops->resume) {
- err = msm_host->ice.vops->resume(msm_host->ice.pdev);
- if (err) {
- pr_err("%s: ice resume failed %d\n",
- mmc_hostname(host->mmc), err);
- return err;
- }
- }
-
- msm_host->ice.state = SDHCI_MSM_ICE_STATE_ACTIVE;
- return 0;
-}
-
-int sdhci_msm_ice_suspend(struct sdhci_host *host)
-{
- struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
- struct sdhci_msm_host *msm_host = pltfm_host->priv;
- int err = 0;
-
- if (msm_host->ice.state !=
- SDHCI_MSM_ICE_STATE_ACTIVE) {
- pr_err("%s: ice is in invalid state before resume %d\n",
- mmc_hostname(host->mmc), msm_host->ice.state);
- return -EINVAL;
- }
-
- if (msm_host->ice.vops->suspend) {
- err = msm_host->ice.vops->suspend(msm_host->ice.pdev);
- if (err) {
- pr_err("%s: ice suspend failed %d\n",
- mmc_hostname(host->mmc), err);
- return -EINVAL;
- }
- }
- msm_host->ice.state = SDHCI_MSM_ICE_STATE_SUSPENDED;
- return 0;
-}
-
-int sdhci_msm_ice_get_status(struct sdhci_host *host, int *ice_status)
-{
- struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
- struct sdhci_msm_host *msm_host = pltfm_host->priv;
- int stat = -EINVAL;
-
- if (msm_host->ice.state != SDHCI_MSM_ICE_STATE_ACTIVE) {
- pr_err("%s: ice is in invalid state %d\n",
- mmc_hostname(host->mmc), msm_host->ice.state);
- return -EINVAL;
- }
-
- if (msm_host->ice.vops->status) {
- *ice_status = 0;
- stat = msm_host->ice.vops->status(msm_host->ice.pdev);
- if (stat < 0) {
- pr_err("%s: ice get sts failed %d\n",
- mmc_hostname(host->mmc), stat);
- return -EINVAL;
- }
- *ice_status = stat;
- }
- return 0;
-}
-
-void sdhci_msm_ice_print_regs(struct sdhci_host *host)
-{
- struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
- struct sdhci_msm_host *msm_host = pltfm_host->priv;
-
- if (msm_host->ice.vops->debug)
- msm_host->ice.vops->debug(msm_host->ice.pdev);
-}
diff --git a/drivers/mmc/host/sdhci-msm-ice.h b/drivers/mmc/host/sdhci-msm-ice.h
deleted file mode 100644
index c0df636..0000000
--- a/drivers/mmc/host/sdhci-msm-ice.h
+++ /dev/null
@@ -1,164 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Copyright (c) 2015, 2017, 2019, The Linux Foundation. All rights reserved.
- */
-
-#ifndef __SDHCI_MSM_ICE_H__
-#define __SDHCI_MSM_ICE_H__
-
-#include <linux/io.h>
-#include <linux/of.h>
-#include <linux/blkdev.h>
-#include <crypto/ice.h>
-
-#include "sdhci-msm.h"
-
-#define SDHC_MSM_CRYPTO_LABEL "sdhc-msm-crypto"
-/* Timeout waiting for ICE initialization, that requires TZ access */
-#define SDHCI_MSM_ICE_COMPLETION_TIMEOUT_MS 500
-
-/*
- * SDHCI host controller ICE registers. There are n [0..31]
- * of each of these registers
- */
-#define NUM_SDHCI_MSM_ICE_CTRL_INFO_n_REGS 32
-
-#define CORE_VENDOR_SPEC_ICE_CTRL 0x300
-#define CORE_VENDOR_SPEC_ICE_CTRL_INFO_1_n 0x304
-#define CORE_VENDOR_SPEC_ICE_CTRL_INFO_2_n 0x308
-#define CORE_VENDOR_SPEC_ICE_CTRL_INFO_3_n 0x30C
-
-/* ICE3.0 register which got added cmdq reg space */
-#define ICE_CQ_CAPABILITIES 0x04
-#define ICE_HCI_SUPPORT (1 << 28)
-#define ICE_CQ_CONFIG 0x08
-#define CRYPTO_GENERAL_ENABLE (1 << 1)
-#define ICE_NONCQ_CRYPTO_PARAMS 0x70
-#define ICE_NONCQ_CRYPTO_DUN 0x74
-
-/* ICE3.0 register which got added hc reg space */
-#define HC_VENDOR_SPECIFIC_FUNC4 0x260
-#define DISABLE_CRYPTO (1 << 15)
-#define HC_VENDOR_SPECIFIC_ICE_CTRL 0x800
-#define ICE_SW_RST_EN (1 << 0)
-
-/* SDHCI MSM ICE CTRL Info register offset */
-enum {
- OFFSET_SDHCI_MSM_ICE_CTRL_INFO_BYPASS = 0,
- OFFSET_SDHCI_MSM_ICE_CTRL_INFO_KEY_INDEX = 1,
- OFFSET_SDHCI_MSM_ICE_CTRL_INFO_CDU = 6,
- OFFSET_SDHCI_MSM_ICE_HCI_PARAM_CCI = 0,
- OFFSET_SDHCI_MSM_ICE_HCI_PARAM_CE = 8,
-};
-
-/* SDHCI MSM ICE CTRL Info register masks */
-enum {
- MASK_SDHCI_MSM_ICE_CTRL_INFO_BYPASS = 0x1,
- MASK_SDHCI_MSM_ICE_CTRL_INFO_KEY_INDEX = 0x1F,
- MASK_SDHCI_MSM_ICE_CTRL_INFO_CDU = 0x7,
- MASK_SDHCI_MSM_ICE_HCI_PARAM_CE = 0x1,
- MASK_SDHCI_MSM_ICE_HCI_PARAM_CCI = 0xff
-};
-
-/* SDHCI MSM ICE encryption/decryption bypass state */
-enum {
- SDHCI_MSM_ICE_DISABLE_BYPASS = 0,
- SDHCI_MSM_ICE_ENABLE_BYPASS = 1,
-};
-
-/* SDHCI MSM ICE Crypto Data Unit of target DUN of Transfer Request */
-enum {
- SDHCI_MSM_ICE_TR_DATA_UNIT_512_B = 0,
- SDHCI_MSM_ICE_TR_DATA_UNIT_1_KB = 1,
- SDHCI_MSM_ICE_TR_DATA_UNIT_2_KB = 2,
- SDHCI_MSM_ICE_TR_DATA_UNIT_4_KB = 3,
- SDHCI_MSM_ICE_TR_DATA_UNIT_8_KB = 4,
- SDHCI_MSM_ICE_TR_DATA_UNIT_16_KB = 5,
- SDHCI_MSM_ICE_TR_DATA_UNIT_32_KB = 6,
- SDHCI_MSM_ICE_TR_DATA_UNIT_64_KB = 7,
-};
-
-/* SDHCI MSM ICE internal state */
-enum {
- SDHCI_MSM_ICE_STATE_DISABLED = 0,
- SDHCI_MSM_ICE_STATE_ACTIVE = 1,
- SDHCI_MSM_ICE_STATE_SUSPENDED = 2,
-};
-
-/* crypto context fields in cmdq data command task descriptor */
-#define DATA_UNIT_NUM(x) (((u64)(x) & 0xFFFFFFFF) << 0)
-#define CRYPTO_CONFIG_INDEX(x) (((u64)(x) & 0xFF) << 32)
-#define CRYPTO_ENABLE(x) (((u64)(x) & 0x1) << 47)
-
-#ifdef CONFIG_MMC_SDHCI_MSM_ICE
-int sdhci_msm_ice_get_dev(struct sdhci_host *host);
-int sdhci_msm_ice_init(struct sdhci_host *host);
-void sdhci_msm_ice_cfg_reset(struct sdhci_host *host, u32 slot);
-int sdhci_msm_ice_cfg(struct sdhci_host *host, struct mmc_request *mrq,
- u32 slot);
-int sdhci_msm_ice_cqe_cfg(struct sdhci_host *host,
- struct mmc_request *mrq, u32 slot, u64 *ice_ctx);
-int sdhci_msm_ice_cfg_end(struct sdhci_host *host, struct mmc_request *mrq);
-int sdhci_msm_ice_reset(struct sdhci_host *host);
-int sdhci_msm_ice_resume(struct sdhci_host *host);
-int sdhci_msm_ice_suspend(struct sdhci_host *host);
-int sdhci_msm_ice_get_status(struct sdhci_host *host, int *ice_status);
-void sdhci_msm_ice_print_regs(struct sdhci_host *host);
-#else
-inline int sdhci_msm_ice_get_dev(struct sdhci_host *host)
-{
- struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
- struct sdhci_msm_host *msm_host = pltfm_host->priv;
-
- if (msm_host) {
- msm_host->ice.pdev = NULL;
- msm_host->ice.vops = NULL;
- }
- return -ENODEV;
-}
-inline int sdhci_msm_ice_init(struct sdhci_host *host)
-{
- return 0;
-}
-
-inline void sdhci_msm_ice_cfg_reset(struct sdhci_host *host, u32 slot)
-{
-}
-
-inline int sdhci_msm_ice_cfg(struct sdhci_host *host,
- struct mmc_request *mrq, u32 slot)
-{
- return 0;
-}
-inline int sdhci_msm_ice_cqe_cfg(struct sdhci_host *host,
- struct mmc_request *mrq, u32 slot, u64 *ice_ctx)
-{
- return 0;
-}
-inline int sdhci_msm_ice_cfg_end(struct sdhci_host *host,
- struct mmc_request *mrq)
-{
- return 0;
-}
-inline int sdhci_msm_ice_reset(struct sdhci_host *host)
-{
- return 0;
-}
-inline int sdhci_msm_ice_resume(struct sdhci_host *host)
-{
- return 0;
-}
-inline int sdhci_msm_ice_suspend(struct sdhci_host *host)
-{
- return 0;
-}
-inline int sdhci_msm_ice_get_status(struct sdhci_host *host,
- int *ice_status)
-{
- return 0;
-}
-inline void sdhci_msm_ice_print_regs(struct sdhci_host *host)
-{
-}
-#endif /* CONFIG_MMC_SDHCI_MSM_ICE */
-#endif /* __SDHCI_MSM_ICE_H__ */
diff --git a/drivers/mmc/host/sdhci-msm.c b/drivers/mmc/host/sdhci-msm.c
index 74aa433..02b5509 100644
--- a/drivers/mmc/host/sdhci-msm.c
+++ b/drivers/mmc/host/sdhci-msm.c
@@ -34,9 +34,9 @@
#include <linux/clk/qcom.h>
#include "sdhci-msm.h"
-#include "sdhci-msm-ice.h"
#include "sdhci-pltfm.h"
#include "cqhci.h"
+#include "cqhci-crypto-qti.h"
#define QOS_REMOVE_DELAY_MS 10
#define CORE_POWER 0x0
@@ -2168,26 +2168,20 @@ struct sdhci_msm_pltfm_data *sdhci_msm_populate_pdata(struct device *dev,
}
}
- if (msm_host->ice.pdev) {
- if (sdhci_msm_dt_get_array(dev, "qcom,ice-clk-rates",
- &ice_clk_table, &ice_clk_table_len, 0)) {
- dev_err(dev, "failed parsing supported ice clock rates\n");
- goto out;
- }
- if (!ice_clk_table || !ice_clk_table_len) {
- dev_err(dev, "Invalid clock table\n");
- goto out;
- }
- if (ice_clk_table_len != 2) {
- dev_err(dev, "Need max and min frequencies in the table\n");
- goto out;
- }
- pdata->sup_ice_clk_table = ice_clk_table;
- pdata->sup_ice_clk_cnt = ice_clk_table_len;
- pdata->ice_clk_max = pdata->sup_ice_clk_table[0];
- pdata->ice_clk_min = pdata->sup_ice_clk_table[1];
- dev_dbg(dev, "supported ICE clock rates (Hz): max: %u min: %u\n",
+ if (!sdhci_msm_dt_get_array(dev, "qcom,ice-clk-rates",
+ &ice_clk_table, &ice_clk_table_len, 0)) {
+ if (ice_clk_table && ice_clk_table_len) {
+ if (ice_clk_table_len != 2) {
+ dev_err(dev, "Need max and min frequencies\n");
+ goto out;
+ }
+ pdata->sup_ice_clk_table = ice_clk_table;
+ pdata->sup_ice_clk_cnt = ice_clk_table_len;
+ pdata->ice_clk_max = pdata->sup_ice_clk_table[0];
+ pdata->ice_clk_min = pdata->sup_ice_clk_table[1];
+ dev_dbg(dev, "ICE clock rates (Hz): max: %u min: %u\n",
pdata->ice_clk_max, pdata->ice_clk_min);
+ }
}
if (sdhci_msm_dt_get_array(dev, "qcom,devfreq,freq-table",
@@ -2409,64 +2403,6 @@ void sdhci_msm_cqe_disable(struct mmc_host *mmc, bool recovery)
sdhci_cqe_disable(mmc, recovery);
}
-int sdhci_msm_cqe_crypto_cfg(struct mmc_host *mmc,
- struct mmc_request *mrq, u32 slot, u64 *ice_ctx)
-{
- int err = 0;
- struct sdhci_host *host = mmc_priv(mmc);
-
- if (!host->is_crypto_en)
- return 0;
-
- if (host->mmc->inlinecrypt_reset_needed &&
- host->ops->crypto_engine_reset) {
- err = host->ops->crypto_engine_reset(host);
- if (err) {
- pr_err("%s: crypto reset failed\n",
- mmc_hostname(host->mmc));
- goto out;
- }
- host->mmc->inlinecrypt_reset_needed = false;
- }
-
- err = sdhci_msm_ice_cqe_cfg(host, mrq, slot, ice_ctx);
- if (err) {
- pr_err("%s: failed to configure crypto\n",
- mmc_hostname(host->mmc));
- goto out;
- }
-out:
- return err;
-}
-
-void sdhci_msm_cqe_crypto_cfg_reset(struct mmc_host *mmc, unsigned int slot)
-{
- struct sdhci_host *host = mmc_priv(mmc);
-
- if (!host->is_crypto_en)
- return;
-
- return sdhci_msm_ice_cfg_reset(host, slot);
-}
-
-int sdhci_msm_cqe_crypto_cfg_end(struct mmc_host *mmc,
- struct mmc_request *mrq)
-{
- int err = 0;
- struct sdhci_host *host = mmc_priv(mmc);
-
- if (!host->is_crypto_en)
- return 0;
-
- err = sdhci_msm_ice_cfg_end(host, mrq);
- if (err) {
- pr_err("%s: failed to configure crypto\n",
- mmc_hostname(host->mmc));
- return err;
- }
- return 0;
-}
-
void sdhci_msm_cqe_sdhci_dumpregs(struct mmc_host *mmc)
{
struct sdhci_host *host = mmc_priv(mmc);
@@ -2477,9 +2413,6 @@ void sdhci_msm_cqe_sdhci_dumpregs(struct mmc_host *mmc)
static const struct cqhci_host_ops sdhci_msm_cqhci_ops = {
.enable = sdhci_msm_cqe_enable,
.disable = sdhci_msm_cqe_disable,
- .crypto_cfg = sdhci_msm_cqe_crypto_cfg,
- .crypto_cfg_reset = sdhci_msm_cqe_crypto_cfg_reset,
- .crypto_cfg_end = sdhci_msm_cqe_crypto_cfg_end,
.dumpregs = sdhci_msm_cqe_sdhci_dumpregs,
};
@@ -2509,6 +2442,13 @@ static int sdhci_msm_cqe_add_host(struct sdhci_host *host,
msm_host->cq_host = cq_host;
dma64 = host->flags & SDHCI_USE_64_BIT_DMA;
+ /*
+	 * Set the vendor-specific ops needed for ICE.
+	 * A default implementation is used if the ops are not set.
+ */
+#ifdef CONFIG_MMC_CQHCI_CRYPTO_QTI
+ cqhci_crypto_qti_set_vops(cq_host);
+#endif
ret = cqhci_init(cq_host, host->mmc, dma64);
if (ret) {
@@ -2725,7 +2665,7 @@ static int sdhci_msm_vreg_enable(struct sdhci_msm_reg_data *vreg)
if (!vreg->is_enabled) {
/* Set voltage level */
- ret = sdhci_msm_vreg_set_voltage(vreg, vreg->high_vol_level,
+ ret = sdhci_msm_vreg_set_voltage(vreg, vreg->low_vol_level,
vreg->high_vol_level);
if (ret)
return ret;
@@ -4179,7 +4119,6 @@ void sdhci_msm_dump_vendor_regs(struct sdhci_host *host)
int i, index = 0;
u32 test_bus_val = 0;
u32 debug_reg[MAX_TEST_BUS] = {0};
- u32 sts = 0;
sdhci_msm_cache_debug_data(host);
pr_info("----------- VENDOR REGISTER DUMP -----------\n");
@@ -4260,29 +4199,10 @@ void sdhci_msm_dump_vendor_regs(struct sdhci_host *host)
pr_info(" Test bus[%d to %d]: 0x%08x 0x%08x 0x%08x 0x%08x\n",
i, i + 3, debug_reg[i], debug_reg[i+1],
debug_reg[i+2], debug_reg[i+3]);
-
- if (host->is_crypto_en) {
- sdhci_msm_ice_get_status(host, &sts);
- pr_info("%s: ICE status %x\n", mmc_hostname(host->mmc), sts);
- sdhci_msm_ice_print_regs(host);
- }
}
static void sdhci_msm_reset(struct sdhci_host *host, u8 mask)
{
- struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
- struct sdhci_msm_host *msm_host = pltfm_host->priv;
-
- /* Set ICE core to be reset in sync with SDHC core */
- if (msm_host->ice.pdev) {
- if (msm_host->ice_hci_support)
- writel_relaxed(1, host->ioaddr +
- HC_VENDOR_SPECIFIC_ICE_CTRL);
- else
- writel_relaxed(1,
- host->ioaddr + CORE_VENDOR_SPEC_ICE_CTRL);
- }
-
sdhci_reset(host, mask);
if ((host->mmc->caps2 & MMC_CAP2_CQE) && (mask & SDHCI_RESET_ALL))
cqhci_suspend(host->mmc);
@@ -4974,9 +4894,6 @@ static void sdhci_msm_hw_reset(struct sdhci_host *host)
}
static struct sdhci_ops sdhci_msm_ops = {
- .crypto_engine_cfg = sdhci_msm_ice_cfg,
- .crypto_engine_cfg_end = sdhci_msm_ice_cfg_end,
- .crypto_engine_reset = sdhci_msm_ice_reset,
.set_uhs_signaling = sdhci_msm_set_uhs_signaling,
.check_power_status = sdhci_msm_check_power_status,
.platform_execute_tuning = sdhci_msm_execute_tuning,
@@ -5108,7 +5025,6 @@ static void sdhci_set_default_hw_caps(struct sdhci_msm_host *msm_host,
if ((major == 1) && (minor >= 0x6b)) {
host->cdr_support = true;
- msm_host->ice_hci_support = true;
}
/* 7FF projects with 7nm DLL */
@@ -5144,84 +5060,23 @@ static int sdhci_msm_setup_ice_clk(struct sdhci_msm_host *msm_host,
{
int ret = 0;
- if (msm_host->ice.pdev) {
- /* Setup SDC ICE clock */
- msm_host->ice_clk = devm_clk_get(&pdev->dev, "ice_core_clk");
- if (!IS_ERR(msm_host->ice_clk)) {
- /* ICE core has only one clock frequency for now */
- ret = clk_set_rate(msm_host->ice_clk,
- msm_host->pdata->ice_clk_max);
- if (ret) {
- dev_err(&pdev->dev, "ICE_CLK rate set failed (%d) for %u\n",
- ret,
- msm_host->pdata->ice_clk_max);
- return ret;
- }
- ret = clk_prepare_enable(msm_host->ice_clk);
- if (ret)
- return ret;
- ret = clk_set_flags(msm_host->ice_clk,
- CLKFLAG_RETAIN_MEM);
- if (ret)
- dev_err(&pdev->dev, "ICE_CLK set RETAIN_MEM failed: %d\n",
- ret);
-
- msm_host->ice_clk_rate =
- msm_host->pdata->ice_clk_max;
- }
- }
-
- return ret;
-}
-
-static int sdhci_msm_initialize_ice(struct sdhci_msm_host *msm_host,
- struct platform_device *pdev,
- struct sdhci_host *host)
-{
- int ret = 0;
-
- if (msm_host->ice.pdev) {
- ret = sdhci_msm_ice_init(host);
+ /* Setup SDC ICE clock */
+ msm_host->ice_clk = devm_clk_get(&pdev->dev, "ice_core_clk");
+ if (!IS_ERR(msm_host->ice_clk)) {
+ /* ICE core has only one clock frequency for now */
+ ret = clk_set_rate(msm_host->ice_clk,
+ msm_host->pdata->ice_clk_max);
if (ret) {
- dev_err(&pdev->dev, "%s: SDHCi ICE init failed (%d)\n",
- mmc_hostname(host->mmc), ret);
- return -EINVAL;
+ dev_err(&pdev->dev, "ICE_CLK rate set failed (%d) for %u\n",
+ ret,
+ msm_host->pdata->ice_clk_max);
+ return ret;
}
- host->is_crypto_en = true;
- msm_host->mmc->inlinecrypt_support = true;
- /* Packed commands cannot be encrypted/decrypted using ICE */
- msm_host->mmc->caps2 &= ~(MMC_CAP2_PACKED_WR |
- MMC_CAP2_PACKED_WR_CONTROL);
- }
-
- return 0;
-}
-
-static int sdhci_msm_get_ice_device_vops(struct sdhci_host *host,
- struct platform_device *pdev)
-{
- int ret = 0;
-
- ret = sdhci_msm_ice_get_dev(host);
- if (ret == -EPROBE_DEFER) {
- /*
- * SDHCI driver might be probed before ICE driver does.
- * In that case we would like to return EPROBE_DEFER code
- * in order to delay its probing.
- */
- dev_err(&pdev->dev, "%s: required ICE device not probed yet err = %d\n",
- __func__, ret);
- } else if (ret == -ENODEV) {
- /*
- * ICE device is not enabled in DTS file. No need for further
- * initialization of ICE driver.
- */
- dev_warn(&pdev->dev, "%s: ICE device is not enabled\n",
- __func__);
- ret = 0;
- } else if (ret) {
- dev_err(&pdev->dev, "%s: sdhci_msm_ice_get_dev failed %d\n",
- __func__, ret);
+ ret = clk_prepare_enable(msm_host->ice_clk);
+ if (ret)
+ return ret;
+ msm_host->ice_clk_rate =
+ msm_host->pdata->ice_clk_max;
}
return ret;
@@ -5311,11 +5166,6 @@ static int sdhci_msm_probe(struct platform_device *pdev)
msm_host->mmc = host->mmc;
msm_host->pdev = pdev;
- /* get the ice device vops if present */
- ret = sdhci_msm_get_ice_device_vops(host, pdev);
- if (ret)
- goto out_host_free;
-
/* Extract platform data */
if (pdev->dev.of_node) {
ret = of_alias_get_id(pdev->dev.of_node, "sdhc");
@@ -5653,11 +5503,6 @@ static int sdhci_msm_probe(struct platform_device *pdev)
if (msm_host->pdata->nonhotplug)
msm_host->mmc->caps2 |= MMC_CAP2_NONHOTPLUG;
- /* Initialize ICE if present */
- ret = sdhci_msm_initialize_ice(msm_host, pdev, host);
- if (ret == -EINVAL)
- goto vreg_deinit;
-
init_completion(&msm_host->pwr_irq_completion);
if (gpio_is_valid(msm_host->pdata->status_gpio)) {
@@ -5939,7 +5784,6 @@ static int sdhci_msm_runtime_suspend(struct device *dev)
struct sdhci_host *host = dev_get_drvdata(dev);
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
- int ret;
ktime_t start = ktime_get();
if (host->mmc->card && mmc_card_sdio(host->mmc->card))
@@ -5950,12 +5794,6 @@ static int sdhci_msm_runtime_suspend(struct device *dev)
defer_disable_host_irq:
disable_irq(msm_host->pwr_irq);
- if (host->is_crypto_en) {
- ret = sdhci_msm_ice_suspend(host);
- if (ret < 0)
- pr_err("%s: failed to suspend crypto engine %d\n",
- mmc_hostname(host->mmc), ret);
- }
sdhci_msm_disable_controller_clock(host);
trace_sdhci_msm_runtime_suspend(mmc_hostname(host->mmc), 0,
ktime_to_us(ktime_sub(ktime_get(), start)));
@@ -5974,21 +5812,11 @@ static int sdhci_msm_runtime_resume(struct device *dev)
if (ret) {
pr_err("%s: Failed to enable reqd clocks\n",
mmc_hostname(host->mmc));
- goto skip_ice_resume;
}
- if (host->mmc &&
- (host->mmc->ios.timing == MMC_TIMING_MMC_HS400))
+ if (host->mmc->ios.timing == MMC_TIMING_MMC_HS400)
sdhci_msm_toggle_fifo_write_clk(host);
- if (host->is_crypto_en) {
- ret = sdhci_msm_ice_resume(host);
- if (ret)
- pr_err("%s: failed to resume crypto engine %d\n",
- mmc_hostname(host->mmc), ret);
- }
-skip_ice_resume:
-
if (host->mmc->card && mmc_card_sdio(host->mmc->card))
goto defer_enable_host_irq;
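The reworked sdhci_msm_populate_pdata() above now reads the optional "qcom,ice-clk-rates" property unconditionally and only errors out when the property is present but malformed (it must carry exactly a max and a min rate). The following is a minimal, self-contained sketch of that validation step; the table and pdata types are hypothetical stand-ins for the device-tree array and platform data, not the driver's real structures.

#include <stdio.h>

/* Hypothetical stand-in for the values read from "qcom,ice-clk-rates". */
struct ice_clk_pdata {
	unsigned int max_hz;
	unsigned int min_hz;
};

/*
 * Validate a clock table of the form { max, min } and fill the pdata.
 * Returns 0 on success, -1 when the table is present but malformed.
 */
static int parse_ice_clk_table(const unsigned int *table, int len,
			       struct ice_clk_pdata *out)
{
	if (!table || !len)
		return 0;		/* property absent: not an error */
	if (len != 2)
		return -1;		/* need exactly max and min */
	out->max_hz = table[0];
	out->min_hz = table[1];
	return 0;
}

int main(void)
{
	unsigned int dt_table[2] = { 300000000, 75000000 };
	struct ice_clk_pdata pdata = { 0 };

	if (parse_ice_clk_table(dt_table, 2, &pdata))
		return 1;
	printf("ICE clock rates (Hz): max %u min %u\n",
	       pdata.max_hz, pdata.min_hz);
	return 0;
}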
diff --git a/drivers/mmc/host/sdhci-msm.h b/drivers/mmc/host/sdhci-msm.h
index fe20609..fa83f09 100644
--- a/drivers/mmc/host/sdhci-msm.h
+++ b/drivers/mmc/host/sdhci-msm.h
@@ -266,17 +266,9 @@ struct sdhci_msm_debug_data {
struct sdhci_host copy_host;
};
-struct sdhci_msm_ice_data {
- struct qcom_ice_variant_ops *vops;
- struct platform_device *pdev;
- int state;
-};
-
struct sdhci_msm_host {
struct platform_device *pdev;
void __iomem *core_mem; /* MSM SDCC mapped address */
- void __iomem *cryptoio; /* ICE HCI mapped address */
- bool ice_hci_support;
int pwr_irq; /* power irq */
struct clk *clk; /* main SD/MMC bus clock */
struct clk *pclk; /* SDHC peripheral bus clock */
@@ -327,7 +319,6 @@ struct sdhci_msm_host {
int soc_min_rev;
struct workqueue_struct *pm_qos_wq;
struct sdhci_msm_dll_hsr *dll_hsr;
- struct sdhci_msm_ice_data ice;
u32 ice_clk_rate;
bool debug_mode_enabled;
bool reg_store;
diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
index 505fcf4..5e8263f 100644
--- a/drivers/mmc/host/sdhci.c
+++ b/drivers/mmc/host/sdhci.c
@@ -1922,50 +1922,6 @@ static int sdhci_get_tuning_cmd(struct sdhci_host *host)
return MMC_SEND_TUNING_BLOCK;
}
-static int sdhci_crypto_cfg(struct sdhci_host *host, struct mmc_request *mrq,
- u32 slot)
-{
- int err = 0;
-
- if (host->mmc->inlinecrypt_reset_needed &&
- host->ops->crypto_engine_reset) {
- err = host->ops->crypto_engine_reset(host);
- if (err) {
- pr_err("%s: crypto reset failed\n",
- mmc_hostname(host->mmc));
- goto out;
- }
- host->mmc->inlinecrypt_reset_needed = false;
- }
-
- if (host->ops->crypto_engine_cfg) {
- err = host->ops->crypto_engine_cfg(host, mrq, slot);
- if (err) {
- pr_err("%s: failed to configure crypto\n",
- mmc_hostname(host->mmc));
- goto out;
- }
- }
-out:
- return err;
-}
-
-static int sdhci_crypto_cfg_end(struct sdhci_host *host,
- struct mmc_request *mrq)
-{
- int err = 0;
-
- if (host->ops->crypto_engine_cfg_end) {
- err = host->ops->crypto_engine_cfg_end(host, mrq);
- if (err) {
- pr_err("%s: failed to configure crypto\n",
- mmc_hostname(host->mmc));
- return err;
- }
- }
- return 0;
-}
-
static void sdhci_request(struct mmc_host *mmc, struct mmc_request *mrq)
{
struct sdhci_host *host;
@@ -2032,13 +1988,6 @@ static void sdhci_request(struct mmc_host *mmc, struct mmc_request *mrq)
sdhci_get_tuning_cmd(host));
}
- if (host->is_crypto_en) {
- spin_unlock_irqrestore(&host->lock, flags);
- if (sdhci_crypto_cfg(host, mrq, 0))
- goto end_req;
- spin_lock_irqsave(&host->lock, flags);
- }
-
if (mrq->sbc && !(host->flags & SDHCI_AUTO_CMD23))
sdhci_send_command(host, mrq->sbc);
else
@@ -2048,13 +1997,6 @@ static void sdhci_request(struct mmc_host *mmc, struct mmc_request *mrq)
mmiowb();
spin_unlock_irqrestore(&host->lock, flags);
return;
-end_req:
- mrq->cmd->error = -EIO;
- if (mrq->data)
- mrq->data->error = -EIO;
- host->mrq = NULL;
- sdhci_dumpregs(host);
- mmc_request_done(host->mmc, mrq);
}
void sdhci_set_bus_width(struct sdhci_host *host, int width)
@@ -3121,7 +3063,6 @@ static bool sdhci_request_done(struct sdhci_host *host)
mmiowb();
spin_unlock_irqrestore(&host->lock, flags);
- sdhci_crypto_cfg_end(host, mrq);
mmc_request_done(host->mmc, mrq);
return false;
diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h
index 84a98bc..ab25f13 100644
--- a/drivers/mmc/host/sdhci.h
+++ b/drivers/mmc/host/sdhci.h
@@ -671,7 +671,6 @@ struct sdhci_host {
enum sdhci_power_policy power_policy;
bool sdio_irq_async_status;
- bool is_crypto_en;
u32 auto_cmd_err_sts;
struct ratelimit_state dbg_dump_rs;
@@ -712,11 +711,6 @@ struct sdhci_ops {
unsigned int (*get_ro)(struct sdhci_host *host);
void (*reset)(struct sdhci_host *host, u8 mask);
int (*platform_execute_tuning)(struct sdhci_host *host, u32 opcode);
- int (*crypto_engine_cfg)(struct sdhci_host *host,
- struct mmc_request *mrq, u32 slot);
- int (*crypto_engine_cfg_end)(struct sdhci_host *host,
- struct mmc_request *mrq);
- int (*crypto_engine_reset)(struct sdhci_host *host);
void (*set_uhs_signaling)(struct sdhci_host *host, unsigned int uhs);
void (*hw_reset)(struct sdhci_host *host);
void (*adma_workaround)(struct sdhci_host *host, u32 intmask);
diff --git a/drivers/net/wireless/ath/wil6210/wil6210.h b/drivers/net/wireless/ath/wil6210/wil6210.h
index aa06969..f70fbc8 100644
--- a/drivers/net/wireless/ath/wil6210/wil6210.h
+++ b/drivers/net/wireless/ath/wil6210/wil6210.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: ISC */
/*
* Copyright (c) 2012-2017 Qualcomm Atheros, Inc.
- * Copyright (c) 2018-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
*/
#ifndef __WIL6210_H__
@@ -1415,6 +1415,11 @@ void wil6210_debugfs_remove(struct wil6210_priv *wil);
#else
static inline int wil6210_debugfs_init(struct wil6210_priv *wil) { return 0; }
static inline void wil6210_debugfs_remove(struct wil6210_priv *wil) {}
+static inline int wil_led_blink_set(struct wil6210_priv *wil,
+ const char *buf)
+{
+ return 0;
+}
#endif
int wil6210_sysfs_init(struct wil6210_priv *wil);
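The wil6210.h hunk adds a static inline stub for wil_led_blink_set() so that callers still build when debugfs support is compiled out. Below is a small sketch of the same stub-versus-real-implementation pattern; the config macro, struct, and function names are hypothetical and only illustrate the shape of the change.

#include <stdio.h>

/* Toggle to model a CONFIG_*_DEBUGFS option being enabled or disabled. */
#define EXAMPLE_DEBUGFS_ENABLED 0

struct example_priv { int blink_rate; };

#if EXAMPLE_DEBUGFS_ENABLED
/* Real implementation would parse 'buf' and program the LED. */
static int example_led_blink_set(struct example_priv *p, const char *buf)
{
	return sscanf(buf, "%d", &p->blink_rate) == 1 ? 0 : -1;
}
#else
/* Stub keeps callers building when the feature is compiled out. */
static inline int example_led_blink_set(struct example_priv *p,
					const char *buf)
{
	(void)p;
	(void)buf;
	return 0;
}
#endif

int main(void)
{
	struct example_priv p = { 0 };

	printf("led_blink_set -> %d\n", example_led_blink_set(&p, "5"));
	return 0;
}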
diff --git a/drivers/net/wireless/cnss2/bus.c b/drivers/net/wireless/cnss2/bus.c
index c590f53..94e0a4d 100644
--- a/drivers/net/wireless/cnss2/bus.c
+++ b/drivers/net/wireless/cnss2/bus.c
@@ -418,6 +418,21 @@ int cnss_bus_is_device_down(struct cnss_plat_data *plat_priv)
}
}
+int cnss_bus_check_link_status(struct cnss_plat_data *plat_priv)
+{
+ if (!plat_priv)
+ return -ENODEV;
+
+ switch (plat_priv->bus_type) {
+ case CNSS_BUS_PCI:
+ return cnss_pci_check_link_status(plat_priv->bus_priv);
+ default:
+ cnss_pr_dbg("Unsupported bus type: %d\n",
+ plat_priv->bus_type);
+ return 0;
+ }
+}
+
int cnss_bus_debug_reg_read(struct cnss_plat_data *plat_priv, u32 offset,
u32 *val)
{
diff --git a/drivers/net/wireless/cnss2/bus.h b/drivers/net/wireless/cnss2/bus.h
index 4b9e91f..1e7cc0f 100644
--- a/drivers/net/wireless/cnss2/bus.h
+++ b/drivers/net/wireless/cnss2/bus.h
@@ -48,6 +48,7 @@ int cnss_bus_call_driver_modem_status(struct cnss_plat_data *plat_priv,
int cnss_bus_update_status(struct cnss_plat_data *plat_priv,
enum cnss_driver_status status);
int cnss_bus_is_device_down(struct cnss_plat_data *plat_priv);
+int cnss_bus_check_link_status(struct cnss_plat_data *plat_priv);
int cnss_bus_debug_reg_read(struct cnss_plat_data *plat_priv, u32 offset,
u32 *val);
int cnss_bus_debug_reg_write(struct cnss_plat_data *plat_priv, u32 offset,
diff --git a/drivers/net/wireless/cnss2/main.c b/drivers/net/wireless/cnss2/main.c
index 8efd309..ff15aed 100644
--- a/drivers/net/wireless/cnss2/main.c
+++ b/drivers/net/wireless/cnss2/main.c
@@ -1024,6 +1024,10 @@ static int cnss_do_recovery(struct cnss_plat_data *plat_priv,
switch (reason) {
case CNSS_REASON_LINK_DOWN:
+ if (!cnss_bus_check_link_status(plat_priv)) {
+ cnss_pr_dbg("Skip link down recovery as link is already up\n");
+ return 0;
+ }
if (test_bit(LINK_DOWN_SELF_RECOVERY,
&plat_priv->ctrl_params.quirks))
goto self_recovery;
@@ -1975,6 +1979,7 @@ static ssize_t shutdown_store(struct kobject *kobj,
set_bit(CNSS_IN_REBOOT, &plat_priv->driver_state);
del_timer(&plat_priv->fw_boot_timer);
complete_all(&plat_priv->power_up_complete);
+ complete_all(&plat_priv->cal_complete);
}
cnss_pr_dbg("Received shutdown notification\n");
@@ -2116,6 +2121,7 @@ static int cnss_reboot_notifier(struct notifier_block *nb,
set_bit(CNSS_IN_REBOOT, &plat_priv->driver_state);
del_timer(&plat_priv->fw_boot_timer);
complete_all(&plat_priv->power_up_complete);
+ complete_all(&plat_priv->cal_complete);
cnss_pr_dbg("Reboot is in progress with action %d\n", action);
return NOTIFY_DONE;
diff --git a/drivers/net/wireless/cnss2/main.h b/drivers/net/wireless/cnss2/main.h
index ea5385a..e6b4eba 100644
--- a/drivers/net/wireless/cnss2/main.h
+++ b/drivers/net/wireless/cnss2/main.h
@@ -23,6 +23,8 @@
#define RECOVERY_TIMEOUT 60000
#define WLAN_WD_TIMEOUT_MS 60000
#define TIME_CLOCK_FREQ_HZ 19200000
+#define CNSS_RAMDUMP_MAGIC 0x574C414E
+#define CNSS_RAMDUMP_VERSION 0
#define CNSS_EVENT_SYNC BIT(0)
#define CNSS_EVENT_UNINTERRUPTIBLE BIT(1)
@@ -167,6 +169,21 @@ enum cnss_fw_dump_type {
CNSS_FW_IMAGE,
CNSS_FW_RDDM,
CNSS_FW_REMOTE_HEAP,
+ CNSS_FW_DUMP_TYPE_MAX,
+};
+
+struct cnss_dump_entry {
+ u32 type;
+ u32 entry_start;
+ u32 entry_num;
+};
+
+struct cnss_dump_meta_info {
+ u32 magic;
+ u32 version;
+ u32 chipset;
+ u32 total_entries;
+ struct cnss_dump_entry entry[CNSS_FW_DUMP_TYPE_MAX];
};
enum cnss_driver_event_type {
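The new struct cnss_dump_meta_info records, for each dump type, where that type's segments start in the ramdump and how many there are; the pci.c change further below fills it while walking the segment list and emits it as the first ELF ramdump segment. The standalone sketch below models only that bookkeeping step; the segment array and dump types are simplified stand-ins, not the driver's actual data.

#include <stdio.h>
#include <string.h>

enum dump_type { DUMP_IMAGE, DUMP_RDDM, DUMP_REMOTE_HEAP, DUMP_TYPE_MAX };

struct dump_entry { unsigned int type, entry_start, entry_num; };

struct dump_meta_info {
	unsigned int magic, version, chipset, total_entries;
	struct dump_entry entry[DUMP_TYPE_MAX];
};

int main(void)
{
	/* Hypothetical segment list: type of each dump segment, in order. */
	unsigned int seg_type[] = { DUMP_IMAGE, DUMP_IMAGE, DUMP_RDDM };
	struct dump_meta_info meta;
	unsigned int i;

	memset(&meta, 0, sizeof(meta));
	meta.magic = 0x574C414E;	/* "WLAN", as in CNSS_RAMDUMP_MAGIC */
	meta.version = 0;
	meta.total_entries = DUMP_TYPE_MAX;

	for (i = 0; i < sizeof(seg_type) / sizeof(seg_type[0]); i++) {
		unsigned int t = seg_type[i];

		/* Segments follow the meta header, hence entry i maps to i + 1. */
		if (meta.entry[t].entry_start == 0) {
			meta.entry[t].type = t;
			meta.entry[t].entry_start = i + 1;
		}
		meta.entry[t].entry_num++;
	}

	for (i = 0; i < DUMP_TYPE_MAX; i++)
		printf("type %u: start %u num %u\n", i,
		       meta.entry[i].entry_start, meta.entry[i].entry_num);
	return 0;
}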
diff --git a/drivers/net/wireless/cnss2/pci.c b/drivers/net/wireless/cnss2/pci.c
index 1d5030e..9bf676c 100644
--- a/drivers/net/wireless/cnss2/pci.c
+++ b/drivers/net/wireless/cnss2/pci.c
@@ -74,6 +74,13 @@ static DEFINE_SPINLOCK(time_sync_lock);
#define LINK_TRAINING_RETRY_MAX_TIMES 3
+#define CNSS_DEBUG_DUMP_SRAM_START 0x1403D58
+#define CNSS_DEBUG_DUMP_SRAM_SIZE 10
+
+#define HANG_DATA_LENGTH 384
+#define HST_HANG_DATA_OFFSET ((3 * 1024 * 1024) - HANG_DATA_LENGTH)
+#define HSP_HANG_DATA_OFFSET ((2 * 1024 * 1024) - HANG_DATA_LENGTH)
+
static struct cnss_pci_reg ce_src[] = {
{ "SRC_RING_BASE_LSB", QCA6390_CE_SRC_RING_BASE_LSB_OFFSET },
{ "SRC_RING_BASE_MSB", QCA6390_CE_SRC_RING_BASE_MSB_OFFSET },
@@ -234,6 +241,7 @@ static struct cnss_misc_reg pcie_reg_access_seq[] = {
{0, QCA6390_WFSS_PMM_WFSS_PMM_R0_PMM_WLAN1_CFG_REG1, 0},
{0, QCA6390_WFSS_PMM_WFSS_PMM_R0_WLAN2_APS_STATUS_REG1, 0},
{0, QCA6390_WFSS_PMM_WFSS_PMM_R0_WLAN1_APS_STATUS_REG1, 0},
+ {0, QCA6390_PCIE_PCIE_BHI_EXECENV_REG, 0},
};
static struct cnss_misc_reg wlaon_reg_access_seq[] = {
@@ -356,7 +364,7 @@ static struct cnss_misc_reg wlaon_reg_access_seq[] = {
#define PCIE_REG_SIZE ARRAY_SIZE(pcie_reg_access_seq)
#define WLAON_REG_SIZE ARRAY_SIZE(wlaon_reg_access_seq)
-static int cnss_pci_check_link_status(struct cnss_pci_data *pci_priv)
+int cnss_pci_check_link_status(struct cnss_pci_data *pci_priv)
{
u16 device_id;
@@ -1324,6 +1332,7 @@ static int cnss_pci_start_time_sync_update(struct cnss_pci_data *pci_priv)
switch (pci_priv->device_id) {
case QCA6390_DEVICE_ID:
+ case QCA6490_DEVICE_ID:
break;
default:
return -EOPNOTSUPP;
@@ -1343,6 +1352,7 @@ static void cnss_pci_stop_time_sync_update(struct cnss_pci_data *pci_priv)
{
switch (pci_priv->device_id) {
case QCA6390_DEVICE_ID:
+ case QCA6490_DEVICE_ID:
break;
default:
return;
@@ -1395,6 +1405,7 @@ int cnss_pci_call_driver_probe(struct cnss_pci_data *pci_priv)
clear_bit(CNSS_DRIVER_RECOVERY, &plat_priv->driver_state);
clear_bit(CNSS_DRIVER_LOADING, &plat_priv->driver_state);
set_bit(CNSS_DRIVER_PROBED, &plat_priv->driver_state);
+ complete_all(&plat_priv->power_up_complete);
} else if (test_bit(CNSS_DRIVER_IDLE_RESTART,
&plat_priv->driver_state)) {
ret = pci_priv->driver_ops->idle_restart(pci_priv->pci_dev,
@@ -1870,20 +1881,33 @@ static int cnss_qca6290_ramdump(struct cnss_pci_data *pci_priv)
struct cnss_dump_data *dump_data = &info_v2->dump_data;
struct cnss_dump_seg *dump_seg = info_v2->dump_data_vaddr;
struct ramdump_segment *ramdump_segs, *s;
+ struct cnss_dump_meta_info meta_info = {0};
int i, ret = 0;
if (!info_v2->dump_data_valid ||
dump_data->nentries == 0)
return 0;
- ramdump_segs = kcalloc(dump_data->nentries,
+ ramdump_segs = kcalloc(dump_data->nentries + 1,
sizeof(*ramdump_segs),
GFP_KERNEL);
if (!ramdump_segs)
return -ENOMEM;
- s = ramdump_segs;
+ s = ramdump_segs + 1;
for (i = 0; i < dump_data->nentries; i++) {
+ if (dump_seg->type >= CNSS_FW_DUMP_TYPE_MAX) {
+			cnss_pr_err("Unsupported dump type: %d\n",
+ dump_seg->type);
+ continue;
+ }
+
+ if (meta_info.entry[dump_seg->type].entry_start == 0) {
+ meta_info.entry[dump_seg->type].type = dump_seg->type;
+ meta_info.entry[dump_seg->type].entry_start = i + 1;
+ }
+ meta_info.entry[dump_seg->type].entry_num++;
+
s->address = dump_seg->address;
s->v_address = dump_seg->v_address;
s->size = dump_seg->size;
@@ -1891,8 +1915,16 @@ static int cnss_qca6290_ramdump(struct cnss_pci_data *pci_priv)
dump_seg++;
}
+ meta_info.magic = CNSS_RAMDUMP_MAGIC;
+ meta_info.version = CNSS_RAMDUMP_VERSION;
+ meta_info.chipset = pci_priv->device_id;
+ meta_info.total_entries = CNSS_FW_DUMP_TYPE_MAX;
+
+ ramdump_segs->v_address = &meta_info;
+ ramdump_segs->size = sizeof(meta_info);
+
ret = do_elf_ramdump(info_v2->ramdump_dev, ramdump_segs,
- dump_data->nentries);
+ dump_data->nentries + 1);
kfree(ramdump_segs);
cnss_pci_clear_dump_info(pci_priv);
@@ -2054,7 +2086,8 @@ int cnss_wlan_register_driver(struct cnss_wlan_driver *driver_ops)
msecs_to_jiffies(timeout) << 2);
if (!ret) {
cnss_pr_err("Timeout waiting for calibration to complete\n");
- CNSS_ASSERT(0);
+ if (!test_bit(CNSS_IN_REBOOT, &plat_priv->driver_state))
+ CNSS_ASSERT(0);
cal_info = kzalloc(sizeof(*cal_info), GFP_KERNEL);
if (!cal_info)
@@ -2066,10 +2099,16 @@ int cnss_wlan_register_driver(struct cnss_wlan_driver *driver_ops)
0, cal_info);
}
+ if (test_bit(CNSS_IN_REBOOT, &plat_priv->driver_state)) {
+ cnss_pr_dbg("Reboot or shutdown is in progress, ignore register driver\n");
+ return -EINVAL;
+ }
+
register_driver:
+ reinit_completion(&plat_priv->power_up_complete);
ret = cnss_driver_event_post(plat_priv,
CNSS_DRIVER_EVENT_REGISTER_DRIVER,
- CNSS_EVENT_SYNC_UNINTERRUPTIBLE,
+ CNSS_EVENT_SYNC_UNKILLABLE,
driver_ops);
return ret;
@@ -2088,19 +2127,20 @@ void cnss_wlan_unregister_driver(struct cnss_wlan_driver *driver_ops)
}
if (plat_priv->device_id == QCA6174_DEVICE_ID ||
- !test_bit(CNSS_DRIVER_IDLE_RESTART, &plat_priv->driver_state))
- goto skip_wait_idle_restart;
+ !(test_bit(CNSS_DRIVER_IDLE_RESTART, &plat_priv->driver_state) ||
+ test_bit(CNSS_DRIVER_LOADING, &plat_priv->driver_state)))
+ goto skip_wait_power_up;
timeout = cnss_get_qmi_timeout(plat_priv);
ret = wait_for_completion_timeout(&plat_priv->power_up_complete,
msecs_to_jiffies((timeout << 1) +
WLAN_WD_TIMEOUT_MS));
if (!ret) {
- cnss_pr_err("Timeout waiting for idle restart to complete\n");
+ cnss_pr_err("Timeout waiting for driver power up to complete\n");
CNSS_ASSERT(0);
}
-skip_wait_idle_restart:
+skip_wait_power_up:
if (!test_bit(CNSS_DRIVER_RECOVERY, &plat_priv->driver_state) &&
!test_bit(CNSS_DEV_ERR_NOTIFY, &plat_priv->driver_state))
goto skip_wait_recovery;
@@ -2116,7 +2156,7 @@ void cnss_wlan_unregister_driver(struct cnss_wlan_driver *driver_ops)
skip_wait_recovery:
cnss_driver_event_post(plat_priv,
CNSS_DRIVER_EVENT_UNREGISTER_DRIVER,
- CNSS_EVENT_SYNC_UNINTERRUPTIBLE, NULL);
+ CNSS_EVENT_SYNC_UNKILLABLE, NULL);
}
EXPORT_SYMBOL(cnss_wlan_unregister_driver);
@@ -2126,6 +2166,11 @@ int cnss_pci_register_driver_hdlr(struct cnss_pci_data *pci_priv,
int ret = 0;
struct cnss_plat_data *plat_priv = pci_priv->plat_priv;
+ if (test_bit(CNSS_IN_REBOOT, &plat_priv->driver_state)) {
+ cnss_pr_dbg("Reboot or shutdown is in progress, ignore register driver\n");
+ return -EINVAL;
+ }
+
set_bit(CNSS_DRIVER_LOADING, &plat_priv->driver_state);
pci_priv->driver_ops = data;
@@ -3332,7 +3377,14 @@ int cnss_get_soc_info(struct device *dev, struct cnss_soc_info *info)
info->va = pci_priv->bar;
info->pa = pci_resource_start(pci_priv->pci_dev, PCI_BAR_NUM);
-
+ info->chip_id = plat_priv->chip_info.chip_id;
+ info->chip_family = plat_priv->chip_info.chip_family;
+ info->board_id = plat_priv->board_info.board_id;
+ info->soc_id = plat_priv->soc_info.soc_id;
+ info->fw_version = plat_priv->fw_version_info.fw_version;
+ strlcpy(info->fw_build_timestamp,
+ plat_priv->fw_version_info.fw_build_timestamp,
+ sizeof(info->fw_build_timestamp));
memcpy(&info->device_version, &plat_priv->device_version,
sizeof(info->device_version));
@@ -3659,6 +3711,21 @@ static void cnss_pci_dump_ce_reg(struct cnss_pci_data *pci_priv,
}
}
+static void cnss_pci_dump_sram_mem(struct cnss_pci_data *pci_priv)
+{
+ int i;
+ u32 mem_addr, val;
+
+ if (cnss_pci_check_link_status(pci_priv))
+ return;
+ for (i = 0; i < CNSS_DEBUG_DUMP_SRAM_SIZE; i++) {
+ mem_addr = CNSS_DEBUG_DUMP_SRAM_START + i * 4;
+ if (cnss_pci_reg_read(pci_priv, mem_addr, &val))
+ return;
+ cnss_pr_dbg("SRAM[0x%x] = 0x%x\n", mem_addr, val);
+ }
+}
+
static void cnss_pci_dump_registers(struct cnss_pci_data *pci_priv)
{
cnss_pr_dbg("Start to dump debug registers\n");
@@ -3691,6 +3758,7 @@ int cnss_pci_force_fw_assert_hdlr(struct cnss_pci_data *pci_priv)
cnss_auto_resume(&pci_priv->pci_dev->dev);
cnss_pci_dump_misc_reg(pci_priv);
cnss_pci_dump_shadow_reg(pci_priv);
+ cnss_pci_dump_sram_mem(pci_priv);
ret = cnss_pci_set_mhi_state(pci_priv, CNSS_MHI_TRIGGER_RDDM);
if (ret) {
@@ -3750,6 +3818,73 @@ static void cnss_pci_remove_dump_seg(struct cnss_pci_data *pci_priv,
cnss_minidump_remove_region(plat_priv, type, seg_no, va, pa, size);
}
+int cnss_call_driver_uevent(struct cnss_pci_data *pci_priv,
+ enum cnss_driver_status status, void *data)
+{
+ struct cnss_uevent_data uevent_data;
+ struct cnss_wlan_driver *driver_ops;
+
+ driver_ops = pci_priv->driver_ops;
+ if (!driver_ops || !driver_ops->update_event) {
+ cnss_pr_dbg("Hang event driver ops is NULL\n");
+ return -EINVAL;
+ }
+
+ cnss_pr_dbg("Calling driver uevent: %d\n", status);
+
+ uevent_data.status = status;
+ uevent_data.data = data;
+
+ return driver_ops->update_event(pci_priv->pci_dev, &uevent_data);
+}
+
+static void cnss_pci_send_hang_event(struct cnss_pci_data *pci_priv)
+{
+ struct cnss_plat_data *plat_priv = pci_priv->plat_priv;
+ struct cnss_fw_mem *fw_mem = plat_priv->fw_mem;
+ struct cnss_hang_event hang_event = {0};
+ void *hang_data_va = NULL;
+ u64 offset = 0;
+ int i = 0;
+
+ if (!fw_mem || !plat_priv->fw_mem_seg_len)
+ return;
+
+ switch (pci_priv->device_id) {
+ case QCA6390_DEVICE_ID:
+ offset = HST_HANG_DATA_OFFSET;
+ break;
+ case QCA6490_DEVICE_ID:
+ offset = HSP_HANG_DATA_OFFSET;
+ break;
+ default:
+ cnss_pr_err("Skip Hang Event Data as unsupported Device ID received: %d\n",
+ pci_priv->device_id);
+ return;
+ }
+
+ for (i = 0; i < plat_priv->fw_mem_seg_len; i++) {
+ if (fw_mem[i].type == QMI_WLFW_MEM_TYPE_DDR_V01 &&
+ fw_mem[i].va) {
+ hang_data_va = fw_mem[i].va + offset;
+ hang_event.hang_event_data = kmemdup(hang_data_va,
+ HANG_DATA_LENGTH,
+ GFP_ATOMIC);
+ if (!hang_event.hang_event_data) {
+ cnss_pr_dbg("Hang data memory alloc failed\n");
+ return;
+ }
+ hang_event.hang_event_data_len = HANG_DATA_LENGTH;
+ break;
+ }
+ }
+
+ cnss_call_driver_uevent(pci_priv, CNSS_HANG_EVENT, &hang_event);
+
+ kfree(hang_event.hang_event_data);
+ hang_event.hang_event_data = NULL;
+}
+
void cnss_pci_collect_dump_info(struct cnss_pci_data *pci_priv, bool in_panic)
{
struct cnss_plat_data *plat_priv = pci_priv->plat_priv;
@@ -3761,6 +3896,9 @@ void cnss_pci_collect_dump_info(struct cnss_pci_data *pci_priv, bool in_panic)
struct cnss_fw_mem *fw_mem = plat_priv->fw_mem;
int ret, i, j;
+ if (test_bit(CNSS_DEV_ERR_NOTIFY, &plat_priv->driver_state))
+ cnss_pci_send_hang_event(pci_priv);
+
if (test_bit(CNSS_MHI_RDDM_DONE, &pci_priv->mhi_state)) {
cnss_pr_dbg("RAM dump is already collected, skip\n");
return;
@@ -3771,6 +3909,7 @@ void cnss_pci_collect_dump_info(struct cnss_pci_data *pci_priv, bool in_panic)
cnss_pci_dump_misc_reg(pci_priv);
cnss_pci_dump_qdss_reg(pci_priv);
+ cnss_pci_dump_sram_mem(pci_priv);
ret = mhi_download_rddm_img(pci_priv->mhi_ctrl, in_panic);
if (ret) {
@@ -4041,6 +4180,18 @@ static int cnss_pci_update_fw_name(struct cnss_pci_data *pci_priv)
sizeof(plat_priv->firmware_name), FW_V2_FILE_NAME);
mhi_ctrl->fw_image = plat_priv->firmware_name;
break;
+ case QCA6490_DEVICE_ID:
+ switch (plat_priv->device_version.major_version) {
+ case FW_V2_NUMBER:
+ scnprintf(plat_priv->firmware_name,
+ sizeof(plat_priv->firmware_name),
+ FW_V2_FILE_NAME);
+ break;
+ default:
+ break;
+ }
+
+ break;
default:
break;
}
@@ -4109,6 +4260,11 @@ static int cnss_pci_register_mhi(struct cnss_pci_data *pci_priv)
if (!mhi_ctrl->log_buf)
cnss_pr_err("Unable to create CNSS MHI IPC log context\n");
+ mhi_ctrl->cntrl_log_buf = ipc_log_context_create(CNSS_IPC_LOG_PAGES,
+ "cnss-mhi-cntrl", 0);
+ if (!mhi_ctrl->cntrl_log_buf)
+ cnss_pr_err("Unable to create CNSS MHICNTRL IPC log context\n");
+
ret = of_register_mhi_controller(mhi_ctrl);
if (ret) {
cnss_pr_err("Failed to register to MHI bus, err = %d\n", ret);
@@ -4126,6 +4282,8 @@ static int cnss_pci_register_mhi(struct cnss_pci_data *pci_priv)
destroy_ipc:
if (mhi_ctrl->log_buf)
ipc_log_context_destroy(mhi_ctrl->log_buf);
+ if (mhi_ctrl->cntrl_log_buf)
+ ipc_log_context_destroy(mhi_ctrl->cntrl_log_buf);
kfree(mhi_ctrl->irq);
free_mhi_ctrl:
mhi_free_controller(mhi_ctrl);
@@ -4140,6 +4298,8 @@ static void cnss_pci_unregister_mhi(struct cnss_pci_data *pci_priv)
mhi_unregister_mhi_controller(mhi_ctrl);
if (mhi_ctrl->log_buf)
ipc_log_context_destroy(mhi_ctrl->log_buf);
+ if (mhi_ctrl->cntrl_log_buf)
+ ipc_log_context_destroy(mhi_ctrl->cntrl_log_buf);
kfree(mhi_ctrl->irq);
mhi_free_controller(mhi_ctrl);
}
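cnss_pci_send_hang_event() above copies a fixed-length window of hang data out of the firmware DDR segment at a device-specific offset before handing it to the driver via a uevent. The sketch below models just that copy-at-offset step in plain C, with a stack buffer standing in for the firmware memory region and malloc/memcpy standing in for kmemdup; the names and sizes are illustrative only.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define HANG_DATA_LEN 16	/* shortened stand-in for HANG_DATA_LENGTH */

/* Copy hang data from a firmware memory segment at the given offset. */
static void *copy_hang_data(const void *fw_mem_va, size_t fw_mem_size,
			    size_t offset, size_t len)
{
	void *dst;

	if (!fw_mem_va || offset + len > fw_mem_size)
		return NULL;
	dst = malloc(len);
	if (!dst)
		return NULL;
	memcpy(dst, (const char *)fw_mem_va + offset, len);
	return dst;
}

int main(void)
{
	char fw_mem[64] = "................hang-data-here..";
	void *hang = copy_hang_data(fw_mem, sizeof(fw_mem), 16, HANG_DATA_LEN);

	if (hang) {
		printf("hang data: %.16s\n", (char *)hang);
		free(hang);
	}
	return 0;
}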
diff --git a/drivers/net/wireless/cnss2/pci.h b/drivers/net/wireless/cnss2/pci.h
index ab1d8cb..2984273 100644
--- a/drivers/net/wireless/cnss2/pci.h
+++ b/drivers/net/wireless/cnss2/pci.h
@@ -162,6 +162,7 @@ static inline int cnss_pci_get_drv_connected(void *bus_priv)
return atomic_read(&pci_priv->drv_connected);
}
+int cnss_pci_check_link_status(struct cnss_pci_data *pci_priv);
int cnss_suspend_pci_link(struct cnss_pci_data *pci_priv);
int cnss_resume_pci_link(struct cnss_pci_data *pci_priv);
int cnss_pci_init(struct cnss_plat_data *plat_priv);
@@ -199,6 +200,8 @@ void cnss_pci_pm_runtime_put_noidle(struct cnss_pci_data *pci_priv);
void cnss_pci_pm_runtime_mark_last_busy(struct cnss_pci_data *pci_priv);
int cnss_pci_update_status(struct cnss_pci_data *pci_priv,
enum cnss_driver_status status);
+int cnss_call_driver_uevent(struct cnss_pci_data *pci_priv,
+ enum cnss_driver_status status, void *data);
int cnss_pcie_is_device_down(struct cnss_pci_data *pci_priv);
int cnss_pci_suspend_bus(struct cnss_pci_data *pci_priv);
int cnss_pci_resume_bus(struct cnss_pci_data *pci_priv);
diff --git a/drivers/net/wireless/cnss2/reg.h b/drivers/net/wireless/cnss2/reg.h
index 6e7b709..69f22eb 100644
--- a/drivers/net/wireless/cnss2/reg.h
+++ b/drivers/net/wireless/cnss2/reg.h
@@ -102,6 +102,7 @@
#define QCA6390_PCIE_PCIE_LOCAL_REG_WCSSAON_PCIE_SR_STATUS_LOW 0x01E030CC
#define QCA6390_PCIE_PCIE_LOCAL_REG_WCSS_STATUS_FOR_DEBUG_HIGH 0x01E0313C
#define QCA6390_PCIE_PCIE_LOCAL_REG_WCSS_STATUS_FOR_DEBUG_LOW 0x01E03140
+#define QCA6390_PCIE_PCIE_BHI_EXECENV_REG 0x01E0E228
#define QCA6390_GCC_DEBUG_CLK_CTL 0x001E4025C
diff --git a/drivers/net/wireless/cnss_utils/wlan_firmware_service_v01.c b/drivers/net/wireless/cnss_utils/wlan_firmware_service_v01.c
index e094d34..1899d4a 100644
--- a/drivers/net/wireless/cnss_utils/wlan_firmware_service_v01.c
+++ b/drivers/net/wireless/cnss_utils/wlan_firmware_service_v01.c
@@ -748,6 +748,24 @@ struct qmi_elem_info wlfw_ind_register_req_msg_v01_ei[] = {
respond_get_info_enable),
},
{
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x20,
+ .offset = offsetof(struct wlfw_ind_register_req_msg_v01,
+ m3_dump_upload_req_enable_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_1_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x20,
+ .offset = offsetof(struct wlfw_ind_register_req_msg_v01,
+ m3_dump_upload_req_enable),
+ },
+ {
.data_type = QMI_EOTI,
.array_type = NO_ARRAY,
.tlv_type = QMI_COMMON_TLV_TYPE,
@@ -803,6 +821,42 @@ struct qmi_elem_info wlfw_fw_ready_ind_msg_v01_ei[] = {
struct qmi_elem_info wlfw_msa_ready_ind_msg_v01_ei[] = {
{
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x10,
+ .offset = offsetof(struct wlfw_msa_ready_ind_msg_v01,
+ hang_data_addr_offset_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u32),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x10,
+ .offset = offsetof(struct wlfw_msa_ready_ind_msg_v01,
+ hang_data_addr_offset),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x11,
+ .offset = offsetof(struct wlfw_msa_ready_ind_msg_v01,
+ hang_data_length_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_2_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u16),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x11,
+ .offset = offsetof(struct wlfw_msa_ready_ind_msg_v01,
+ hang_data_length),
+ },
+ {
.data_type = QMI_EOTI,
.array_type = NO_ARRAY,
.tlv_type = QMI_COMMON_TLV_TYPE,
@@ -1319,6 +1373,24 @@ struct qmi_elem_info wlfw_cap_resp_msg_v01_ei[] = {
otp_version),
},
{
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x19,
+ .offset = offsetof(struct wlfw_cap_resp_msg_v01,
+ eeprom_caldata_read_timeout_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u32),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x19,
+ .offset = offsetof(struct wlfw_cap_resp_msg_v01,
+ eeprom_caldata_read_timeout),
+ },
+ {
.data_type = QMI_EOTI,
.array_type = NO_ARRAY,
.tlv_type = QMI_COMMON_TLV_TYPE,
@@ -1516,6 +1588,24 @@ struct qmi_elem_info wlfw_cal_report_req_msg_v01_ei[] = {
xo_cal_data),
},
{
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x11,
+ .offset = offsetof(struct wlfw_cal_report_req_msg_v01,
+ cal_remove_supported_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_1_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x11,
+ .offset = offsetof(struct wlfw_cal_report_req_msg_v01,
+ cal_remove_supported),
+ },
+ {
.data_type = QMI_EOTI,
.array_type = NO_ARRAY,
.tlv_type = QMI_COMMON_TLV_TYPE,
@@ -2592,6 +2682,24 @@ struct qmi_elem_info wlfw_host_cap_req_msg_v01_ei[] = {
.array_type = NO_ARRAY,
.tlv_type = 0x1E,
.offset = offsetof(struct wlfw_host_cap_req_msg_v01,
+ platform_name_valid),
+ },
+ {
+ .data_type = QMI_STRING,
+ .elem_len = QMI_WLFW_MAX_PLATFORM_NAME_LEN_V01 + 1,
+ .elem_size = sizeof(char),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x1E,
+ .offset = offsetof(struct wlfw_host_cap_req_msg_v01,
+ platform_name),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x1F,
+ .offset = offsetof(struct wlfw_host_cap_req_msg_v01,
ddr_range_valid),
},
{
@@ -2599,7 +2707,7 @@ struct qmi_elem_info wlfw_host_cap_req_msg_v01_ei[] = {
.elem_len = QMI_WLFW_MAX_HOST_DDR_RANGE_SIZE_V01,
.elem_size = sizeof(struct wlfw_host_ddr_range_s_v01),
.array_type = STATIC_ARRAY,
- .tlv_type = 0x1E,
+ .tlv_type = 0x1F,
.offset = offsetof(struct wlfw_host_cap_req_msg_v01,
ddr_range),
.ei_array = wlfw_host_ddr_range_s_v01_ei,
@@ -3874,3 +3982,159 @@ struct qmi_elem_info wlfw_device_info_resp_msg_v01_ei[] = {
},
};
+struct qmi_elem_info wlfw_m3_dump_upload_req_ind_msg_v01_ei[] = {
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u32),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x01,
+ .offset = offsetof(struct
+ wlfw_m3_dump_upload_req_ind_msg_v01,
+ pdev_id),
+ },
+ {
+ .data_type = QMI_UNSIGNED_8_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u64),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x02,
+ .offset = offsetof(struct
+ wlfw_m3_dump_upload_req_ind_msg_v01,
+ addr),
+ },
+ {
+ .data_type = QMI_UNSIGNED_8_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u64),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x03,
+ .offset = offsetof(struct
+ wlfw_m3_dump_upload_req_ind_msg_v01,
+ size),
+ },
+ {
+ .data_type = QMI_EOTI,
+ .array_type = NO_ARRAY,
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ },
+};
+
+struct qmi_elem_info wlfw_m3_dump_upload_done_req_msg_v01_ei[] = {
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u32),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x01,
+ .offset = offsetof(struct
+ wlfw_m3_dump_upload_done_req_msg_v01,
+ pdev_id),
+ },
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u32),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x02,
+ .offset = offsetof(struct
+ wlfw_m3_dump_upload_done_req_msg_v01,
+ status),
+ },
+ {
+ .data_type = QMI_EOTI,
+ .array_type = NO_ARRAY,
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ },
+};
+
+struct qmi_elem_info wlfw_m3_dump_upload_done_resp_msg_v01_ei[] = {
+ {
+ .data_type = QMI_STRUCT,
+ .elem_len = 1,
+ .elem_size = sizeof(struct qmi_response_type_v01),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x02,
+ .offset = offsetof(struct
+ wlfw_m3_dump_upload_done_resp_msg_v01,
+ resp),
+ .ei_array = qmi_response_type_v01_ei,
+ },
+ {
+ .data_type = QMI_EOTI,
+ .array_type = NO_ARRAY,
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ },
+};
+
+struct qmi_elem_info wlfw_soc_wake_req_msg_v01_ei[] = {
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x10,
+ .offset = offsetof(struct
+ wlfw_soc_wake_req_msg_v01,
+ wake_valid),
+ },
+ {
+ .data_type = QMI_SIGNED_4_BYTE_ENUM,
+ .elem_len = 1,
+ .elem_size = sizeof(enum wlfw_soc_wake_enum_v01),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x10,
+ .offset = offsetof(struct wlfw_soc_wake_req_msg_v01,
+ wake),
+ },
+ {
+ .data_type = QMI_EOTI,
+ .array_type = NO_ARRAY,
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ },
+};
+
+struct qmi_elem_info wlfw_soc_wake_resp_msg_v01_ei[] = {
+ {
+ .data_type = QMI_STRUCT,
+ .elem_len = 1,
+ .elem_size = sizeof(struct qmi_response_type_v01),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x02,
+ .offset = offsetof(struct wlfw_soc_wake_resp_msg_v01,
+ resp),
+ .ei_array = qmi_response_type_v01_ei,
+ },
+ {
+ .data_type = QMI_EOTI,
+ .array_type = NO_ARRAY,
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ },
+};
+
+struct qmi_elem_info wlfw_exit_power_save_req_msg_v01_ei[] = {
+ {
+ .data_type = QMI_EOTI,
+ .array_type = NO_ARRAY,
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ },
+};
+
+struct qmi_elem_info wlfw_exit_power_save_resp_msg_v01_ei[] = {
+ {
+ .data_type = QMI_STRUCT,
+ .elem_len = 1,
+ .elem_size = sizeof(struct qmi_response_type_v01),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x02,
+ .offset = offsetof(struct
+ wlfw_exit_power_save_resp_msg_v01,
+ resp),
+ .ei_array = qmi_response_type_v01_ei,
+ },
+ {
+ .data_type = QMI_EOTI,
+ .array_type = NO_ARRAY,
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ },
+};
diff --git a/drivers/net/wireless/cnss_utils/wlan_firmware_service_v01.h b/drivers/net/wireless/cnss_utils/wlan_firmware_service_v01.h
index fa64bc0..e105cd4 100644
--- a/drivers/net/wireless/cnss_utils/wlan_firmware_service_v01.h
+++ b/drivers/net/wireless/cnss_utils/wlan_firmware_service_v01.h
@@ -17,6 +17,7 @@
#define QMI_WLFW_GET_INFO_REQ_V01 0x004A
#define QMI_WLFW_INITIATE_CAL_UPDATE_IND_V01 0x002A
#define QMI_WLFW_CAL_DONE_IND_V01 0x003E
+#define QMI_WLFW_M3_DUMP_UPLOAD_REQ_IND_V01 0x004D
#define QMI_WLFW_WFC_CALL_STATUS_RESP_V01 0x0049
#define QMI_WLFW_HOST_CAP_REQ_V01 0x0034
#define QMI_WLFW_DYNAMIC_FEATURE_MASK_RESP_V01 0x003B
@@ -28,6 +29,7 @@
#define QMI_WLFW_RESPOND_GET_INFO_IND_V01 0x004B
#define QMI_WLFW_M3_INFO_RESP_V01 0x003C
#define QMI_WLFW_CAL_UPDATE_RESP_V01 0x0029
+#define QMI_WLFW_M3_DUMP_UPLOAD_DONE_RESP_V01 0x004E
#define QMI_WLFW_CAL_DOWNLOAD_RESP_V01 0x0027
#define QMI_WLFW_XO_CAL_IND_V01 0x003D
#define QMI_WLFW_INI_RESP_V01 0x002F
@@ -41,12 +43,14 @@
#define QMI_WLFW_HOST_CAP_RESP_V01 0x0034
#define QMI_WLFW_MSA_READY_IND_V01 0x002B
#define QMI_WLFW_ATHDIAG_WRITE_RESP_V01 0x0031
+#define QMI_WLFW_EXIT_POWER_SAVE_REQ_V01 0x0050
#define QMI_WLFW_WLAN_MODE_REQ_V01 0x0022
#define QMI_WLFW_IND_REGISTER_REQ_V01 0x0020
#define QMI_WLFW_WLAN_CFG_RESP_V01 0x0023
#define QMI_WLFW_QDSS_TRACE_MODE_REQ_V01 0x0045
#define QMI_WLFW_REQUEST_MEM_IND_V01 0x0035
#define QMI_WLFW_QDSS_TRACE_CONFIG_DOWNLOAD_RESP_V01 0x0044
+#define QMI_WLFW_SOC_WAKE_RESP_V01 0x004F
#define QMI_WLFW_REJUVENATE_IND_V01 0x0039
#define QMI_WLFW_DYNAMIC_FEATURE_MASK_REQ_V01 0x003B
#define QMI_WLFW_ATHDIAG_WRITE_REQ_V01 0x0031
@@ -68,7 +72,9 @@
#define QMI_WLFW_MSA_INFO_RESP_V01 0x002D
#define QMI_WLFW_MSA_READY_REQ_V01 0x002E
#define QMI_WLFW_QDSS_TRACE_DATA_RESP_V01 0x0042
+#define QMI_WLFW_M3_DUMP_UPLOAD_DONE_REQ_V01 0x004E
#define QMI_WLFW_CAP_RESP_V01 0x0024
+#define QMI_WLFW_SOC_WAKE_REQ_V01 0x004F
#define QMI_WLFW_REJUVENATE_ACK_REQ_V01 0x003A
#define QMI_WLFW_ATHDIAG_READ_RESP_V01 0x0030
#define QMI_WLFW_SHUTDOWN_REQ_V01 0x0043
@@ -76,6 +82,7 @@
#define QMI_WLFW_ANTENNA_SWITCH_RESP_V01 0x0047
#define QMI_WLFW_DEVICE_INFO_REQ_V01 0x004C
#define QMI_WLFW_MAC_ADDR_REQ_V01 0x0033
+#define QMI_WLFW_EXIT_POWER_SAVE_RESP_V01 0x0050
#define QMI_WLFW_RESPOND_MEM_RESP_V01 0x0036
#define QMI_WLFW_VBATT_RESP_V01 0x0032
#define QMI_WLFW_MSA_INFO_REQ_V01 0x002D
@@ -102,6 +109,7 @@
#define QMI_WLFW_MAX_NUM_SHADOW_REG_V01 24
#define QMI_WLFW_MAC_ADDR_SIZE_V01 6
#define QMI_WLFW_MAX_NUM_SHADOW_REG_V2_V01 36
+#define QMI_WLFW_MAX_PLATFORM_NAME_LEN_V01 64
#define QMI_WLFW_MAX_NUM_SVC_V01 24
enum wlfw_driver_mode_enum_v01 {
@@ -155,6 +163,13 @@ enum wlfw_qdss_trace_mode_enum_v01 {
WLFW_QDSS_TRACE_MODE_ENUM_MAX_VAL_V01 = INT_MAX,
};
+enum wlfw_soc_wake_enum_v01 {
+ WLFW_SOC_WAKE_ENUM_MIN_VAL_V01 = INT_MIN,
+ QMI_WLFW_WAKE_REQUEST_V01 = 0,
+ QMI_WLFW_WAKE_RELEASE_V01 = 1,
+ WLFW_SOC_WAKE_ENUM_MAX_VAL_V01 = INT_MAX,
+};
+
#define QMI_WLFW_CE_ATTR_FLAGS_V01 ((u32)0x00)
#define QMI_WLFW_CE_ATTR_NO_SNOOP_V01 ((u32)0x01)
#define QMI_WLFW_CE_ATTR_BYTE_SWAP_DATA_V01 ((u32)0x02)
@@ -285,9 +300,11 @@ struct wlfw_ind_register_req_msg_v01 {
u8 qdss_trace_free_enable;
u8 respond_get_info_enable_valid;
u8 respond_get_info_enable;
+ u8 m3_dump_upload_req_enable_valid;
+ u8 m3_dump_upload_req_enable;
};
-#define WLFW_IND_REGISTER_REQ_MSG_V01_MAX_MSG_LEN 70
+#define WLFW_IND_REGISTER_REQ_MSG_V01_MAX_MSG_LEN 74
extern struct qmi_elem_info wlfw_ind_register_req_msg_v01_ei[];
struct wlfw_ind_register_resp_msg_v01 {
@@ -307,10 +324,13 @@ struct wlfw_fw_ready_ind_msg_v01 {
extern struct qmi_elem_info wlfw_fw_ready_ind_msg_v01_ei[];
struct wlfw_msa_ready_ind_msg_v01 {
- char placeholder;
+ u8 hang_data_addr_offset_valid;
+ u32 hang_data_addr_offset;
+ u8 hang_data_length_valid;
+ u16 hang_data_length;
};
-#define WLFW_MSA_READY_IND_MSG_V01_MAX_MSG_LEN 0
+#define WLFW_MSA_READY_IND_MSG_V01_MAX_MSG_LEN 12
extern struct qmi_elem_info wlfw_msa_ready_ind_msg_v01_ei[];
struct wlfw_pin_connect_result_ind_msg_v01 {
@@ -402,9 +422,11 @@ struct wlfw_cap_resp_msg_v01 {
u32 time_freq_hz;
u8 otp_version_valid;
u32 otp_version;
+ u8 eeprom_caldata_read_timeout_valid;
+ u32 eeprom_caldata_read_timeout;
};
-#define WLFW_CAP_RESP_MSG_V01_MAX_MSG_LEN 228
+#define WLFW_CAP_RESP_MSG_V01_MAX_MSG_LEN 235
extern struct qmi_elem_info wlfw_cap_resp_msg_v01_ei[];
struct wlfw_bdf_download_req_msg_v01 {
@@ -439,9 +461,11 @@ struct wlfw_cal_report_req_msg_v01 {
enum wlfw_cal_temp_id_enum_v01 meta_data[QMI_WLFW_MAX_NUM_CAL_V01];
u8 xo_cal_data_valid;
u8 xo_cal_data;
+ u8 cal_remove_supported_valid;
+ u8 cal_remove_supported;
};
-#define WLFW_CAL_REPORT_REQ_MSG_V01_MAX_MSG_LEN 28
+#define WLFW_CAL_REPORT_REQ_MSG_V01_MAX_MSG_LEN 32
extern struct qmi_elem_info wlfw_cal_report_req_msg_v01_ei[];
struct wlfw_cal_report_resp_msg_v01 {
@@ -669,12 +693,14 @@ struct wlfw_host_cap_req_msg_v01 {
u8 mem_cfg_mode;
u8 cal_duration_valid;
u16 cal_duration;
+ u8 platform_name_valid;
+ char platform_name[QMI_WLFW_MAX_PLATFORM_NAME_LEN_V01 + 1];
u8 ddr_range_valid;
struct wlfw_host_ddr_range_s_v01
ddr_range[QMI_WLFW_MAX_HOST_DDR_RANGE_SIZE_V01];
};
-#define WLFW_HOST_CAP_REQ_MSG_V01_MAX_MSG_LEN 245
+#define WLFW_HOST_CAP_REQ_MSG_V01_MAX_MSG_LEN 312
extern struct qmi_elem_info wlfw_host_cap_req_msg_v01_ei[];
struct wlfw_host_cap_resp_msg_v01 {
@@ -1013,4 +1039,57 @@ struct wlfw_device_info_resp_msg_v01 {
#define WLFW_DEVICE_INFO_RESP_MSG_V01_MAX_MSG_LEN 25
extern struct qmi_elem_info wlfw_device_info_resp_msg_v01_ei[];
+struct wlfw_m3_dump_upload_req_ind_msg_v01 {
+ u32 pdev_id;
+ u64 addr;
+ u64 size;
+};
+
+#define WLFW_M3_DUMP_UPLOAD_REQ_IND_MSG_V01_MAX_MSG_LEN 29
+extern struct qmi_elem_info wlfw_m3_dump_upload_req_ind_msg_v01_ei[];
+
+struct wlfw_m3_dump_upload_done_req_msg_v01 {
+ u32 pdev_id;
+ u32 status;
+};
+
+#define WLFW_M3_DUMP_UPLOAD_DONE_REQ_MSG_V01_MAX_MSG_LEN 14
+extern struct qmi_elem_info wlfw_m3_dump_upload_done_req_msg_v01_ei[];
+
+struct wlfw_m3_dump_upload_done_resp_msg_v01 {
+ struct qmi_response_type_v01 resp;
+};
+
+#define WLFW_M3_DUMP_UPLOAD_DONE_RESP_MSG_V01_MAX_MSG_LEN 7
+extern struct qmi_elem_info wlfw_m3_dump_upload_done_resp_msg_v01_ei[];
+
+struct wlfw_soc_wake_req_msg_v01 {
+ u8 wake_valid;
+ enum wlfw_soc_wake_enum_v01 wake;
+};
+
+#define WLFW_SOC_WAKE_REQ_MSG_V01_MAX_MSG_LEN 7
+extern struct qmi_elem_info wlfw_soc_wake_req_msg_v01_ei[];
+
+struct wlfw_soc_wake_resp_msg_v01 {
+ struct qmi_response_type_v01 resp;
+};
+
+#define WLFW_SOC_WAKE_RESP_MSG_V01_MAX_MSG_LEN 7
+extern struct qmi_elem_info wlfw_soc_wake_resp_msg_v01_ei[];
+
+struct wlfw_exit_power_save_req_msg_v01 {
+ char placeholder;
+};
+
+#define WLFW_EXIT_POWER_SAVE_REQ_MSG_V01_MAX_MSG_LEN 0
+extern struct qmi_elem_info wlfw_exit_power_save_req_msg_v01_ei[];
+
+struct wlfw_exit_power_save_resp_msg_v01 {
+ struct qmi_response_type_v01 resp;
+};
+
+#define WLFW_EXIT_POWER_SAVE_RESP_MSG_V01_MAX_MSG_LEN 7
+extern struct qmi_elem_info wlfw_exit_power_save_resp_msg_v01_ei[];
+
#endif
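The new QMI message definitions above follow the usual element-info pattern: each optional field is described by a pair of entries sharing one tlv_type (a QMI_OPT_FLAG "valid" byte plus the value element), and the table ends with QMI_EOTI. The sketch below only illustrates that table shape with simplified local types; it is not the kernel's qmi_elem_info machinery.

#include <stdio.h>

/* Simplified stand-ins for the kernel's qmi_elem_info data types. */
enum elem_data_type { OPT_FLAG, U8, U16, U32, EOTI };

struct elem_info {
	enum elem_data_type data_type;
	unsigned int tlv_type;
	const char *field;
};

/*
 * One optional field == two entries with the same tlv_type:
 * the "valid" flag and the value itself; EOTI closes the table.
 */
static const struct elem_info example_ind_ei[] = {
	{ OPT_FLAG, 0x10, "hang_data_addr_offset_valid" },
	{ U32,      0x10, "hang_data_addr_offset" },
	{ OPT_FLAG, 0x11, "hang_data_length_valid" },
	{ U16,      0x11, "hang_data_length" },
	{ EOTI,     0x00, NULL },
};

int main(void)
{
	const struct elem_info *ei;

	for (ei = example_ind_ei; ei->data_type != EOTI; ei++)
		printf("tlv 0x%02x: %s\n", ei->tlv_type, ei->field);
	return 0;
}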
diff --git a/drivers/pci/controller/pci-msm.c b/drivers/pci/controller/pci-msm.c
index 71ebe4c..b30331f 100644
--- a/drivers/pci/controller/pci-msm.c
+++ b/drivers/pci/controller/pci-msm.c
@@ -4741,7 +4741,7 @@ static void msm_pcie_notify_client(struct msm_pcie_dev_t *dev,
struct msm_pcie_notify client_notify;
client_notify.event = event;
- client_notify.user = notify->user;
+ client_notify.user = dev->event_reg->user;
client_notify.data = notify->data;
client_notify.options = notify->options;
PCIE_DUMP(dev, "PCIe: callback RC%d for event %d\n",
diff --git a/drivers/perf/qcom_l2_counters.c b/drivers/perf/qcom_l2_counters.c
index 2430870..d1c93f2 100644
--- a/drivers/perf/qcom_l2_counters.c
+++ b/drivers/perf/qcom_l2_counters.c
@@ -147,14 +147,12 @@ struct l2cache_pmu {
struct list_head clusters;
};
-
static unsigned int which_cluster_tenure = 1;
static u32 l2_counter_present_mask;
#define to_l2cache_pmu(p) (container_of(p, struct l2cache_pmu, pmu))
#define to_cluster_device(d) container_of(d, struct cluster_pmu, dev)
-
static inline struct cluster_pmu *get_cluster_pmu(
struct l2cache_pmu *l2cache_pmu, int cpu)
{
@@ -408,7 +406,7 @@ static void l2_cache_event_update(struct perf_event *event, u32 ovsr)
u64 delta, prev, now;
u32 event_idx = hwc->config_base;
u32 event_grp;
- struct cluster_pmu *cluster;
+ struct cluster_pmu *cluster = event->pmu_private;
prev = local64_read(&hwc->prev_count);
if (ovsr) {
@@ -416,7 +414,6 @@ static void l2_cache_event_update(struct perf_event *event, u32 ovsr)
goto out;
}
- cluster = get_cluster_pmu(to_l2cache_pmu(event->pmu), event->cpu);
event_idx = (hwc->config_base & REGBIT_MASK) >> REGBIT_SHIFT;
event_grp = hwc->config_base & EVENT_GROUP_MASK;
do {
@@ -557,6 +554,7 @@ static int l2_cache_event_init(struct perf_event *event)
hwc->idx = -1;
hwc->config_base = event->attr.config;
event->readable_on_cpus = CPU_MASK_ALL;
+ event->pmu_private = cluster;
/*
	 * We are overriding event->cpu, as it is possible to enable events,
@@ -568,12 +566,11 @@ static int l2_cache_event_init(struct perf_event *event)
static void l2_cache_event_start(struct perf_event *event, int flags)
{
- struct cluster_pmu *cluster;
struct hw_perf_event *hwc = &event->hw;
+ struct cluster_pmu *cluster = event->pmu_private;
int event_idx;
hwc->state = 0;
- cluster = get_cluster_pmu(to_l2cache_pmu(event->pmu), event->cpu);
event_idx = (hwc->config_base & REGBIT_MASK) >> REGBIT_SHIFT;
if ((hwc->config_base & EVENT_GROUP_MASK) == TENURE_CNTRS_GROUP_ID) {
cluster_tenure_counter_enable(cluster, event_idx);
@@ -586,11 +583,10 @@ static void l2_cache_event_start(struct perf_event *event, int flags)
static void l2_cache_event_stop(struct perf_event *event, int flags)
{
struct hw_perf_event *hwc = &event->hw;
- struct cluster_pmu *cluster;
+ struct cluster_pmu *cluster = event->pmu_private;
int event_idx;
u32 ovsr;
- cluster = get_cluster_pmu(to_l2cache_pmu(event->pmu), event->cpu);
if (hwc->state & PERF_HES_STOPPED)
return;
@@ -615,10 +611,9 @@ static void l2_cache_event_stop(struct perf_event *event, int flags)
static int l2_cache_event_add(struct perf_event *event, int flags)
{
struct hw_perf_event *hwc = &event->hw;
+ struct cluster_pmu *cluster = event->pmu_private;
int idx;
- struct cluster_pmu *cluster;
- cluster = get_cluster_pmu(to_l2cache_pmu(event->pmu), event->cpu);
idx = l2_cache_get_event_idx(cluster, event);
if (idx < 0)
return idx;
@@ -640,11 +635,9 @@ static int l2_cache_event_add(struct perf_event *event, int flags)
static void l2_cache_event_del(struct perf_event *event, int flags)
{
struct hw_perf_event *hwc = &event->hw;
- struct cluster_pmu *cluster;
int idx = hwc->idx;
unsigned long intr_flag;
-
- cluster = get_cluster_pmu(to_l2cache_pmu(event->pmu), event->cpu);
+ struct cluster_pmu *cluster = event->pmu_private;
/*
* We could race here with overflow interrupt of this event.
@@ -913,13 +906,26 @@ static struct cluster_pmu *l2_cache_associate_cpu_with_cluster(
return cluster;
}
+static void clusters_initialization(struct l2cache_pmu *l2cache_pmu,
+ unsigned int cpu)
+{
+ struct cluster_pmu *temp_cluster = NULL;
+
+ list_for_each_entry(temp_cluster, &l2cache_pmu->clusters, next) {
+ cluster_pmu_reset(temp_cluster);
+ enable_irq(temp_cluster->irq);
+ temp_cluster->on_cpu = cpu;
+ }
+}
+
static int l2cache_pmu_online_cpu(unsigned int cpu, struct hlist_node *node)
{
struct cluster_pmu *cluster;
struct l2cache_pmu *l2cache_pmu;
+ cpumask_t cluster_online_cpus;
if (!node)
- return 0;
+ goto out;
l2cache_pmu = hlist_entry_safe(node, struct l2cache_pmu, node);
cluster = get_cluster_pmu(l2cache_pmu, cpu);
@@ -929,45 +935,59 @@ static int l2cache_pmu_online_cpu(unsigned int cpu, struct hlist_node *node)
if (!cluster) {
/* Only if broken firmware doesn't list every cluster */
WARN_ONCE(1, "No L2 cache cluster for CPU%d\n", cpu);
- return 0;
+ goto out;
}
}
- /* If another CPU is managing this cluster, we're done */
- if (cluster->on_cpu != -1)
- return 0;
-
/*
- * All CPUs on this cluster were down, use this one.
- * Reset to put it into sane state.
+	 * If another CPU is already managing this cluster, check whether
+	 * that CPU belongs to the same cluster.
*/
+ if (cluster->on_cpu != -1) {
+ cpumask_and(&cluster_online_cpus, &cluster->cluster_cpus,
+ get_cpu_mask(cluster->on_cpu));
+ if (cpumask_test_cpu(cluster->on_cpu, &cluster_online_cpus))
+ goto out;
+ } else {
+ clusters_initialization(l2cache_pmu, cpu);
+ cpumask_set_cpu(cpu, &l2cache_pmu->cpumask);
+ goto out;
+ }
+
cluster->on_cpu = cpu;
cpumask_set_cpu(cpu, &l2cache_pmu->cpumask);
- cluster_pmu_reset(cluster);
- enable_irq(cluster->irq);
-
+out:
return 0;
}
+static void disable_clusters_interrupt(struct l2cache_pmu *l2cache_pmu)
+{
+ struct cluster_pmu *temp_cluster = NULL;
+
+ list_for_each_entry(temp_cluster, &l2cache_pmu->clusters, next)
+ disable_irq(temp_cluster->irq);
+}
+
static int l2cache_pmu_offline_cpu(unsigned int cpu, struct hlist_node *node)
{
struct cluster_pmu *cluster;
struct l2cache_pmu *l2cache_pmu;
cpumask_t cluster_online_cpus;
unsigned int target;
+ struct cluster_pmu *temp_cluster = NULL;
if (!node)
- return 0;
+ goto out;
l2cache_pmu = hlist_entry_safe(node, struct l2cache_pmu, node);
cluster = get_cluster_pmu(l2cache_pmu, cpu);
if (!cluster)
- return 0;
+ goto out;
/* If this CPU is not managing the cluster, we're done */
if (cluster->on_cpu != cpu)
- return 0;
+ goto out;
/* Give up ownership of cluster */
cpumask_clear_cpu(cpu, &l2cache_pmu->cpumask);
@@ -975,17 +995,31 @@ static int l2cache_pmu_offline_cpu(unsigned int cpu, struct hlist_node *node)
/* Any other CPU for this cluster which is still online */
cpumask_and(&cluster_online_cpus, &cluster->cluster_cpus,
- cpu_online_mask);
+ cpu_online_mask);
target = cpumask_any_but(&cluster_online_cpus, cpu);
if (target >= nr_cpu_ids) {
- disable_irq(cluster->irq);
- return 0;
+ cpumask_and(&cluster_online_cpus, &l2cache_pmu->cpumask,
+ cpu_online_mask);
+ target = cpumask_first(&cluster_online_cpus);
+ if (target >= nr_cpu_ids) {
+ disable_clusters_interrupt(l2cache_pmu);
+ goto out;
+ }
+ }
+
+ cluster->on_cpu = target;
+ if (cpumask_first(&l2cache_pmu->cpumask) >= nr_cpu_ids) {
+ list_for_each_entry(temp_cluster,
+ &l2cache_pmu->clusters, next) {
+ if (temp_cluster->cluster_id != cluster->cluster_id)
+ temp_cluster->on_cpu = target;
+ }
}
perf_pmu_migrate_context(&l2cache_pmu->pmu, cpu, target);
- cluster->on_cpu = target;
cpumask_set_cpu(target, &l2cache_pmu->cpumask);
+out:
return 0;
}
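The qcom_l2_counters.c changes above cache the owning cluster in event->pmu_private at init time so the start/stop/add/del paths no longer resolve it on every call. The sketch below shows that "resolve once, stash it in the event" idea with hypothetical event and cluster types; it is a shape illustration, not the driver's real code.

#include <stdio.h>

struct cluster { int id; };

struct perf_event_example {
	int cpu;
	void *pmu_private;	/* cached owner, resolved once at init */
};

/* Hypothetical lookup that would otherwise run on every hot-path call. */
static struct cluster *lookup_cluster(int cpu)
{
	static struct cluster clusters[2] = { { 0 }, { 1 } };

	return &clusters[cpu / 4];	/* e.g. 4 CPUs per cluster */
}

static void event_init(struct perf_event_example *ev)
{
	ev->pmu_private = lookup_cluster(ev->cpu);
}

static void event_start(struct perf_event_example *ev)
{
	struct cluster *c = ev->pmu_private;	/* no lookup here */

	printf("start event on cluster %d\n", c->id);
}

int main(void)
{
	struct perf_event_example ev = { .cpu = 5 };

	event_init(&ev);
	event_start(&ev);
	return 0;
}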
diff --git a/drivers/platform/msm/gsi/Makefile b/drivers/platform/msm/gsi/Makefile
index be6968e..23d7c17 100644
--- a/drivers/platform/msm/gsi/Makefile
+++ b/drivers/platform/msm/gsi/Makefile
@@ -1,6 +1,6 @@
# SPDX-License-Identifier: GPL-2.0-only
-gsidbg-$(CONFIG_DEBUG_FS) += gsi_dbg.o
+gsidbg-$(CONFIG_GSI) += gsi_dbg.o
obj-$(CONFIG_GSI) += gsi.o gsidbg.o
obj-$(CONFIG_IPA_EMULATION) += gsi_emulation.o
diff --git a/drivers/platform/msm/gsi/gsi_dbg.c b/drivers/platform/msm/gsi/gsi_dbg.c
index dd5802b..15f0aae 100644
--- a/drivers/platform/msm/gsi/gsi_dbg.c
+++ b/drivers/platform/msm/gsi/gsi_dbg.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
- * Copyright (c) 2015-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2015-2020, The Linux Foundation. All rights reserved.
*/
#include <linux/completion.h>
@@ -19,7 +19,9 @@
#define PRT_STAT(fmt, args...) \
pr_err(fmt, ## args)
+#ifdef CONFIG_DEBUG_FS
static struct dentry *dent;
+#endif
static char dbg_buff[4096];
static void *gsi_ipc_logbuf_low;
@@ -670,6 +672,7 @@ const struct file_operations gsi_ipc_low_ops = {
.write = gsi_enable_ipc_low,
};
+#ifdef CONFIG_DEBUG_FS
void gsi_debugfs_init(void)
{
static struct dentry *dfile;
@@ -741,4 +744,5 @@ void gsi_debugfs_init(void)
fail:
debugfs_remove_recursive(dent);
}
+#endif
diff --git a/drivers/platform/msm/ipa/ipa_api.c b/drivers/platform/msm/ipa/ipa_api.c
index 7ee84e6..6392315 100644
--- a/drivers/platform/msm/ipa/ipa_api.c
+++ b/drivers/platform/msm/ipa/ipa_api.c
@@ -3740,8 +3740,8 @@ int ipa_get_prot_id(enum ipa_client_type client)
EXPORT_SYMBOL(ipa_get_prot_id);
static const struct dev_pm_ops ipa_pm_ops = {
- .suspend = ipa_ap_suspend,
- .resume_noirq = ipa_ap_resume,
+ .suspend_late = ipa_ap_suspend,
+ .resume_early = ipa_ap_resume,
};
static struct platform_driver ipa_plat_drv = {
diff --git a/drivers/platform/msm/ipa/ipa_clients/ipa_uc_offload.c b/drivers/platform/msm/ipa/ipa_clients/ipa_uc_offload.c
index 807c75a..4519473 100644
--- a/drivers/platform/msm/ipa/ipa_clients/ipa_uc_offload.c
+++ b/drivers/platform/msm/ipa/ipa_clients/ipa_uc_offload.c
@@ -1,10 +1,11 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
- * Copyright (c) 2015-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2015-2019, 2020, The Linux Foundation. All rights reserved.
*/
#include <linux/ipa_uc_offload.h>
#include <linux/msm_ipa.h>
+#include <linux/if_vlan.h>
#include "../ipa_common_i.h"
#include "../ipa_v3/ipa_pm.h"
@@ -160,7 +161,7 @@ static int ipa_uc_offload_ntn_reg_intf(
struct ipa_ioc_rx_intf_prop rx_prop[2];
int ret = 0;
u32 len;
-
+ bool is_vlan_mode;
IPA_UC_OFFLOAD_DBG("register interface for netdev %s\n",
inp->netdev_name);
@@ -182,6 +183,41 @@ static int ipa_uc_offload_ntn_reg_intf(
goto fail_alloc;
}
+ ret = ipa_is_vlan_mode(IPA_VLAN_IF_ETH, &is_vlan_mode);
+ if (ret) {
+ IPA_UC_OFFLOAD_ERR("get vlan mode failed\n");
+ goto fail;
+ }
+
+ if (is_vlan_mode) {
+ if ((inp->hdr_info[0].hdr_type != IPA_HDR_L2_802_1Q) ||
+ (inp->hdr_info[1].hdr_type != IPA_HDR_L2_802_1Q)) {
+ IPA_UC_OFFLOAD_ERR(
+ "hdr_type mismatch in vlan mode\n");
+ WARN_ON_RATELIMIT_IPA(1);
+ ret = -EFAULT;
+ goto fail;
+ }
+ IPA_UC_OFFLOAD_DBG("vlan HEADER type compatible\n");
+
+ if ((inp->hdr_info[0].hdr_len <
+ (ETH_HLEN + VLAN_HLEN)) ||
+ (inp->hdr_info[1].hdr_len <
+ (ETH_HLEN + VLAN_HLEN))) {
+ IPA_UC_OFFLOAD_ERR(
+ "hdr_len shorter than vlan len (%u) (%u)\n"
+ , inp->hdr_info[0].hdr_len
+ , inp->hdr_info[1].hdr_len);
+ WARN_ON_RATELIMIT_IPA(1);
+ ret = -EFAULT;
+ goto fail;
+ }
+
+ IPA_UC_OFFLOAD_DBG("vlan HEADER len compatible (%u) (%u)\n",
+ inp->hdr_info[0].hdr_len,
+ inp->hdr_info[1].hdr_len);
+ }
+
if (ipa_commit_partial_hdr(hdr, ntn_ctx->netdev_name, inp->hdr_info)) {
IPA_UC_OFFLOAD_ERR("fail to commit partial headers\n");
ret = -EFAULT;
diff --git a/drivers/platform/msm/ipa/ipa_clients/ipa_wdi3.c b/drivers/platform/msm/ipa/ipa_clients/ipa_wdi3.c
index 16de3d9..c9b7a82 100644
--- a/drivers/platform/msm/ipa/ipa_clients/ipa_wdi3.c
+++ b/drivers/platform/msm/ipa/ipa_clients/ipa_wdi3.c
@@ -254,7 +254,7 @@ int ipa_wdi_reg_intf(struct ipa_wdi_reg_intf_in_params *in)
goto fail_commit_hdr;
}
tx.num_props = 2;
- memset(tx_prop, 0, sizeof(*tx_prop));
+ memset(tx_prop, 0, sizeof(*tx_prop) * IPA_TX_MAX_INTF_PROP);
tx.prop = tx_prop;
tx_prop[0].ip = IPA_IP_v4;
@@ -286,7 +286,7 @@ int ipa_wdi_reg_intf(struct ipa_wdi_reg_intf_in_params *in)
goto fail_commit_hdr;
}
rx.num_props = 2;
- memset(rx_prop, 0, sizeof(*rx_prop));
+ memset(rx_prop, 0, sizeof(*rx_prop) * IPA_RX_MAX_INTF_PROP);
rx.prop = rx_prop;
rx_prop[0].ip = IPA_IP_v4;
if (!ipa3_ctx->ipa_wdi3_over_gsi)
diff --git a/drivers/platform/msm/ipa/ipa_clients/rndis_ipa.c b/drivers/platform/msm/ipa/ipa_clients/rndis_ipa.c
index 5ef6ee4..f716452 100644
--- a/drivers/platform/msm/ipa/ipa_clients/rndis_ipa.c
+++ b/drivers/platform/msm/ipa/ipa_clients/rndis_ipa.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
- * Copyright (c) 2013-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2013-2020, The Linux Foundation. All rights reserved.
*/
#include <linux/atomic.h>
@@ -437,8 +437,10 @@ static struct ipa_ep_cfg usb_to_ipa_ep_cfg_deaggr_en = {
},
.deaggr = {
.deaggr_hdr_len = sizeof(struct rndis_pkt_hdr),
+ .syspipe_err_detection = true,
.packet_offset_valid = true,
.packet_offset_location = 8,
+ .ignore_min_pkt_err = true,
.max_packet_len = 8192, /* Will be overridden*/
},
.route = {
diff --git a/drivers/platform/msm/ipa/ipa_v3/ipa_client.c b/drivers/platform/msm/ipa/ipa_v3/ipa_client.c
index 36c18e5..871b3d9 100644
--- a/drivers/platform/msm/ipa/ipa_v3/ipa_client.c
+++ b/drivers/platform/msm/ipa/ipa_v3/ipa_client.c
@@ -61,11 +61,22 @@ int ipa3_enable_data_path(u32 clnt_hdl)
* on other end from IPA hw.
*/
if ((ep->client == IPA_CLIENT_USB_DPL_CONS) ||
- (ep->client == IPA_CLIENT_MHI_DPL_CONS))
+ (ep->client == IPA_CLIENT_MHI_DPL_CONS)) {
+ holb_cfg.tmr_val = 0;
holb_cfg.en = IPA_HOLB_TMR_EN;
- else
+ } else if ((ipa3_ctx->ipa_hw_type == IPA_HW_v4_2 ||
+ ipa3_ctx->ipa_hw_type == IPA_HW_v4_7) &&
+ (ep->client == IPA_CLIENT_WLAN1_CONS ||
+ ep->client == IPA_CLIENT_USB_CONS)) {
+ holb_cfg.en = IPA_HOLB_TMR_EN;
+ if (ipa3_ctx->ipa_hw_type < IPA_HW_v4_5)
+ holb_cfg.tmr_val = IPA_HOLB_TMR_VAL;
+ else
+ holb_cfg.tmr_val = IPA_HOLB_TMR_VAL_4_5;
+ } else {
holb_cfg.en = IPA_HOLB_TMR_DIS;
- holb_cfg.tmr_val = 0;
+ holb_cfg.tmr_val = 0;
+ }
res = ipa3_cfg_ep_holb(clnt_hdl, &holb_cfg);
}
@@ -1400,8 +1411,6 @@ int ipa3_xdci_disconnect(u32 clnt_hdl, bool should_force_clear, u32 qmi_req_id)
if (!ep->keep_ipa_awake)
IPA_ACTIVE_CLIENTS_INC_EP(ipa3_get_client_mapping(clnt_hdl));
- ipa3_disable_data_path(clnt_hdl);
-
if (!IPA_CLIENT_IS_CONS(ep->client)) {
IPADBG("Stopping PROD channel - hdl=%d clnt=%d\n",
clnt_hdl, ep->client);
@@ -1425,6 +1434,9 @@ int ipa3_xdci_disconnect(u32 clnt_hdl, bool should_force_clear, u32 qmi_req_id)
goto stop_chan_fail;
}
}
+
+ ipa3_disable_data_path(clnt_hdl);
+
IPA_ACTIVE_CLIENTS_DEC_EP(ipa3_get_client_mapping(clnt_hdl));
IPADBG("exit\n");
diff --git a/drivers/platform/msm/ipa/ipa_v3/ipa_debugfs.c b/drivers/platform/msm/ipa/ipa_v3/ipa_debugfs.c
index c6301ab..bcf58a7 100644
--- a/drivers/platform/msm/ipa/ipa_v3/ipa_debugfs.c
+++ b/drivers/platform/msm/ipa/ipa_v3/ipa_debugfs.c
@@ -2963,6 +2963,15 @@ struct dentry *ipa_debugfs_get_root(void)
EXPORT_SYMBOL(ipa_debugfs_get_root);
#else /* !CONFIG_DEBUG_FS */
+#define INVALID_NO_OF_CHAR (-1)
void ipa3_debugfs_init(void) {}
void ipa3_debugfs_remove(void) {}
+int _ipa_read_ep_reg_v3_0(char *buf, int max_len, int pipe)
+{
+ return INVALID_NO_OF_CHAR;
+}
+int _ipa_read_ep_reg_v4_0(char *buf, int max_len, int pipe)
+{
+ return INVALID_NO_OF_CHAR;
+}
#endif
diff --git a/drivers/platform/msm/ipa/ipa_v3/ipa_dp.c b/drivers/platform/msm/ipa/ipa_v3/ipa_dp.c
index e897ae2..4fe58bd 100644
--- a/drivers/platform/msm/ipa/ipa_v3/ipa_dp.c
+++ b/drivers/platform/msm/ipa/ipa_v3/ipa_dp.c
@@ -87,6 +87,8 @@
#define IPA_QMAP_ID_BYTE 0
+#define IPA_TX_MAX_DESC (20)
+
static struct sk_buff *ipa3_get_skb_ipa_rx(unsigned int len, gfp_t flags);
static void ipa3_replenish_wlan_rx_cache(struct ipa3_sys_context *sys);
static void ipa3_replenish_rx_cache(struct ipa3_sys_context *sys);
@@ -212,6 +214,7 @@ static void ipa3_tasklet_write_done(unsigned long data)
struct ipa3_sys_context *sys;
struct ipa3_tx_pkt_wrapper *this_pkt;
bool xmit_done = false;
+ unsigned int max_tx_pkt = 0;
sys = (struct ipa3_sys_context *)data;
spin_lock_bh(&sys->spinlock);
@@ -223,9 +226,17 @@ static void ipa3_tasklet_write_done(unsigned long data)
spin_unlock_bh(&sys->spinlock);
ipa3_wq_write_done_common(sys, this_pkt);
spin_lock_bh(&sys->spinlock);
+ max_tx_pkt++;
if (xmit_done)
break;
}
+ /* If the tasklet keeps processing TX packets without a break,
+ * other softirqs cannot run on that core, which can lead to a
+ * watchdog bark. To avoid this, exit the tasklet once the max
+ * number of descriptors has been processed.
+ */
+ if (max_tx_pkt == IPA_TX_MAX_DESC)
+ break;
}
spin_unlock_bh(&sys->spinlock);
}
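For context, the bounded-work loop added above follows a common tasklet idiom: complete at most a fixed number of items per invocation so the softirq does not monopolize the core, and let later work pick up the remainder. A minimal, self-contained sketch of that idiom (hypothetical names, not part of this driver):

#include <linux/interrupt.h>
#include <linux/list.h>
#include <linux/spinlock.h>

#define DEMO_MAX_ITEMS_PER_RUN 20	/* same role as IPA_TX_MAX_DESC */

struct demo_ctx {
	struct tasklet_struct tasklet;
	struct list_head pending;	/* completed descriptors to reap */
	spinlock_t lock;
};

static void demo_tasklet_fn(unsigned long data)
{
	struct demo_ctx *ctx = (struct demo_ctx *)data;
	unsigned int done = 0;

	spin_lock_bh(&ctx->lock);
	while (!list_empty(&ctx->pending) && done < DEMO_MAX_ITEMS_PER_RUN) {
		struct list_head *item = ctx->pending.next;

		list_del(item);
		/* do the per-item work outside the lock */
		spin_unlock_bh(&ctx->lock);
		/* ... free the skb, update stats, etc. ... */
		spin_lock_bh(&ctx->lock);
		done++;
	}
	/* Budget exhausted but work remains: reschedule rather than
	 * starving other softirqs on this core.
	 */
	if (!list_empty(&ctx->pending))
		tasklet_schedule(&ctx->tasklet);
	spin_unlock_bh(&ctx->lock);
}

The patch itself simply breaks out of its loop after IPA_TX_MAX_DESC completions; whether the leftover work is drained by a later interrupt or an explicit reschedule is a driver-specific choice.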
diff --git a/drivers/platform/msm/ipa/ipa_v3/ipa_flt.c b/drivers/platform/msm/ipa/ipa_v3/ipa_flt.c
index d8b7bf1..8071128 100644
--- a/drivers/platform/msm/ipa/ipa_v3/ipa_flt.c
+++ b/drivers/platform/msm/ipa/ipa_v3/ipa_flt.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
- * Copyright (c) 2012-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2012-2020, The Linux Foundation. All rights reserved.
*/
#include "ipa_i.h"
@@ -1693,17 +1693,23 @@ int ipa3_mdfy_flt_rule(struct ipa_ioc_mdfy_flt_rule *hdls)
}
mutex_lock(&ipa3_ctx->lock);
+
for (i = 0; i < hdls->num_rules; i++) {
/* if hashing not supported, all tables are non-hash tables*/
if (ipa3_ctx->ipa_fltrt_not_hashable)
hdls->rules[i].rule.hashable = false;
+
__ipa_convert_flt_mdfy_in(hdls->rules[i], &rule);
- if (__ipa_mdfy_flt_rule(&rule, hdls->ip)) {
- IPAERR_RL("failed to mdfy flt rule %i\n", i);
+
+ result = __ipa_mdfy_flt_rule(&rule, hdls->ip);
+
+ __ipa_convert_flt_mdfy_out(rule, &hdls->rules[i]);
+
+ if (result) {
+ IPAERR_RL("failed to mdfy flt rule %d\n", i);
hdls->rules[i].status = IPA_FLT_STATUS_OF_MDFY_FAILED;
} else {
hdls->rules[i].status = 0;
- __ipa_convert_flt_mdfy_out(rule, &hdls->rules[i]);
}
}
diff --git a/drivers/platform/msm/ipa/ipa_v3/ipa_i.h b/drivers/platform/msm/ipa/ipa_v3/ipa_i.h
index 13bc68a..bb6f7a0 100644
--- a/drivers/platform/msm/ipa/ipa_v3/ipa_i.h
+++ b/drivers/platform/msm/ipa/ipa_v3/ipa_i.h
@@ -69,6 +69,8 @@
#define IPA_HOLB_TMR_DIS 0x0
#define IPA_HOLB_TMR_EN 0x1
+#define IPA_HOLB_TMR_VAL 65535
+#define IPA_HOLB_TMR_VAL_4_5 31
/*
* The transport descriptor size was changed to GSI_CHAN_RE_SIZE_16B, but
* IPA users still use sps_iovec size as FIFO element size.
diff --git a/drivers/platform/msm/ipa/ipa_v3/ipa_intf.c b/drivers/platform/msm/ipa/ipa_v3/ipa_intf.c
index df9aceb..bb6942f 100644
--- a/drivers/platform/msm/ipa/ipa_v3/ipa_intf.c
+++ b/drivers/platform/msm/ipa/ipa_v3/ipa_intf.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
- * Copyright (c) 2013-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2013-2020, The Linux Foundation. All rights reserved.
*/
#include <linux/fs.h>
@@ -114,6 +114,7 @@ int ipa3_register_intf_ext(const char *name, const struct ipa_tx_intf *tx,
kfree(intf);
return -ENOMEM;
}
+ memcpy(intf->tx, tx->prop, len);
}
if (rx) {
diff --git a/drivers/platform/msm/ipa/ipa_v3/ipa_mpm.c b/drivers/platform/msm/ipa/ipa_v3/ipa_mpm.c
index 5c90b0e..f2cdd9b 100644
--- a/drivers/platform/msm/ipa/ipa_v3/ipa_mpm.c
+++ b/drivers/platform/msm/ipa/ipa_v3/ipa_mpm.c
@@ -2779,12 +2779,7 @@ static void ipa_mpm_mhi_status_cb(struct mhi_device *mhi_dev,
IPA_MPM_DBG("Already out of lpm\n");
}
break;
- case MHI_CB_EE_RDDM:
- case MHI_CB_PENDING_DATA:
- case MHI_CB_SYS_ERROR:
- case MHI_CB_FATAL_ERROR:
- case MHI_CB_EE_MISSION_MODE:
- case MHI_CB_DTR_SIGNAL:
+ default:
IPA_MPM_ERR("unexpected event %d\n", mhi_cb);
break;
}
diff --git a/drivers/platform/msm/ipa/ipa_v3/ipa_nat.c b/drivers/platform/msm/ipa/ipa_v3/ipa_nat.c
index 461d77c8..b41b177 100644
--- a/drivers/platform/msm/ipa/ipa_v3/ipa_nat.c
+++ b/drivers/platform/msm/ipa/ipa_v3/ipa_nat.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
- * Copyright (c) 2012-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2012-2020, The Linux Foundation. All rights reserved.
*/
#include <linux/device.h>
@@ -1108,7 +1108,7 @@ static void ipa3_nat_create_init_cmd(
IPADBG("return\n");
}
-static void ipa3_nat_create_modify_pdn_cmd(
+static int ipa3_nat_create_modify_pdn_cmd(
struct ipahal_imm_cmd_dma_shared_mem *mem_cmd, bool zero_mem)
{
size_t pdn_entry_size, mem_size;
@@ -1118,6 +1118,10 @@ static void ipa3_nat_create_modify_pdn_cmd(
ipahal_nat_entry_size(IPAHAL_NAT_IPV4_PDN, &pdn_entry_size);
mem_size = pdn_entry_size * IPA_MAX_PDN_NUM;
+ /* Check that the PDN memory base pointer exists before handing out its physical address */
+ if (!ipa3_ctx->nat_mem.pdn_mem.base)
+ return -EFAULT;
+
if (zero_mem && ipa3_ctx->nat_mem.pdn_mem.base)
memset(ipa3_ctx->nat_mem.pdn_mem.base, 0, mem_size);
@@ -1131,6 +1135,7 @@ static void ipa3_nat_create_modify_pdn_cmd(
IPA_MEM_PART(pdn_config_ofst);
IPADBG("return\n");
+ return 0;
}
static int ipa3_nat_send_init_cmd(struct ipahal_imm_cmd_ip_v4_nat_init *cmd,
@@ -1202,7 +1207,12 @@ static int ipa3_nat_send_init_cmd(struct ipahal_imm_cmd_ip_v4_nat_init *cmd,
}
/* Copy the PDN config table to SRAM */
- ipa3_nat_create_modify_pdn_cmd(&mem_cmd, zero_pdn_table);
+ result = ipa3_nat_create_modify_pdn_cmd(&mem_cmd,
+ zero_pdn_table);
+ if (result) {
+ IPAERR(" Fail to create modify pdn command\n");
+ goto destroy_imm_cmd;
+ }
cmd_pyld[num_cmd] = ipahal_construct_imm_cmd(
IPA_IMM_CMD_DMA_SHARED_MEM, &mem_cmd, false);
if (!cmd_pyld[num_cmd]) {
@@ -1694,7 +1704,12 @@ int ipa3_nat_mdfy_pdn(
/*
* Copy the PDN config table to SRAM
*/
- ipa3_nat_create_modify_pdn_cmd(&mem_cmd, false);
+ result = ipa3_nat_create_modify_pdn_cmd(&mem_cmd, false);
+
+ if (result) {
+ IPAERR(" Fail to create modify pdn command\n");
+ goto bail;
+ }
cmd_pyld = ipahal_construct_imm_cmd(
IPA_IMM_CMD_DMA_SHARED_MEM, &mem_cmd, false);
@@ -2095,6 +2110,8 @@ static void ipa3_nat_ipv6ct_free_mem(
mld_ptr->index_table_expansion_addr = NULL;
}
+ dev->is_hw_init = false;
+ dev->is_mapped = false;
memset(nm_ptr->mem_loc, 0, sizeof(nm_ptr->mem_loc));
}
}
diff --git a/drivers/platform/msm/ipa/ipa_v3/ipa_utils.c b/drivers/platform/msm/ipa/ipa_v3/ipa_utils.c
index a14b0f7..95b059b 100644
--- a/drivers/platform/msm/ipa/ipa_v3/ipa_utils.c
+++ b/drivers/platform/msm/ipa/ipa_v3/ipa_utils.c
@@ -5349,6 +5349,9 @@ int ipa3_cfg_ep_deaggr(u32 clnt_hdl,
clnt_hdl,
ep_deaggr->deaggr_hdr_len);
+ IPADBG("syspipe_err_detection=%d\n",
+ ep_deaggr->syspipe_err_detection);
+
IPADBG("packet_offset_valid=%d\n",
ep_deaggr->packet_offset_valid);
@@ -5356,6 +5359,9 @@ int ipa3_cfg_ep_deaggr(u32 clnt_hdl,
ep_deaggr->packet_offset_location,
ep_deaggr->max_packet_len);
+ IPADBG("ignore_min_pkt_err=%d\n",
+ ep_deaggr->ignore_min_pkt_err);
+
ep = &ipa3_ctx->ep[clnt_hdl];
/* copy over EP cfg */
diff --git a/drivers/platform/msm/ipa/ipa_v3/ipa_wdi3_i.c b/drivers/platform/msm/ipa/ipa_v3/ipa_wdi3_i.c
index 463a3d3..f404f5c 100644
--- a/drivers/platform/msm/ipa/ipa_v3/ipa_wdi3_i.c
+++ b/drivers/platform/msm/ipa/ipa_v3/ipa_wdi3_i.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
- * Copyright (c) 2018 - 2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2019 - 2020, The Linux Foundation. All rights reserved.
*/
#include "ipa_i.h"
@@ -673,6 +673,7 @@ int ipa3_disconn_wdi3_pipes(int ipa_ep_idx_tx, int ipa_ep_idx_rx)
IPAERR("failed to release gsi channel: %d\n", result);
goto exit;
}
+ ipa3_release_wdi3_gsi_smmu_mappings(IPA_WDI3_TX_DIR);
memset(ep_tx, 0, sizeof(struct ipa3_ep_context));
IPADBG("tx client (ep: %d) disconnected\n", ipa_ep_idx_tx);
@@ -693,6 +694,7 @@ int ipa3_disconn_wdi3_pipes(int ipa_ep_idx_tx, int ipa_ep_idx_rx)
IPAERR("failed to release gsi channel: %d\n", result);
goto exit;
}
+ ipa3_release_wdi3_gsi_smmu_mappings(IPA_WDI3_RX_DIR);
if (ipa3_ctx->ipa_hw_type >= IPA_HW_v4_5)
ipa3_uc_debug_stats_dealloc(IPA_HW_PROTOCOL_WDI3);
diff --git a/drivers/platform/msm/ipa/ipa_v3/ipahal/ipahal.c b/drivers/platform/msm/ipa/ipa_v3/ipahal/ipahal.c
index 0bef801..965e48a 100644
--- a/drivers/platform/msm/ipa/ipa_v3/ipahal/ipahal.c
+++ b/drivers/platform/msm/ipa/ipa_v3/ipahal/ipahal.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
- * Copyright (c) 2016-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2016-2020, The Linux Foundation. All rights reserved.
*/
#include <linux/debugfs.h>
@@ -1422,12 +1422,16 @@ static int ipahal_cp_proc_ctx_to_hw_buff_v3(enum ipa_hdr_proc_type type,
(base + offset);
ctx->hdr_add.tlv.type = IPA_PROC_CTX_TLV_TYPE_HDR_ADD;
- ctx->hdr_add.tlv.length = 1;
+ ctx->hdr_add.tlv.length = 2;
ctx->hdr_add.tlv.value = hdr_len;
- ctx->hdr_add.hdr_addr = is_hdr_proc_ctx ? phys_base :
+ hdr_addr = is_hdr_proc_ctx ? phys_base :
hdr_base_addr + offset_entry->offset;
IPAHAL_DBG("header address 0x%x\n",
ctx->hdr_add.hdr_addr);
+ IPAHAL_CP_PROC_CTX_HEADER_UPDATE(ctx->hdr_add.hdr_addr,
+ ctx->hdr_add.hdr_addr_hi, hdr_addr);
+ if (!is_64)
+ ctx->hdr_add.hdr_addr_hi = 0;
ctx->hdr_add_ex.tlv.type = IPA_PROC_CTX_TLV_TYPE_PROC_CMD;
ctx->hdr_add_ex.tlv.length = 1;
diff --git a/drivers/platform/msm/ipa/ipa_v3/ipahal/ipahal_reg.c b/drivers/platform/msm/ipa/ipa_v3/ipahal/ipahal_reg.c
index 2c0cc1f..4d1527b 100644
--- a/drivers/platform/msm/ipa/ipa_v3/ipahal/ipahal_reg.c
+++ b/drivers/platform/msm/ipa/ipa_v3/ipahal/ipahal_reg.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
- * Copyright (c) 2012-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2012-2020, The Linux Foundation. All rights reserved.
*/
#include <linux/init.h>
@@ -1554,6 +1554,37 @@ static void ipareg_construct_endp_init_deaggr_n(
IPA_ENDP_INIT_DEAGGR_n_MAX_PACKET_LEN_BMSK);
}
+static void ipareg_construct_endp_init_deaggr_n_v4_5(
+ enum ipahal_reg_name reg, const void *fields, u32 *val)
+{
+ struct ipa_ep_cfg_deaggr *ep_deaggr =
+ (struct ipa_ep_cfg_deaggr *)fields;
+
+ IPA_SETFIELD_IN_REG(*val, ep_deaggr->deaggr_hdr_len,
+ IPA_ENDP_INIT_DEAGGR_n_DEAGGR_HDR_LEN_SHFT,
+ IPA_ENDP_INIT_DEAGGR_n_DEAGGR_HDR_LEN_BMSK);
+
+ IPA_SETFIELD_IN_REG(*val, ep_deaggr->syspipe_err_detection,
+ IPA_ENDP_INIT_DEAGGR_n_SYSPIPE_ERR_DETECTION_SHFT,
+ IPA_ENDP_INIT_DEAGGR_n_SYSPIPE_ERR_DETECTION_BMSK);
+
+ IPA_SETFIELD_IN_REG(*val, ep_deaggr->packet_offset_valid,
+ IPA_ENDP_INIT_DEAGGR_n_PACKET_OFFSET_VALID_SHFT,
+ IPA_ENDP_INIT_DEAGGR_n_PACKET_OFFSET_VALID_BMSK);
+
+ IPA_SETFIELD_IN_REG(*val, ep_deaggr->packet_offset_location,
+ IPA_ENDP_INIT_DEAGGR_n_PACKET_OFFSET_LOCATION_SHFT,
+ IPA_ENDP_INIT_DEAGGR_n_PACKET_OFFSET_LOCATION_BMSK);
+
+ IPA_SETFIELD_IN_REG(*val, ep_deaggr->ignore_min_pkt_err,
+ IPA_ENDP_INIT_DEAGGR_n_IGNORE_MIN_PKT_ERR_SHFT,
+ IPA_ENDP_INIT_DEAGGR_n_IGNORE_MIN_PKT_ERR_BMSK);
+
+ IPA_SETFIELD_IN_REG(*val, ep_deaggr->max_packet_len,
+ IPA_ENDP_INIT_DEAGGR_n_MAX_PACKET_LEN_SHFT,
+ IPA_ENDP_INIT_DEAGGR_n_MAX_PACKET_LEN_BMSK);
+}
+
static void ipareg_construct_endp_init_hol_block_en_n(
enum ipahal_reg_name reg, const void *fields, u32 *val)
{
@@ -3167,7 +3198,7 @@ static struct ipahal_reg_obj ipahal_reg_objs[IPA_HW_MAX][IPA_REG_MAX] = {
ipareg_construct_endp_init_cfg_n, ipareg_parse_dummy,
0x00000808, 0x70, 0, 30, 1},
[IPA_HW_v4_5][IPA_ENDP_INIT_DEAGGR_n] = {
- ipareg_construct_endp_init_deaggr_n,
+ ipareg_construct_endp_init_deaggr_n_v4_5,
ipareg_parse_dummy,
0x00000834, 0x70, 0, 12, 1},
[IPA_HW_v4_5][IPA_ENDP_INIT_CTRL_n] = {
diff --git a/drivers/platform/msm/ipa/ipa_v3/ipahal/ipahal_reg_i.h b/drivers/platform/msm/ipa/ipa_v3/ipahal/ipahal_reg_i.h
index 44ecc90..cec3183 100644
--- a/drivers/platform/msm/ipa/ipa_v3/ipahal/ipahal_reg_i.h
+++ b/drivers/platform/msm/ipa/ipa_v3/ipahal/ipahal_reg_i.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
- * Copyright (c) 2012-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2012-2020, The Linux Foundation. All rights reserved.
*/
#ifndef _IPAHAL_REG_I_H_
@@ -215,10 +215,14 @@ int ipahal_reg_init(enum ipa_hw_type ipa_hw_type);
/* IPA_ENDP_INIT_DEAGGR_n register */
#define IPA_ENDP_INIT_DEAGGR_n_MAX_PACKET_LEN_BMSK 0xFFFF0000
#define IPA_ENDP_INIT_DEAGGR_n_MAX_PACKET_LEN_SHFT 0x10
+#define IPA_ENDP_INIT_DEAGGR_n_IGNORE_MIN_PKT_ERR_BMSK 0x4000
+#define IPA_ENDP_INIT_DEAGGR_n_IGNORE_MIN_PKT_ERR_SHFT 0xe
#define IPA_ENDP_INIT_DEAGGR_n_PACKET_OFFSET_LOCATION_BMSK 0x3F00
#define IPA_ENDP_INIT_DEAGGR_n_PACKET_OFFSET_LOCATION_SHFT 0x8
#define IPA_ENDP_INIT_DEAGGR_n_PACKET_OFFSET_VALID_BMSK 0x80
#define IPA_ENDP_INIT_DEAGGR_n_PACKET_OFFSET_VALID_SHFT 0x7
+#define IPA_ENDP_INIT_DEAGGR_n_SYSPIPE_ERR_DETECTION_BMSK 0x40
+#define IPA_ENDP_INIT_DEAGGR_n_SYSPIPE_ERR_DETECTION_SHFT 0x6
#define IPA_ENDP_INIT_DEAGGR_n_DEAGGR_HDR_LEN_BMSK 0x3F
#define IPA_ENDP_INIT_DEAGGR_n_DEAGGR_HDR_LEN_SHFT 0x0
diff --git a/drivers/platform/msm/ipa/ipa_v3/rmnet_ipa.c b/drivers/platform/msm/ipa/ipa_v3/rmnet_ipa.c
index 26511ed..cad2f4a 100644
--- a/drivers/platform/msm/ipa/ipa_v3/rmnet_ipa.c
+++ b/drivers/platform/msm/ipa/ipa_v3/rmnet_ipa.c
@@ -2100,7 +2100,7 @@ static void ipa3_wwan_setup(struct net_device *dev)
dev->flags &= ~(IFF_BROADCAST | IFF_MULTICAST);
dev->needed_headroom = HEADROOM_FOR_QMAP;
dev->needed_tailroom = TAILROOM;
- dev->watchdog_timeo = 1000;
+ dev->watchdog_timeo = msecs_to_jiffies(10000);
}
/**
@@ -2697,8 +2697,8 @@ static const struct of_device_id rmnet_ipa_dt_match[] = {
MODULE_DEVICE_TABLE(of, rmnet_ipa_dt_match);
static const struct dev_pm_ops rmnet_ipa_pm_ops = {
- .suspend = rmnet_ipa_ap_suspend,
- .resume_noirq = rmnet_ipa_ap_resume,
+ .suspend_late = rmnet_ipa_ap_suspend,
+ .resume_early = rmnet_ipa_ap_resume,
};
static struct platform_driver rmnet_ipa_driver = {
diff --git a/drivers/platform/msm/ipa/test/ipa_ut_framework.c b/drivers/platform/msm/ipa/test/ipa_ut_framework.c
index 0dfb8184..ba05c61 100644
--- a/drivers/platform/msm/ipa/test/ipa_ut_framework.c
+++ b/drivers/platform/msm/ipa/test/ipa_ut_framework.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
- * Copyright (c) 2017-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2017-2020, The Linux Foundation. All rights reserved.
*/
#include <linux/mutex.h>
@@ -922,7 +922,9 @@ static int ipa_ut_framework_init(void)
ipa_assert_on(!ipa_ut_ctx);
+#ifdef CONFIG_DEBUG_FS
ipa_ut_ctx->ipa_dbgfs_root = ipa_debugfs_get_root();
+#endif
if (!ipa_ut_ctx->ipa_dbgfs_root) {
IPA_UT_ERR("No IPA debugfs root entry\n");
return -EFAULT;
diff --git a/drivers/platform/msm/sps/spsi.h b/drivers/platform/msm/sps/spsi.h
index a2a9a846..45ad64b 100644
--- a/drivers/platform/msm/sps/spsi.h
+++ b/drivers/platform/msm/sps/spsi.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
- * Copyright (c) 2011-2018, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2011-2018, 2020, The Linux Foundation. All rights reserved.
*/
/**
* Smart-Peripheral-Switch (SPS) internal API.
@@ -212,13 +212,13 @@ extern u8 print_limit_option;
} \
} while (0)
#else
-#define SPS_DBG3(x...) pr_debug(x)
-#define SPS_DBG2(x...) pr_debug(x)
-#define SPS_DBG1(x...) pr_debug(x)
-#define SPS_DBG(x...) pr_debug(x)
-#define SPS_INFO(x...) pr_info(x)
-#define SPS_ERR(x...) pr_err(x)
-#define SPS_DUMP(x...) pr_info(x)
+#define SPS_DBG3(dev, msg, args...) pr_debug(msg, ##args)
+#define SPS_DBG2(dev, msg, args...) pr_debug(msg, ##args)
+#define SPS_DBG1(dev, msg, args...) pr_debug(msg, ##args)
+#define SPS_DBG(dev, msg, args...) pr_debug(msg, ##args)
+#define SPS_INFO(dev, msg, args...) pr_info(msg, ##args)
+#define SPS_ERR(dev, msg, args...) pr_err(msg, ##args)
+#define SPS_DUMP(msg, args...) pr_info(msg, ##args)
#endif
/* End point parameters */
diff --git a/drivers/power/supply/power_supply_sysfs.c b/drivers/power/supply/power_supply_sysfs.c
index f98116d..bcacfd9 100644
--- a/drivers/power/supply/power_supply_sysfs.c
+++ b/drivers/power/supply/power_supply_sysfs.c
@@ -482,6 +482,7 @@ static struct device_attribute power_supply_attrs[] = {
POWER_SUPPLY_ATTR(irq_status),
POWER_SUPPLY_ATTR(parallel_output_mode),
POWER_SUPPLY_ATTR(fg_type),
+ POWER_SUPPLY_ATTR(charger_status),
/* Local extensions of type int64_t */
POWER_SUPPLY_ATTR(charge_counter_ext),
/* Properties of type `const char *' */
diff --git a/drivers/power/supply/qcom/fg-core.h b/drivers/power/supply/qcom/fg-core.h
index 6bd4ae8..da5e919 100644
--- a/drivers/power/supply/qcom/fg-core.h
+++ b/drivers/power/supply/qcom/fg-core.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
- * Copyright (c) 2016-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2016-2020, The Linux Foundation. All rights reserved.
*/
#ifndef __FG_CORE_H__
@@ -175,6 +175,7 @@ enum fg_sram_param_id {
FG_SRAM_VBAT_FINAL,
FG_SRAM_IBAT_FINAL,
FG_SRAM_IBAT_FLT,
+ FG_SRAM_RCONN,
FG_SRAM_ESR,
FG_SRAM_ESR_MDL,
FG_SRAM_ESR_ACT,
diff --git a/drivers/power/supply/qcom/qg-soc.c b/drivers/power/supply/qcom/qg-soc.c
index 4071edcd..5aee222 100644
--- a/drivers/power/supply/qcom/qg-soc.c
+++ b/drivers/power/supply/qcom/qg-soc.c
@@ -486,8 +486,9 @@ static bool is_scaling_required(struct qpnp_qg *chip)
if (chip->catch_up_soc > chip->msoc && input_present &&
(chip->charge_status != POWER_SUPPLY_STATUS_CHARGING &&
- chip->charge_status != POWER_SUPPLY_STATUS_FULL))
- /* USB is present, but not charging */
+ chip->charge_status != POWER_SUPPLY_STATUS_FULL
+ && chip->msoc != 0))
+ /* USB is present, but not charging. Ignore when msoc = 0 */
return false;
return true;
diff --git a/drivers/power/supply/qcom/qg-util.c b/drivers/power/supply/qcom/qg-util.c
index 8a54554..170ca87 100644
--- a/drivers/power/supply/qcom/qg-util.c
+++ b/drivers/power/supply/qcom/qg-util.c
@@ -455,6 +455,15 @@ int qg_get_ibat_avg(struct qpnp_qg *chip, int *ibat_ua)
return rc;
}
+ if (last_ibat == FIFO_I_RESET_VAL) {
+ /* First FIFO is not complete, read instantaneous IBAT */
+ rc = qg_get_battery_current(chip, ibat_ua);
+ if (rc < 0)
+ pr_err("Failed to read inst. IBAT rc=%d\n", rc);
+
+ return rc;
+ }
+
last_ibat = sign_extend32(last_ibat, 15);
*ibat_ua = qg_iraw_to_ua(chip, last_ibat);
diff --git a/drivers/power/supply/qcom/qpnp-fg-gen4.c b/drivers/power/supply/qcom/qpnp-fg-gen4.c
index 7ae08c7..9c179be 100644
--- a/drivers/power/supply/qcom/qpnp-fg-gen4.c
+++ b/drivers/power/supply/qcom/qpnp-fg-gen4.c
@@ -395,6 +395,8 @@ static struct fg_sram_param pm8150b_v1_sram_params[] = {
0, NULL, fg_decode_voltage_15b),
PARAM(IBAT_FINAL, IBAT_FINAL_WORD, IBAT_FINAL_OFFSET, 2, 1000, 488282,
0, NULL, fg_decode_current_16b),
+ PARAM(RCONN, RCONN_WORD, RCONN_OFFSET, 2, 1000, 122070, 0,
+ fg_encode_default, fg_decode_value_16b),
PARAM(ESR, ESR_WORD, ESR_OFFSET, 2, 1000, 244141, 0, fg_encode_default,
fg_decode_value_16b),
PARAM(ESR_MDL, ESR_MDL_WORD, ESR_MDL_OFFSET, 2, 1000, 244141, 0,
@@ -495,6 +497,8 @@ static struct fg_sram_param pm8150b_v2_sram_params[] = {
0, NULL, fg_decode_current_16b),
PARAM(IBAT_FLT, IBAT_FLT_WORD, IBAT_FLT_OFFSET, 4, 10000, 19073, 0,
NULL, fg_decode_current_24b),
+ PARAM(RCONN, RCONN_WORD, RCONN_OFFSET, 2, 1000, 122070, 0,
+ fg_encode_default, fg_decode_value_16b),
PARAM(ESR, ESR_WORD, ESR_OFFSET, 2, 1000, 244141, 0, fg_encode_default,
fg_decode_value_16b),
PARAM(ESR_MDL, ESR_MDL_WORD, ESR_MDL_OFFSET, 2, 1000, 244141, 0,
@@ -5454,8 +5458,7 @@ static int fg_gen4_hw_init(struct fg_gen4_chip *chip)
}
if (!buf[0] && !buf[1]) {
- /* Rconn has same encoding as ESR */
- fg_encode(fg->sp, FG_SRAM_ESR, chip->dt.rconn_uohms,
+ fg_encode(fg->sp, FG_SRAM_RCONN, chip->dt.rconn_uohms,
buf);
rc = fg_sram_write(fg, RCONN_WORD, RCONN_OFFSET, buf, 2,
FG_IMA_DEFAULT);
@@ -6370,12 +6373,7 @@ static int fg_gen4_probe(struct platform_device *pdev)
/* Keep MEM_ATTN_IRQ disabled until we require it */
vote(chip->mem_attn_irq_en_votable, MEM_ATTN_IRQ_VOTER, false, 0);
- rc = fg_debugfs_create(fg);
- if (rc < 0) {
- dev_err(fg->dev, "Error in creating debugfs entries, rc:%d\n",
- rc);
- goto exit;
- }
+ fg_debugfs_create(fg);
rc = sysfs_create_groups(&fg->dev->kobj, fg_groups);
if (rc < 0) {
diff --git a/drivers/power/supply/qcom/qpnp-qg.c b/drivers/power/supply/qcom/qpnp-qg.c
index 6bab75d..0acae67 100644
--- a/drivers/power/supply/qcom/qpnp-qg.c
+++ b/drivers/power/supply/qcom/qpnp-qg.c
@@ -2028,6 +2028,7 @@ static int qg_reset(struct qpnp_qg *chip)
static int qg_setprop_batt_age_level(struct qpnp_qg *chip, int batt_age_level)
{
int rc = 0;
+ u16 data = 0;
if (!chip->dt.multi_profile_load)
return 0;
@@ -2053,6 +2054,13 @@ static int qg_setprop_batt_age_level(struct qpnp_qg *chip, int batt_age_level)
pr_err("error in storing batt_age_level rc =%d\n", rc);
}
+ /* Clear the learned capacity on loading a new profile */
+ rc = qg_sdam_multibyte_write(QG_SDAM_LEARNED_CAPACITY_OFFSET,
+ (u8 *)&data, 2);
+
+ if (rc < 0)
+ pr_err("Failed to clear SDAM learnt capacity rc=%d\n", rc);
+
qg_dbg(chip, QG_DEBUG_PROFILE, "Profile with batt_age_level = %d loaded\n",
chip->batt_age_level);
diff --git a/drivers/power/supply/qcom/qpnp-smb5.c b/drivers/power/supply/qcom/qpnp-smb5.c
index ca027b3..103ca3a 100644
--- a/drivers/power/supply/qcom/qpnp-smb5.c
+++ b/drivers/power/supply/qcom/qpnp-smb5.c
@@ -730,6 +730,46 @@ static int smb5_parse_dt_voltages(struct smb5 *chip, struct device_node *node)
return 0;
}
+static int smb5_parse_sdam(struct smb5 *chip, struct device_node *node)
+{
+ struct device_node *child;
+ struct smb_charger *chg = &chip->chg;
+ struct property *prop;
+ const char *name;
+ int rc;
+ u32 base;
+ u8 type;
+
+ for_each_available_child_of_node(node, child) {
+ of_property_for_each_string(child, "reg", prop, name) {
+ rc = of_property_read_u32(child, "reg", &base);
+ if (rc < 0) {
+ pr_err("Failed to read base rc=%d\n", rc);
+ return rc;
+ }
+
+ rc = smblib_read(chg, base + PERPH_TYPE_OFFSET, &type);
+ if (rc < 0) {
+ pr_err("Failed to read type rc=%d\n", rc);
+ return rc;
+ }
+
+ switch (type) {
+ case SDAM_TYPE:
+ chg->sdam_base = base;
+ break;
+ default:
+ break;
+ }
+ }
+ }
+
+ if (!chg->sdam_base)
+ pr_debug("SDAM node not defined\n");
+
+ return 0;
+}
+
static int smb5_parse_dt(struct smb5 *chip)
{
struct smb_charger *chg = &chip->chg;
@@ -757,6 +797,10 @@ static int smb5_parse_dt(struct smb5 *chip)
if (rc < 0)
return rc;
+ rc = smb5_parse_sdam(chip, node);
+ if (rc < 0)
+ return rc;
+
return 0;
}
@@ -828,6 +872,8 @@ static enum power_supply_property smb5_usb_props[] = {
POWER_SUPPLY_PROP_SKIN_HEALTH,
POWER_SUPPLY_PROP_APSD_RERUN,
POWER_SUPPLY_PROP_APSD_TIMEOUT,
+ POWER_SUPPLY_PROP_CHARGER_STATUS,
+ POWER_SUPPLY_PROP_INPUT_VOLTAGE_SETTLED,
};
static int smb5_usb_get_prop(struct power_supply *psy,
@@ -837,6 +883,7 @@ static int smb5_usb_get_prop(struct power_supply *psy,
struct smb5 *chip = power_supply_get_drvdata(psy);
struct smb_charger *chg = &chip->chg;
int rc = 0;
+ u8 reg = 0, buff[2] = {0};
val->intval = 0;
switch (psp) {
@@ -977,6 +1024,24 @@ static int smb5_usb_get_prop(struct power_supply *psy,
case POWER_SUPPLY_PROP_APSD_TIMEOUT:
val->intval = chg->apsd_ext_timeout;
break;
+ case POWER_SUPPLY_PROP_CHARGER_STATUS:
+ val->intval = 0;
+ if (chg->sdam_base) {
+ rc = smblib_read(chg,
+ chg->sdam_base + SDAM_QC_DET_STATUS_REG, &reg);
+ if (!rc)
+ val->intval = reg;
+ }
+ break;
+ case POWER_SUPPLY_PROP_INPUT_VOLTAGE_SETTLED:
+ val->intval = 0;
+ if (chg->sdam_base) {
+ rc = smblib_batch_read(chg,
+ chg->sdam_base + SDAM_QC_ADC_LSB_REG, buff, 2);
+ if (!rc)
+ val->intval = (buff[1] << 8 | buff[0]) * 1038;
+ }
+ break;
default:
pr_err("get prop %d is not supported in usb\n", psp);
rc = -EINVAL;
@@ -2576,11 +2641,23 @@ static int smb5_init_hw(struct smb5 *chip)
{
struct smb_charger *chg = &chip->chg;
int rc;
- u8 val = 0, mask = 0;
+ u8 val = 0, mask = 0, buf[2] = {0};
if (chip->dt.no_battery)
chg->fake_capacity = 50;
+ if (chg->sdam_base) {
+ rc = smblib_write(chg,
+ chg->sdam_base + SDAM_QC_DET_STATUS_REG, 0);
+ if (rc < 0)
+ pr_err("Couldn't clear SDAM QC status rc=%d\n", rc);
+
+ rc = smblib_batch_write(chg,
+ chg->sdam_base + SDAM_QC_ADC_LSB_REG, buf, 2);
+ if (rc < 0)
+ pr_err("Couldn't clear SDAM ADC status rc=%d\n", rc);
+ }
+
if (chip->dt.batt_profile_fcc_ua < 0)
smblib_get_charge_param(chg, &chg->param.fcc,
&chg->batt_profile_fcc_ua);
@@ -3673,6 +3750,9 @@ static void smb5_shutdown(struct platform_device *pdev)
/* disable all interrupts */
smb5_disable_interrupts(chg);
+ /* disable the SMB_EN configuration */
+ smblib_masked_write(chg, MISC_SMB_EN_CMD_REG, EN_CP_CMD_BIT, 0);
+
/* configure power role for UFP */
if (chg->connector_type == POWER_SUPPLY_CONNECTOR_TYPEC)
smblib_masked_write(chg, TYPE_C_MODE_CFG_REG,
diff --git a/drivers/power/supply/qcom/qpnp-smblite.c b/drivers/power/supply/qcom/qpnp-smblite.c
index c2867bc..5f8c714 100644
--- a/drivers/power/supply/qcom/qpnp-smblite.c
+++ b/drivers/power/supply/qcom/qpnp-smblite.c
@@ -11,6 +11,7 @@
#include <linux/regmap.h>
#include <linux/power_supply.h>
#include <linux/of.h>
+#include <linux/of_gpio.h>
#include <linux/of_irq.h>
#include <linux/log2.h>
#include <linux/qpnp/qpnp-revid.h>
@@ -432,6 +433,9 @@ static int smblite_usb_get_prop(struct power_supply *psy,
case POWER_SUPPLY_PROP_SCOPE:
rc = smblite_lib_get_prop_scope(chg, val);
break;
+ case POWER_SUPPLY_PROP_FLASH_TRIGGER:
+ rc = schgm_flashlite_get_vreg_ok(chg, &val->intval);
+ break;
default:
pr_err("get prop %d is not supported in usb\n", psp);
rc = -EINVAL;
@@ -525,6 +529,7 @@ static enum power_supply_property smblite_usb_main_props[] = {
POWER_SUPPLY_PROP_INPUT_VOLTAGE_SETTLED,
POWER_SUPPLY_PROP_FCC_DELTA,
POWER_SUPPLY_PROP_CURRENT_MAX,
+ POWER_SUPPLY_PROP_FLASH_TRIGGER,
};
static int smblite_usb_main_get_prop(struct power_supply *psy,
@@ -959,6 +964,13 @@ static int smblite_configure_typec(struct smb_charger *chg)
return rc;
}
+ rc = smblite_lib_write(chg, TYPE_C_INTERRUPT_EN_CFG_2_REG, 0);
+ if (rc < 0) {
+ dev_err(chg->dev,
+ "Couldn't configure Type-C interrupts rc=%d\n", rc);
+ return rc;
+ }
+
rc = smblite_lib_masked_write(chg, TYPE_C_MODE_CFG_REG,
EN_SNK_ONLY_BIT, 0);
if (rc < 0) {
@@ -968,17 +980,6 @@ static int smblite_configure_typec(struct smb_charger *chg)
return rc;
}
- /* Enable detection of unoriented debug accessory in source mode */
- rc = smblite_lib_masked_write(chg, DEBUG_ACCESS_SRC_CFG_REG,
- EN_UNORIENTED_DEBUG_ACCESS_SRC_BIT,
- EN_UNORIENTED_DEBUG_ACCESS_SRC_BIT);
- if (rc < 0) {
- dev_err(chg->dev,
- "Couldn't configure TYPE_C_DEBUG_ACCESS_SRC_CFG_REG rc=%d\n",
- rc);
- return rc;
- }
-
rc = smblite_lib_masked_write(chg, TYPE_C_EXIT_STATE_CFG_REG,
SEL_SRC_UPPER_REF_BIT, SEL_SRC_UPPER_REF_BIT);
if (rc < 0)
@@ -1156,6 +1157,28 @@ static int smblite_init_connector_type(struct smb_charger *chg)
return 0;
}
+static int smblite_init_otg(struct smblite *chip)
+{
+ struct smb_charger *chg = &chip->chg;
+
+ chg->usb_id_gpio = chg->usb_id_irq = -EINVAL;
+
+ if (chg->connector_type == POWER_SUPPLY_CONNECTOR_TYPEC)
+ return 0;
+
+ if (of_find_property(chg->dev->of_node, "qcom,usb-id-gpio", NULL))
+ chg->usb_id_gpio = of_get_named_gpio(chg->dev->of_node,
+ "qcom,usb-id-gpio", 0);
+
+ chg->usb_id_irq = of_irq_get_byname(chg->dev->of_node,
+ "usb_id_irq");
+ if (chg->usb_id_irq < 0 || chg->usb_id_gpio < 0)
+ pr_err("OTG irq (%d) / gpio (%d) not defined\n",
+ chg->usb_id_irq, chg->usb_id_gpio);
+
+ return 0;
+}
+
static int smblite_init_hw(struct smblite *chip)
{
struct smb_charger *chg = &chip->chg;
@@ -1185,6 +1208,12 @@ static int smblite_init_hw(struct smblite *chip)
return rc;
}
+ rc = smblite_init_otg(chip);
+ if (rc < 0) {
+ dev_err(chg->dev, "Couldn't init otg rc=%d\n", rc);
+ return rc;
+ }
+
rc = schgm_flashlite_init(chg);
if (rc < 0) {
pr_err("Couldn't configure flash rc=%d\n", rc);
@@ -1393,6 +1422,10 @@ static int smblite_determine_initial_status(struct smblite *chip)
smblite_wdog_bark_irq_handler(0, &irq_data);
smblite_typec_or_rid_detection_change_irq_handler(0, &irq_data);
+ if (chg->usb_id_gpio > 0 &&
+ chg->connector_type == POWER_SUPPLY_CONNECTOR_MICRO_USB)
+ smblite_usb_id_irq_handler(0, chg);
+
return 0;
}
@@ -1644,6 +1677,22 @@ static int smblite_request_interrupts(struct smblite *chip)
}
}
+ /* register the USB-id irq */
+ if (chg->usb_id_irq > 0 && chg->usb_id_gpio > 0) {
+ rc = devm_request_threaded_irq(chg->dev,
+ chg->usb_id_irq, NULL,
+ smblite_usb_id_irq_handler,
+ IRQF_ONESHOT
+ | IRQF_TRIGGER_FALLING
+ | IRQF_TRIGGER_RISING,
+ "smblite_id_irq", chg);
+ if (rc < 0) {
+ pr_err("Failed to register id-irq rc=%d\n", rc);
+ return rc;
+ }
+ enable_irq_wake(chg->usb_id_irq);
+ }
+
return rc;
}
@@ -1658,6 +1707,11 @@ static void smblite_disable_interrupts(struct smb_charger *chg)
disable_irq(smblite_irqs[i].irq);
}
}
+
+ if (chg->usb_id_irq > 0 && chg->usb_id_gpio > 0) {
+ disable_irq_wake(chg->usb_id_irq);
+ disable_irq(chg->usb_id_irq);
+ }
}
#if defined(CONFIG_DEBUG_FS)
@@ -1753,11 +1807,11 @@ static int smblite_show_charger_status(struct smblite *chip)
}
batt_charge_type = val.intval;
- pr_info("SMBLITE: Mode=%s Conn=%s USB Present=%d Batt preset=%d health=%d charge=%d\n",
+ pr_info("SMBLITE: Mode=%s Conn=%s USB Present=%d Battery present=%d health=%d charge=%d\n",
chg->ldo_mode ? "LDO" : "SMBC",
(chg->connector_type == POWER_SUPPLY_CONNECTOR_TYPEC) ?
- "TYPEC" : "uUSB", batt_present, batt_health,
- batt_charge_type);
+ "TYPEC" : "uUSB", usb_present, batt_present,
+ batt_health, batt_charge_type);
return rc;
}
diff --git a/drivers/power/supply/qcom/schgm-flashlite.c b/drivers/power/supply/qcom/schgm-flashlite.c
index 78f3ef3..37ecb83 100644
--- a/drivers/power/supply/qcom/schgm-flashlite.c
+++ b/drivers/power/supply/qcom/schgm-flashlite.c
@@ -85,13 +85,6 @@ static void schgm_flashlite_parse_dt(struct smb_charger *chg)
if (IS_BETWEEN(0, 100, val))
chg->flash_disable_soc = (val * 255) / 100;
}
-
- chg->headroom_mode = -EINVAL;
- rc = of_property_read_u32(node, "qcom,headroom-mode", &val);
- if (!rc) {
- if (IS_BETWEEN(FIXED_MODE, ADAPTIVE_MODE, val))
- chg->headroom_mode = val;
- }
}
bool is_flashlite_active(struct smb_charger *chg)
@@ -151,13 +144,6 @@ void schgm_flashlite_torch_priority(struct smb_charger *chg,
int rc;
u8 reg;
- /*
- * If torch is configured in default BOOST mode, skip any update in the
- * mode configuration.
- */
- if (chg->headroom_mode == FIXED_MODE)
- return;
-
if ((mode != TORCH_BOOST_MODE) && (mode != TORCH_BUCK_MODE))
return;
@@ -209,31 +195,6 @@ int schgm_flashlite_init(struct smb_charger *chg)
}
}
- if (chg->headroom_mode != -EINVAL) {
- /*
- * configure headroom management policy for
- * flash and torch mode.
- */
- reg = (chg->headroom_mode == FIXED_MODE)
- ? FORCE_FLASH_BOOST_5V_BIT : 0;
- rc = smblite_lib_write(chg, SCHGM_FORCE_BOOST_CONTROL, reg);
- if (rc < 0) {
- pr_err("Couldn't write force boost control reg rc=%d\n",
- rc);
- return rc;
- }
-
- reg = (chg->headroom_mode == FIXED_MODE)
- ? TORCH_PRIORITY_CONTROL_BIT : 0;
- rc = smblite_lib_write(chg,
- SCHGM_TORCH_PRIORITY_CONTROL_REG, reg);
- if (rc < 0) {
- pr_err("Couldn't force 5V boost in torch mode rc=%d\n",
- rc);
- return rc;
- }
- }
-
if ((chg->flash_derating_soc != -EINVAL)
|| (chg->flash_disable_soc != -EINVAL)) {
/* Check if SOC based derating/disable is enabled */
diff --git a/drivers/power/supply/qcom/schgm-flashlite.h b/drivers/power/supply/qcom/schgm-flashlite.h
index d43e545..a994a42 100644
--- a/drivers/power/supply/qcom/schgm-flashlite.h
+++ b/drivers/power/supply/qcom/schgm-flashlite.h
@@ -21,9 +21,6 @@
#define SCHGM_FLASH_STATUS_5_REG (SCHGM_FLASH_BASE + 0x0B)
-#define SCHGM_FORCE_BOOST_CONTROL (SCHGM_FLASH_BASE + 0x41)
-#define FORCE_FLASH_BOOST_5V_BIT BIT(0)
-
#define SCHGM_FLASH_S2_LATCH_RESET_CMD_REG (SCHGM_FLASH_BASE + 0x44)
#define FLASH_S2_LATCH_RESET_BIT BIT(0)
diff --git a/drivers/power/supply/qcom/smb1398-charger.c b/drivers/power/supply/qcom/smb1398-charger.c
index 9999181..2c2e339 100644
--- a/drivers/power/supply/qcom/smb1398-charger.c
+++ b/drivers/power/supply/qcom/smb1398-charger.c
@@ -205,9 +205,8 @@
#define TAPER_MAIN_ICL_LIMIT_VOTER "TAPER_MAIN_ICL_LIMIT_VOTER"
/* Constant definitions */
-/* Need to define max ILIM for smb1398 */
-#define DIV2_MAX_ILIM_UA 3200000
-#define DIV2_MAX_ILIM_DUAL_CP_UA 6400000
+#define DIV2_MAX_ILIM_UA 5000000
+#define DIV2_MAX_ILIM_DUAL_CP_UA 10000000
#define DIV2_ILIM_CFG_PCT 105
#define TAPER_STEPPER_UA_DEFAULT 100000
@@ -1625,7 +1624,7 @@ static void smb1398_status_change_work(struct work_struct *work)
* valid due to the battery discharging later, remove
* vote from CUTOFF_SOC_VOTER.
*/
- if (is_cutoff_soc_reached(chip))
+ if (!is_cutoff_soc_reached(chip))
vote(chip->div2_cp_disable_votable, CUTOFF_SOC_VOTER, false, 0);
rc = power_supply_get_property(chip->usb_psy,
@@ -1947,7 +1946,7 @@ static int smb1398_div2_cp_parse_dt(struct smb1398_chip *chip)
return rc;
}
- chip->div2_cp_min_ilim_ua = 1000000;
+ chip->div2_cp_min_ilim_ua = 750000;
of_property_read_u32(chip->dev->of_node, "qcom,div2-cp-min-ilim-ua",
&chip->div2_cp_min_ilim_ua);
diff --git a/drivers/power/supply/qcom/smb5-lib.c b/drivers/power/supply/qcom/smb5-lib.c
index ea8e826..651d212 100644
--- a/drivers/power/supply/qcom/smb5-lib.c
+++ b/drivers/power/supply/qcom/smb5-lib.c
@@ -1963,13 +1963,14 @@ static bool is_charging_paused(struct smb_charger *chg)
return val & CHARGING_PAUSE_CMD_BIT;
}
+#define CUTOFF_COUNT 3
int smblib_get_prop_batt_status(struct smb_charger *chg,
union power_supply_propval *val)
{
union power_supply_propval pval = {0, };
bool usb_online, dc_online;
u8 stat;
- int rc, suspend = 0;
+ int rc, suspend = 0, input_present = 0;
if (chg->fake_chg_status_on_debug_batt) {
rc = smblib_get_prop_from_bms(chg,
@@ -1983,6 +1984,44 @@ int smblib_get_prop_batt_status(struct smb_charger *chg,
}
}
+ rc = smblib_get_prop_batt_health(chg, &pval);
+ if (rc < 0) {
+ smblib_err(chg, "Couldn't get batt health rc=%d\n", rc);
+ return rc;
+ }
+ /*
+ * When an over-voltage condition is hit, the charger status register
+ * reports charging even though the battery is discharging. Report the
+ * power supply state as NOT_CHARGING when battery health reports
+ * over-voltage.
+ */
+ if (pval.intval == POWER_SUPPLY_HEALTH_OVERVOLTAGE) {
+ val->intval = POWER_SUPPLY_STATUS_NOT_CHARGING;
+ return 0;
+ }
+
+ /*
+ * If SOC = 0 and we are discharging with input connected, report
+ * the battery status as DISCHARGING.
+ */
+ smblib_is_input_present(chg, &input_present);
+ rc = smblib_get_prop_from_bms(chg, POWER_SUPPLY_PROP_CAPACITY, &pval);
+ if (!rc && pval.intval == 0 && input_present) {
+ rc = smblib_get_prop_from_bms(chg,
+ POWER_SUPPLY_PROP_CURRENT_NOW, &pval);
+ if (!rc && pval.intval > 0) {
+ if (chg->cutoff_count > CUTOFF_COUNT) {
+ val->intval = POWER_SUPPLY_STATUS_DISCHARGING;
+ return 0;
+ }
+ chg->cutoff_count++;
+ } else {
+ chg->cutoff_count = 0;
+ }
+ } else {
+ chg->cutoff_count = 0;
+ }
+
if (chg->dbc_usbov) {
rc = smblib_get_prop_usb_present(chg, &pval);
if (rc < 0) {
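The SOC-cutoff handling added above is a consecutive-sample debounce: the DISCHARGING override is reported only after the "SOC is 0, input present, current still positive" condition has held for more than CUTOFF_COUNT samples in a row, and any break in the streak resets the counter (kept in struct smb_charger as cutoff_count). A stripped-down sketch of the same idea, using a hypothetical helper name:

#include <linux/types.h>

#define CUTOFF_COUNT 3

/*
 * Returns true only once 'condition' has held on more than CUTOFF_COUNT
 * consecutive calls; any call where it does not hold resets the streak.
 */
static bool cutoff_debounced(bool condition, int *count)
{
	if (!condition) {
		*count = 0;
		return false;
	}
	if (*count > CUTOFF_COUNT)
		return true;
	(*count)++;
	return false;
}

This keeps a single transient ADC reading at an empty battery from flipping the reported status.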
@@ -6125,6 +6164,7 @@ static void typec_src_removal(struct smb_charger *chg)
struct smb_irq_data *data;
struct storm_watch *wdata;
int sec_charger;
+ u8 val[2] = {0};
sec_charger = chg->sec_pl_present ? POWER_SUPPLY_CHARGER_SEC_PL :
POWER_SUPPLY_CHARGER_SEC_NONE;
@@ -6215,6 +6255,18 @@ static void typec_src_removal(struct smb_charger *chg)
smblib_err(chg, "Couldn't write float charger options rc=%d\n",
rc);
+ if (chg->sdam_base) {
+ rc = smblib_write(chg,
+ chg->sdam_base + SDAM_QC_DET_STATUS_REG, 0);
+ if (rc < 0)
+ pr_err("Couldn't clear SDAM QC status rc=%d\n", rc);
+
+ rc = smblib_batch_write(chg,
+ chg->sdam_base + SDAM_QC_ADC_LSB_REG, val, 2);
+ if (rc < 0)
+ pr_err("Couldn't clear SDAM ADC status rc=%d\n", rc);
+ }
+
if (!chg->pr_swap_in_progress) {
rc = smblib_usb_pd_adapter_allowance_override(chg, FORCE_NULL);
if (rc < 0)
diff --git a/drivers/power/supply/qcom/smb5-lib.h b/drivers/power/supply/qcom/smb5-lib.h
index 13b7cef..a80982b 100644
--- a/drivers/power/supply/qcom/smb5-lib.h
+++ b/drivers/power/supply/qcom/smb5-lib.h
@@ -385,6 +385,7 @@ struct smb_charger {
struct smb_chg_freq chg_freq;
int otg_delay_ms;
int weak_chg_icl_ua;
+ u32 sdam_base;
bool pd_not_supported;
/* locks */
@@ -574,6 +575,7 @@ struct smb_charger {
int init_thermal_ua;
u32 comp_clamp_level;
int wls_icl_ua;
+ int cutoff_count;
bool dcin_aicl_done;
bool hvdcp3_standalone_config;
bool dcin_icl_user_set;
diff --git a/drivers/power/supply/qcom/smb5-reg.h b/drivers/power/supply/qcom/smb5-reg.h
index a5fe691..d77c8cb 100644
--- a/drivers/power/supply/qcom/smb5-reg.h
+++ b/drivers/power/supply/qcom/smb5-reg.h
@@ -22,6 +22,7 @@
#define PERPH_SUBTYPE_OFFSET 0x05
#define SUBTYPE_MASK GENMASK(7, 0)
#define INT_RT_STS_OFFSET 0x10
+#define SDAM_TYPE 0x2E
/********************************
* CHGR Peripheral Registers *
@@ -549,4 +550,8 @@ enum {
/* SDAM regs */
#define MISC_PBS_RT_STS_REG (MISC_PBS_BASE + 0x10)
#define PULSE_SKIP_IRQ_BIT BIT(4)
+
+#define SDAM_QC_DET_STATUS_REG 0x58
+#define SDAM_QC_ADC_LSB_REG 0x54
+
#endif /* __SMB5_CHARGER_REG_H */
diff --git a/drivers/power/supply/qcom/smblite-lib.c b/drivers/power/supply/qcom/smblite-lib.c
index a46c3e0..5c127f0 100644
--- a/drivers/power/supply/qcom/smblite-lib.c
+++ b/drivers/power/supply/qcom/smblite-lib.c
@@ -8,6 +8,7 @@
#include <linux/delay.h>
#include <linux/power_supply.h>
#include <linux/qpnp/qpnp-revid.h>
+#include <linux/gpio.h>
#include <linux/irq.h>
#include <linux/iio/consumer.h>
#include <linux/pmic-voter.h>
@@ -15,10 +16,9 @@
#include <linux/ktime.h>
#include "smblite-lib.h"
#include "smblite-reg.h"
-#include "schgm-flash.h"
#include "step-chg-jeita.h"
#include "storm-watch.h"
-#include "schgm-flash.h"
+#include "schgm-flashlite.h"
#define smblite_lib_err(chg, fmt, ...) \
pr_err("%s: %s: " fmt, chg->name, \
@@ -432,7 +432,7 @@ static void smblite_lib_uusb_removal(struct smb_charger *chg)
vote(chg->pl_enable_votable_indirect, USBIN_I_VOTER, false, 0);
vote(chg->pl_enable_votable_indirect, USBIN_V_VOTER, false, 0);
vote(chg->usb_icl_votable, SW_ICL_MAX_VOTER, true,
- is_flash_active(chg) ? USBIN_500UA : USBIN_100UA);
+ is_flashlite_active(chg) ? USBIN_500UA : USBIN_100UA);
/* Remove SW thermal regulation votes */
vote(chg->usb_icl_votable, SW_THERM_REGULATION_VOTER, false, 0);
@@ -521,7 +521,7 @@ int smblite_lib_set_icl_current(struct smb_charger *chg, int icl_ua)
/* suspend if 25mA or less is requested */
bool suspend = (icl_ua <= USBIN_25UA);
- schgm_flash_torch_priority(chg, suspend ? TORCH_BOOST_MODE :
+ schgm_flashlite_torch_priority(chg, suspend ? TORCH_BOOST_MODE :
TORCH_BUCK_MODE);
/* Do not configure ICL from SW for DAM */
if (smblite_lib_get_prop_typec_mode(chg) ==
@@ -720,13 +720,14 @@ static bool is_charging_paused(struct smb_charger *chg)
return val & CHARGING_PAUSE_CMD_BIT;
}
+#define CUTOFF_COUNT 3
int smblite_lib_get_prop_batt_status(struct smb_charger *chg,
union power_supply_propval *val)
{
union power_supply_propval pval = {0, };
bool usb_online;
u8 stat;
- int rc;
+ int rc, input_present = 0;
if (chg->fake_chg_status_on_debug_batt) {
rc = smblite_lib_get_prop_from_bms(chg,
@@ -740,6 +741,29 @@ int smblite_lib_get_prop_batt_status(struct smb_charger *chg,
}
}
+ /*
+ * If SOC = 0 and we are discharging with input connected, report
+ * the battery status as DISCHARGING.
+ */
+ smblite_lib_is_input_present(chg, &input_present);
+ rc = smblite_lib_get_prop_from_bms(chg,
+ POWER_SUPPLY_PROP_CAPACITY, &pval);
+ if (!rc && pval.intval == 0 && input_present) {
+ rc = smblite_lib_get_prop_from_bms(chg,
+ POWER_SUPPLY_PROP_CURRENT_NOW, &pval);
+ if (!rc && pval.intval > 0) {
+ if (chg->cutoff_count > CUTOFF_COUNT) {
+ val->intval = POWER_SUPPLY_STATUS_DISCHARGING;
+ return 0;
+ }
+ chg->cutoff_count++;
+ } else {
+ chg->cutoff_count = 0;
+ }
+ } else {
+ chg->cutoff_count = 0;
+ }
+
rc = smblite_lib_get_prop_usb_online(chg, &pval);
if (rc < 0) {
smblite_lib_err(chg, "Couldn't get usb online property rc=%d\n",
@@ -1992,7 +2016,7 @@ irqreturn_t smblite_usbin_uv_irq_handler(int irq, void *data)
unsuspend_input:
/* Force torch in boost mode to ensure it works with low ICL */
- schgm_flash_torch_priority(chg, TORCH_BOOST_MODE);
+ schgm_flashlite_torch_priority(chg, TORCH_BOOST_MODE);
if (chg->aicl_max_reached) {
smblite_lib_dbg(chg, PR_MISC,
@@ -2185,7 +2209,7 @@ static void update_sw_icl_max(struct smb_charger *chg,
USB_PSY_VOTER)) {
/* if flash is active force 500mA */
vote(chg->usb_icl_votable, USB_PSY_VOTER, true,
- is_flash_active(chg) ?
+ is_flashlite_active(chg) ?
USBIN_500UA : USBIN_100UA);
}
vote(chg->usb_icl_votable, SW_ICL_MAX_VOTER, false, 0);
@@ -2491,7 +2515,7 @@ static void typec_src_removal(struct smb_charger *chg)
/* reset input current limit voters */
vote(chg->usb_icl_votable, SW_ICL_MAX_VOTER, true,
- is_flash_active(chg) ? USBIN_500UA : USBIN_100UA);
+ is_flashlite_active(chg) ? USBIN_500UA : USBIN_100UA);
vote(chg->usb_icl_votable, USB_PSY_VOTER, false, 0);
/* reset parallel voters */
@@ -2772,6 +2796,21 @@ irqreturn_t smblite_usbin_ov_irq_handler(int irq, void *data)
return IRQ_HANDLED;
}
+irqreturn_t smblite_usb_id_irq_handler(int irq, void *data)
+{
+ struct smb_charger *chg = data;
+ bool id_state;
+
+ id_state = gpio_get_value(chg->usb_id_gpio);
+
+ smblite_lib_dbg(chg, PR_INTERRUPT, "IRQ: %s, id_state=%d\n",
+ "usb-id-irq", id_state);
+
+ smblite_lib_notify_usb_host(chg, !id_state);
+
+ return IRQ_HANDLED;
+}
+
/***************
* Work Queues *
***************/
@@ -2853,23 +2892,23 @@ static void smblite_lib_thermal_regulation_work(struct work_struct *work)
}
if (stat & DIE_TEMP_UB_BIT) {
- icl_ua = get_effective_result(chg->usb_icl_votable)
- - THERM_REGULATION_STEP_UA;
-
- /* Decrement ICL by one step */
- vote(chg->usb_icl_votable, SW_THERM_REGULATION_VOTER,
- true, icl_ua - THERM_REGULATION_STEP_UA);
-
/* Check if we reached minimum ICL limit */
if (icl_ua < USBIN_500UA + THERM_REGULATION_STEP_UA)
goto exit;
+ /* Decrement ICL by one step */
+ icl_ua -= THERM_REGULATION_STEP_UA;
+ vote(chg->usb_icl_votable, SW_THERM_REGULATION_VOTER,
+ true, icl_ua);
+
goto reschedule;
}
- if (stat & DIE_TEMP_LB_BIT) {
+ /* check if DIE_TEMP is below LB */
+ if (!(stat & DIE_TEMP_MASK)) {
+ icl_ua += THERM_REGULATION_STEP_UA;
vote(chg->usb_icl_votable, SW_THERM_REGULATION_VOTER,
- true, icl_ua + THERM_REGULATION_STEP_UA);
+ true, icl_ua);
/*
* Check if we need further increments:
diff --git a/drivers/power/supply/qcom/smblite-lib.h b/drivers/power/supply/qcom/smblite-lib.h
index 4e5208d..0d6e8c3 100644
--- a/drivers/power/supply/qcom/smblite-lib.h
+++ b/drivers/power/supply/qcom/smblite-lib.h
@@ -144,12 +144,9 @@ enum smb_irq_index {
VREG_OK_IRQ,
ILIM_S2_IRQ,
ILIM_S1_IRQ,
- VOUT_DOWN_IRQ,
- VOUT_UP_IRQ,
FLASH_STATE_CHANGE_IRQ,
TORCH_REQ_IRQ,
FLASH_EN_IRQ,
- SDAM_STS_IRQ,
/* END */
SMB_IRQ_MAX,
};
@@ -310,9 +307,12 @@ struct smb_charger {
int jeita_soft_fv[2];
int aicl_5v_threshold_mv;
int default_aicl_5v_threshold_mv;
+ int cutoff_count;
bool aicl_max_reached;
bool pr_swap_in_progress;
bool ldo_mode;
+ int usb_id_gpio;
+ int usb_id_irq;
/* workaround flag */
u32 wa_flags;
@@ -365,6 +365,7 @@ irqreturn_t smblite_typec_or_rid_detection_change_irq_handler(int irq,
void *data);
irqreturn_t smblite_temp_change_irq_handler(int irq, void *data);
irqreturn_t smblite_usbin_ov_irq_handler(int irq, void *data);
+irqreturn_t smblite_usb_id_irq_handler(int irq, void *data);
int smblite_lib_get_prop_input_suspend(struct smb_charger *chg,
union power_supply_propval *val);
diff --git a/drivers/power/supply/qcom/smblite-reg.h b/drivers/power/supply/qcom/smblite-reg.h
index 199922e..361d0c2f 100644
--- a/drivers/power/supply/qcom/smblite-reg.h
+++ b/drivers/power/supply/qcom/smblite-reg.h
@@ -266,6 +266,7 @@ enum {
#define THERMREG_DISABLED_BIT BIT(0)
#define DIE_TEMP_STATUS_REG (MISC_BASE + 0x09)
+#define DIE_TEMP_MASK GENMASK(3, 0)
#define DIE_TEMP_SHDN_BIT BIT(3)
#define DIE_TEMP_RST_BIT BIT(2)
#define DIE_TEMP_UB_BIT BIT(1)
diff --git a/drivers/regulator/rpm-smd-regulator.c b/drivers/regulator/rpm-smd-regulator.c
index 9bc11fe..044a4d6 100644
--- a/drivers/regulator/rpm-smd-regulator.c
+++ b/drivers/regulator/rpm-smd-regulator.c
@@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0-only
-/* Copyright (c) 2012-2015, 2018-2019, The Linux Foundation. All rights reserved. */
+/* Copyright (c) 2012-2015, 2018-2020, The Linux Foundation. All rights reserved. */
#define pr_fmt(fmt) "%s: " fmt, __func__
@@ -28,7 +28,10 @@ enum {
};
static int rpm_vreg_debug_mask;
+
+#ifdef CONFIG_DEBUG_FS
static bool is_debugfs_created;
+#endif
#define vreg_err(req, fmt, ...) \
pr_err("%s: " fmt, req->rdesc.name, ##__VA_ARGS__)
@@ -1661,6 +1664,7 @@ static int rpm_vreg_device_set_voltage_index(struct device *dev,
return rc;
}
+#ifdef CONFIG_DEBUG_FS
static void rpm_vreg_create_debugfs(struct rpm_regulator *reg)
{
struct dentry *entry;
@@ -1682,6 +1686,7 @@ static void rpm_vreg_create_debugfs(struct rpm_regulator *reg)
is_debugfs_created = true;
}
}
+#endif
/*
* This probe is called for child rpm-regulator devices which have
diff --git a/drivers/rpmsg/qcom_glink_native.c b/drivers/rpmsg/qcom_glink_native.c
index 514c542..4e127b2 100644
--- a/drivers/rpmsg/qcom_glink_native.c
+++ b/drivers/rpmsg/qcom_glink_native.c
@@ -1,7 +1,7 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2016-2017, Linaro Ltd
- * Copyright (c) 2018-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
*/
#include <linux/idr.h>
@@ -121,6 +121,8 @@ struct glink_core_rx_intent {
* @in_reset: reset status of this edge
* @features: remote features
* @intentless: flag to indicate that there is no intent
+ * @tx_avail_notify: waitqueue for writers waiting for tx fifo space
+ * @sent_read_notify: flag to indicate a read-notify cmd has already been sent
* @ilc: ipc logging context reference
*/
struct qcom_glink {
@@ -154,6 +156,9 @@ struct qcom_glink {
bool intentless;
+ wait_queue_head_t tx_avail_notify;
+ bool sent_read_notify;
+
void *ilc;
};
@@ -357,6 +362,22 @@ static void qcom_glink_pipe_reset(struct qcom_glink *glink)
glink->rx_pipe->reset(glink->rx_pipe);
}
+static void qcom_glink_send_read_notify(struct qcom_glink *glink)
+{
+ struct glink_msg msg;
+
+ msg.cmd = cpu_to_le16(RPM_CMD_READ_NOTIF);
+ msg.param1 = 0;
+ msg.param2 = 0;
+
+ GLINK_INFO(glink->ilc, "send READ NOTIFY cmd\n");
+
+ qcom_glink_tx_write(glink, &msg, sizeof(msg), NULL, 0);
+
+ mbox_send_message(glink->mbox_chan, NULL);
+ mbox_client_txdone(glink->mbox_chan, 0);
+}
+
static int qcom_glink_tx(struct qcom_glink *glink,
const void *hdr, size_t hlen,
const void *data, size_t dlen, bool wait)
@@ -380,17 +401,27 @@ static int qcom_glink_tx(struct qcom_glink *glink,
goto out;
}
- if (atomic_read(&glink->in_reset)) {
- ret = -ECONNRESET;
- goto out;
+ if (!glink->sent_read_notify) {
+ glink->sent_read_notify = true;
+ qcom_glink_send_read_notify(glink);
}
/* Wait without holding the tx_lock */
spin_unlock_irqrestore(&glink->tx_lock, flags);
- usleep_range(10000, 15000);
+ wait_event_timeout(glink->tx_avail_notify,
+ (qcom_glink_tx_avail(glink) >= tlen
+ || atomic_read(&glink->in_reset)), 10 * HZ);
spin_lock_irqsave(&glink->tx_lock, flags);
+
+ if (atomic_read(&glink->in_reset)) {
+ ret = -ECONNRESET;
+ goto out;
+ }
+
+ if (qcom_glink_tx_avail(glink) >= tlen)
+ glink->sent_read_notify = false;
}
qcom_glink_tx_write(glink, hdr, hlen, data, dlen);
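The change above swaps the fixed usleep_range() poll for the usual waitqueue handshake: the writer asks the remote for a READ_NOTIF once per stall, sleeps until FIFO space frees up or the edge resets, and the interrupt path wakes every sleeper. Reduced to its essentials (hypothetical names, not the glink API):

#include <linux/atomic.h>
#include <linux/errno.h>
#include <linux/jiffies.h>
#include <linux/types.h>
#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(demo_space_wq);
static atomic_t demo_in_reset = ATOMIC_INIT(0);

/* Writer side: block (with a timeout) until there is room or a reset. */
static int demo_wait_for_space(size_t needed, size_t (*avail)(void))
{
	long left;

	left = wait_event_timeout(demo_space_wq,
				  avail() >= needed ||
				  atomic_read(&demo_in_reset),
				  10 * HZ);
	if (atomic_read(&demo_in_reset))
		return -ECONNRESET;
	if (!left && avail() < needed)
		return -ETIMEDOUT;
	return 0;
}

/* IRQ/reset side: whenever space may have been freed, wake the writers. */
static void demo_space_freed(void)
{
	wake_up_all(&demo_space_wq);
}

The real code additionally re-takes tx_lock before re-checking the FIFO and clears sent_read_notify only once enough space is actually available, so a single READ_NOTIF is outstanding per stall.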
@@ -1158,6 +1189,9 @@ static irqreturn_t qcom_glink_native_intr(int irq, void *data)
unsigned int cmd;
int ret = 0;
+ /* Wake up any writers blocked waiting for tx fifo space */
+ wake_up_all(&glink->tx_avail_notify);
+
for (;;) {
avail = qcom_glink_rx_avail(glink);
if (avail < sizeof(msg))
@@ -1904,6 +1938,9 @@ static void qcom_glink_notif_reset(void *data)
return;
atomic_inc(&glink->in_reset);
+ /* Wake up any writers blocked waiting for tx fifo space */
+ wake_up_all(&glink->tx_avail_notify);
+
spin_lock_irqsave(&glink->idr_lock, flags);
idr_for_each_entry(&glink->lcids, channel, cid) {
wake_up(&channel->intent_req_event);
@@ -1952,6 +1989,7 @@ struct qcom_glink *qcom_glink_native_probe(struct device *dev,
spin_lock_init(&glink->rx_lock);
INIT_LIST_HEAD(&glink->rx_queue);
INIT_WORK(&glink->rx_work, qcom_glink_work);
+ init_waitqueue_head(&glink->tx_avail_notify);
spin_lock_init(&glink->idr_lock);
idr_init(&glink->lcids);
diff --git a/drivers/rtc/rtc-pm8xxx.c b/drivers/rtc/rtc-pm8xxx.c
index 9f8cbbd..d38f38d 100644
--- a/drivers/rtc/rtc-pm8xxx.c
+++ b/drivers/rtc/rtc-pm8xxx.c
@@ -22,6 +22,7 @@
/* RTC_CTRL register bit fields */
#define PM8xxx_RTC_ENABLE BIT(7)
#define PM8xxx_RTC_ALARM_CLEAR BIT(0)
+#define PM8xxx_RTC_ALARM_ENABLE BIT(7)
#define NUM_8_BIT_RTC_REGS 0x4
@@ -297,6 +298,14 @@ static int pm8xxx_rtc_read_alarm(struct device *dev, struct rtc_wkalrm *alarm)
alarm->time.tm_sec, alarm->time.tm_mday,
alarm->time.tm_mon, alarm->time.tm_year);
+ rc = regmap_bulk_read(rtc_dd->regmap, regs->alarm_ctrl, value, 1);
+ if (rc) {
+ dev_err(dev, "Read from ALARM CTRL1 failed\n");
+ return rc;
+ }
+
+ alarm->enabled = !!(value[0] & PM8xxx_RTC_ALARM_ENABLE);
+
return 0;
}
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index f86a6be..54e74ca 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -2261,8 +2261,6 @@ void __scsi_init_queue(struct Scsi_Host *shost, struct request_queue *q)
if (!shost->use_clustering)
q->limits.cluster = 0;
- if (shost->inlinecrypt_support)
- queue_flag_set_unlocked(QUEUE_FLAG_INLINECRYPT, q);
/*
* Set a reasonable default alignment: The larger of 32-byte (dword),
* which is a common minimum for HBAs, and the minimum DMA alignment,
diff --git a/drivers/scsi/ufs/Kconfig b/drivers/scsi/ufs/Kconfig
index 9dd9167..f1aae8b 100644
--- a/drivers/scsi/ufs/Kconfig
+++ b/drivers/scsi/ufs/Kconfig
@@ -101,18 +101,6 @@
Select this if you have UFS controller on QCOM chipset.
If unsure, say N.
-config SCSI_UFS_QCOM_ICE
- bool "QCOM specific hooks to Inline Crypto Engine for UFS driver"
- depends on SCSI_UFS_QCOM && CRYPTO_DEV_QCOM_ICE
- help
- This selects the QCOM specific additions to support Inline Crypto
- Engine (ICE).
- ICE accelerates the crypto operations and maintains the high UFS
- performance.
-
- Select this if you have ICE supported for UFS on QCOM chipset.
- If unsure, say N.
-
config SCSI_UFS_TEST
tristate "Universal Flash Storage host controller driver unit-tests"
depends on SCSI_UFSHCD && IOSCHED_TEST
@@ -143,3 +131,20 @@
Select this if you have UFS controller on Hisilicon chipset.
If unsure, say N.
+
+config SCSI_UFS_CRYPTO
+ bool "UFS Crypto Engine Support"
+ depends on SCSI_UFSHCD && BLK_INLINE_ENCRYPTION
+ help
+ Enable Crypto Engine Support in UFS.
+ Enabling this makes it possible for the kernel to use the crypto
+ capabilities of the UFS device (if present) to perform crypto
+ operations on data being transferred to/from the device.
+
+config SCSI_UFS_CRYPTO_QTI
+ tristate "Vendor specific UFS Crypto Engine Support"
+ depends on SCSI_UFS_CRYPTO
+ help
+ Enable vendor-specific Crypto Engine Support in UFS.
+ Enabling this allows the kernel to use UFS crypto operations defined
+ and implemented by QTI.
diff --git a/drivers/scsi/ufs/Makefile b/drivers/scsi/ufs/Makefile
index 7084ae4..e7294e6 100644
--- a/drivers/scsi/ufs/Makefile
+++ b/drivers/scsi/ufs/Makefile
@@ -11,3 +11,5 @@
obj-$(CONFIG_SCSI_UFS_TEST) += ufs_test.o
obj-$(CONFIG_DEBUG_FS) += ufs-debugfs.o ufs-qcom-debugfs.o
obj-$(CONFIG_SCSI_UFS_HISI) += ufs-hisi.o
+ufshcd-core-$(CONFIG_SCSI_UFS_CRYPTO) += ufshcd-crypto.o
+ufshcd-core-$(CONFIG_SCSI_UFS_CRYPTO_QTI) += ufshcd-crypto-qti.o
diff --git a/drivers/scsi/ufs/ufs-hisi.c b/drivers/scsi/ufs/ufs-hisi.c
index c2cee73..71655be 100644
--- a/drivers/scsi/ufs/ufs-hisi.c
+++ b/drivers/scsi/ufs/ufs-hisi.c
@@ -540,6 +540,14 @@ static int ufs_hisi_init_common(struct ufs_hba *hba)
if (!host)
return -ENOMEM;
+ /*
+ * Inline crypto is currently broken with ufs-hisi because the keyslots
+ * overlap with the vendor-specific SYS CTRL registers -- and even if
+ * software uses only non-overlapping keyslots, the kernel crashes when
+ * programming a key or a UFS error occurs on the first encrypted I/O.
+ */
+ hba->quirks |= UFSHCD_QUIRK_BROKEN_CRYPTO;
+
host->hba = hba;
ufshcd_set_variant(hba, host);
diff --git a/drivers/scsi/ufs/ufs-qcom-ice.c b/drivers/scsi/ufs/ufs-qcom-ice.c
deleted file mode 100644
index 48fd18c..0000000
--- a/drivers/scsi/ufs/ufs-qcom-ice.c
+++ /dev/null
@@ -1,782 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * Copyright (c) 2014-2019, The Linux Foundation. All rights reserved.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 and
- * only version 2 as published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- */
-
-#include <linux/io.h>
-#include <linux/of.h>
-#include <linux/blkdev.h>
-#include <linux/spinlock.h>
-#include <crypto/ice.h>
-
-#include "ufshcd.h"
-#include "ufs-qcom-ice.h"
-#include "ufs-qcom-debugfs.h"
-
-#define UFS_QCOM_CRYPTO_LABEL "ufs-qcom-crypto"
-/* Timeout waiting for ICE initialization, that requires TZ access */
-#define UFS_QCOM_ICE_COMPLETION_TIMEOUT_MS 500
-
-#define UFS_QCOM_ICE_DEFAULT_DBG_PRINT_EN 0
-
-static struct workqueue_struct *ice_workqueue;
-
-static void ufs_qcom_ice_dump_regs(struct ufs_qcom_host *qcom_host, int offset,
- int len, char *prefix)
-{
- print_hex_dump(KERN_ERR, prefix,
- len > 4 ? DUMP_PREFIX_OFFSET : DUMP_PREFIX_NONE,
- 16, 4, qcom_host->hba->mmio_base + offset, len * 4,
- false);
-}
-
-void ufs_qcom_ice_print_regs(struct ufs_qcom_host *qcom_host)
-{
- int i;
-
- if (!(qcom_host->dbg_print_en & UFS_QCOM_DBG_PRINT_ICE_REGS_EN))
- return;
-
- ufs_qcom_ice_dump_regs(qcom_host, REG_UFS_QCOM_ICE_CFG, 1,
- "REG_UFS_QCOM_ICE_CFG ");
- for (i = 0; i < NUM_QCOM_ICE_CTRL_INFO_n_REGS; i++) {
- pr_err("REG_UFS_QCOM_ICE_CTRL_INFO_1_%d = 0x%08X\n", i,
- ufshcd_readl(qcom_host->hba,
- (REG_UFS_QCOM_ICE_CTRL_INFO_1_n + 8 * i)));
-
- pr_err("REG_UFS_QCOM_ICE_CTRL_INFO_2_%d = 0x%08X\n", i,
- ufshcd_readl(qcom_host->hba,
- (REG_UFS_QCOM_ICE_CTRL_INFO_2_n + 8 * i)));
- }
-
- if (qcom_host->ice.pdev && qcom_host->ice.vops &&
- qcom_host->ice.vops->debug)
- qcom_host->ice.vops->debug(qcom_host->ice.pdev);
-}
-
-static void ufs_qcom_ice_error_cb(void *host_ctrl, u32 error)
-{
- struct ufs_qcom_host *qcom_host = (struct ufs_qcom_host *)host_ctrl;
-
- dev_err(qcom_host->hba->dev, "%s: Error in ice operation 0x%x\n",
- __func__, error);
-
- if (qcom_host->ice.state == UFS_QCOM_ICE_STATE_ACTIVE)
- qcom_host->ice.state = UFS_QCOM_ICE_STATE_DISABLED;
-}
-
-static struct platform_device *ufs_qcom_ice_get_pdevice(struct device *ufs_dev)
-{
- struct device_node *node;
- struct platform_device *ice_pdev = NULL;
-
- node = of_parse_phandle(ufs_dev->of_node, UFS_QCOM_CRYPTO_LABEL, 0);
-
- if (!node) {
- dev_err(ufs_dev, "%s: ufs-qcom-crypto property not specified\n",
- __func__);
- goto out;
- }
-
- ice_pdev = qcom_ice_get_pdevice(node);
-out:
- return ice_pdev;
-}
-
-static
-struct qcom_ice_variant_ops *ufs_qcom_ice_get_vops(struct device *ufs_dev)
-{
- struct qcom_ice_variant_ops *ice_vops = NULL;
- struct device_node *node;
-
- node = of_parse_phandle(ufs_dev->of_node, UFS_QCOM_CRYPTO_LABEL, 0);
-
- if (!node) {
- dev_err(ufs_dev, "%s: ufs-qcom-crypto property not specified\n",
- __func__);
- goto out;
- }
-
- ice_vops = qcom_ice_get_variant_ops(node);
-
- if (!ice_vops)
- dev_err(ufs_dev, "%s: invalid ice_vops\n", __func__);
-
- of_node_put(node);
-out:
- return ice_vops;
-}
-
-/**
- * ufs_qcom_ice_get_dev() - sets pointers to ICE data structs in UFS QCom host
- * @qcom_host: Pointer to a UFS QCom internal host structure.
- *
- * Sets ICE platform device pointer and ICE vops structure
- * corresponding to the current UFS device.
- *
- * Return: -EINVAL in-case of invalid input parameters:
- * qcom_host, qcom_host->hba or qcom_host->hba->dev
- * -ENODEV in-case ICE device is not required
- * -EPROBE_DEFER in-case ICE is required and hasn't been probed yet
- * 0 otherwise
- */
-int ufs_qcom_ice_get_dev(struct ufs_qcom_host *qcom_host)
-{
- struct device *ufs_dev;
- int err = 0;
-
- if (!qcom_host || !qcom_host->hba || !qcom_host->hba->dev) {
- pr_err("%s: invalid qcom_host %p or qcom_host->hba or qcom_host->hba->dev\n",
- __func__, qcom_host);
- err = -EINVAL;
- goto out;
- }
-
- ufs_dev = qcom_host->hba->dev;
-
- qcom_host->ice.vops = ufs_qcom_ice_get_vops(ufs_dev);
- qcom_host->ice.pdev = ufs_qcom_ice_get_pdevice(ufs_dev);
-
- if (qcom_host->ice.pdev == ERR_PTR(-EPROBE_DEFER)) {
- dev_err(ufs_dev, "%s: ICE device not probed yet\n",
- __func__);
- qcom_host->ice.pdev = NULL;
- qcom_host->ice.vops = NULL;
- err = -EPROBE_DEFER;
- goto out;
- }
-
- if (!qcom_host->ice.pdev || !qcom_host->ice.vops) {
- dev_err(ufs_dev, "%s: invalid platform device %p or vops %p\n",
- __func__, qcom_host->ice.pdev, qcom_host->ice.vops);
- qcom_host->ice.pdev = NULL;
- qcom_host->ice.vops = NULL;
- err = -ENODEV;
- goto out;
- }
-
- qcom_host->ice.state = UFS_QCOM_ICE_STATE_DISABLED;
-
-out:
- return err;
-}
-
-static void ufs_qcom_ice_cfg_work(struct work_struct *work)
-{
- unsigned long flags;
- struct ufs_qcom_host *qcom_host =
- container_of(work, struct ufs_qcom_host, ice_cfg_work);
-
- if (!qcom_host->ice.vops->config_start)
- return;
-
- spin_lock_irqsave(&qcom_host->ice_work_lock, flags);
- if (!qcom_host->req_pending ||
- ufshcd_is_shutdown_ongoing(qcom_host->hba)) {
- qcom_host->work_pending = false;
- spin_unlock_irqrestore(&qcom_host->ice_work_lock, flags);
- return;
- }
- spin_unlock_irqrestore(&qcom_host->ice_work_lock, flags);
-
- /*
- * config_start is called again as previous attempt returned -EAGAIN,
- * this call shall now take care of the necessary key setup.
- */
- qcom_host->ice.vops->config_start(qcom_host->ice.pdev,
- qcom_host->req_pending, NULL, false);
-
- spin_lock_irqsave(&qcom_host->ice_work_lock, flags);
- qcom_host->req_pending = NULL;
- qcom_host->work_pending = false;
- spin_unlock_irqrestore(&qcom_host->ice_work_lock, flags);
-}
-
-/**
- * ufs_qcom_ice_init() - initializes the ICE-UFS interface and ICE device
- * @qcom_host: Pointer to a UFS QCom internal host structure.
- * qcom_host, qcom_host->hba and qcom_host->hba->dev should all
- * be valid pointers.
- *
- * Return: -EINVAL in-case of an error
- * 0 otherwise
- */
-int ufs_qcom_ice_init(struct ufs_qcom_host *qcom_host)
-{
- struct device *ufs_dev = qcom_host->hba->dev;
- int err;
-
- err = qcom_host->ice.vops->init(qcom_host->ice.pdev,
- qcom_host,
- ufs_qcom_ice_error_cb);
- if (err) {
- dev_err(ufs_dev, "%s: ice init failed. err = %d\n",
- __func__, err);
- goto out;
- } else {
- qcom_host->ice.state = UFS_QCOM_ICE_STATE_ACTIVE;
- }
-
- qcom_host->dbg_print_en |= UFS_QCOM_ICE_DEFAULT_DBG_PRINT_EN;
- if (!ice_workqueue) {
- ice_workqueue = alloc_workqueue("ice-set-key",
- WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_FREEZABLE, 0);
- if (!ice_workqueue) {
- dev_err(ufs_dev, "%s: workqueue allocation failed.\n",
- __func__);
- err = -ENOMEM;
- goto out;
- }
- }
- if (ice_workqueue) {
- if (!qcom_host->is_ice_cfg_work_set) {
- INIT_WORK(&qcom_host->ice_cfg_work,
- ufs_qcom_ice_cfg_work);
- qcom_host->is_ice_cfg_work_set = true;
- }
- }
-
-out:
- return err;
-}
-
-static inline bool ufs_qcom_is_data_cmd(char cmd_op, bool is_write)
-{
- if (is_write) {
- if (cmd_op == WRITE_6 || cmd_op == WRITE_10 ||
- cmd_op == WRITE_16)
- return true;
- } else {
- if (cmd_op == READ_6 || cmd_op == READ_10 ||
- cmd_op == READ_16)
- return true;
- }
-
- return false;
-}
-
-int ufs_qcom_ice_req_setup(struct ufs_qcom_host *qcom_host,
- struct scsi_cmnd *cmd, u8 *cc_index, bool *enable)
-{
- struct ice_data_setting ice_set;
- char cmd_op = cmd->cmnd[0];
- int err;
- unsigned long flags;
-
- if (!qcom_host->ice.pdev || !qcom_host->ice.vops) {
- dev_dbg(qcom_host->hba->dev, "%s: ice device is not enabled\n",
- __func__);
- return 0;
- }
-
- if (qcom_host->ice.vops->config_start) {
- memset(&ice_set, 0, sizeof(ice_set));
-
- spin_lock_irqsave(
- &qcom_host->ice_work_lock, flags);
-
- err = qcom_host->ice.vops->config_start(qcom_host->ice.pdev,
- cmd->request, &ice_set, true);
- if (err) {
- /*
- * config_start() returns -EAGAIN when a key slot is
- * available but still not configured. As configuration
- * requires a non-atomic context, this means we should
- * call the function again from the worker thread to do
- * the configuration. For this request the error will
- * propagate so it will be re-queued.
- */
- if (err == -EAGAIN) {
- if (!ice_workqueue) {
- spin_unlock_irqrestore(
- &qcom_host->ice_work_lock,
- flags);
-
- dev_err(qcom_host->hba->dev,
- "%s: error %d workqueue NULL\n",
- __func__, err);
- return -EINVAL;
- }
-
- dev_dbg(qcom_host->hba->dev,
- "%s: scheduling task for ice setup\n",
- __func__);
-
- if (!qcom_host->work_pending) {
- qcom_host->req_pending = cmd->request;
-
- if (!queue_work(ice_workqueue,
- &qcom_host->ice_cfg_work)) {
- qcom_host->req_pending = NULL;
-
- spin_unlock_irqrestore(
- &qcom_host->ice_work_lock,
- flags);
-
- return err;
- }
- qcom_host->work_pending = true;
- }
- } else {
- if (err != -EBUSY)
- dev_err(qcom_host->hba->dev,
- "%s: error in ice_vops->config %d\n",
- __func__, err);
- }
-
- spin_unlock_irqrestore(&qcom_host->ice_work_lock,
- flags);
-
- return err;
- }
-
- spin_unlock_irqrestore(&qcom_host->ice_work_lock, flags);
-
- if (ufs_qcom_is_data_cmd(cmd_op, true))
- *enable = !ice_set.encr_bypass;
- else if (ufs_qcom_is_data_cmd(cmd_op, false))
- *enable = !ice_set.decr_bypass;
-
- if (ice_set.crypto_data.key_index >= 0)
- *cc_index = (u8)ice_set.crypto_data.key_index;
- }
- return 0;
-}
-
-/**
- * ufs_qcom_ice_cfg_start() - starts configuring UFS's ICE registers
- * for an ICE transaction
- * @qcom_host: Pointer to a UFS QCom internal host structure.
- * qcom_host, qcom_host->hba and qcom_host->hba->dev should all
- * be valid pointers.
- * @cmd: Pointer to a valid scsi command. cmd->request should also be
- * a valid pointer.
- *
- * Return: -EINVAL in-case of an error
- * 0 otherwise
- */
-int ufs_qcom_ice_cfg_start(struct ufs_qcom_host *qcom_host,
- struct scsi_cmnd *cmd)
-{
- struct device *dev = qcom_host->hba->dev;
- int err = 0;
- struct ice_data_setting ice_set;
- unsigned int slot = 0;
- sector_t lba = 0;
- unsigned int ctrl_info_val = 0;
- unsigned int bypass = 0;
- struct request *req;
- char cmd_op;
- unsigned long flags;
-
- if (!qcom_host->ice.pdev || !qcom_host->ice.vops) {
- dev_dbg(dev, "%s: ice device is not enabled\n", __func__);
- goto out;
- }
-
- if (qcom_host->ice.state != UFS_QCOM_ICE_STATE_ACTIVE) {
- dev_err(dev, "%s: ice state (%d) is not active\n",
- __func__, qcom_host->ice.state);
- return -EINVAL;
- }
-
- if (qcom_host->hw_ver.major >= 0x3) {
- /*
- * ICE 3.0 crypto sequences were changed,
- * CTRL_INFO register no longer exists
- * and doesn't need to be configured.
- * The configuration is done via utrd.
- */
- return 0;
- }
-
- req = cmd->request;
- if (req->bio)
- lba = (req->bio->bi_iter.bi_sector) >>
- UFS_QCOM_ICE_TR_DATA_UNIT_4_KB;
-
- slot = req->tag;
- if (slot < 0 || slot > qcom_host->hba->nutrs) {
- dev_err(dev, "%s: slot (%d) is out of boundaries (0...%d)\n",
- __func__, slot, qcom_host->hba->nutrs);
- return -EINVAL;
- }
-
-
- memset(&ice_set, 0, sizeof(ice_set));
- if (qcom_host->ice.vops->config_start) {
-
- spin_lock_irqsave(
- &qcom_host->ice_work_lock, flags);
-
- err = qcom_host->ice.vops->config_start(qcom_host->ice.pdev,
- req, &ice_set, true);
- if (err) {
- /*
- * config_start() returns -EAGAIN when a key slot is
- * available but still not configured. As configuration
- * requires a non-atomic context, this means we should
- * call the function again from the worker thread to do
- * the configuration. For this request the error will
- * propagate so it will be re-queued.
- */
- if (err == -EAGAIN) {
- if (!ice_workqueue) {
- spin_unlock_irqrestore(
- &qcom_host->ice_work_lock,
- flags);
-
- dev_err(qcom_host->hba->dev,
- "%s: error %d workqueue NULL\n",
- __func__, err);
- return -EINVAL;
- }
-
- dev_dbg(qcom_host->hba->dev,
- "%s: scheduling task for ice setup\n",
- __func__);
-
- if (!qcom_host->work_pending) {
-
- qcom_host->req_pending = cmd->request;
- if (!queue_work(ice_workqueue,
- &qcom_host->ice_cfg_work)) {
- qcom_host->req_pending = NULL;
-
- spin_unlock_irqrestore(
- &qcom_host->ice_work_lock,
- flags);
-
- return err;
- }
- qcom_host->work_pending = true;
- }
-
- } else {
- if (err != -EBUSY)
- dev_err(qcom_host->hba->dev,
- "%s: error in ice_vops->config %d\n",
- __func__, err);
- }
-
- spin_unlock_irqrestore(
- &qcom_host->ice_work_lock, flags);
-
- return err;
- }
-
- spin_unlock_irqrestore(
- &qcom_host->ice_work_lock, flags);
- }
-
- cmd_op = cmd->cmnd[0];
-
-#define UFS_QCOM_DIR_WRITE true
-#define UFS_QCOM_DIR_READ false
- /* if non data command, bypass shall be enabled */
- if (!ufs_qcom_is_data_cmd(cmd_op, UFS_QCOM_DIR_WRITE) &&
- !ufs_qcom_is_data_cmd(cmd_op, UFS_QCOM_DIR_READ))
- bypass = UFS_QCOM_ICE_ENABLE_BYPASS;
- /* if writing data command */
- else if (ufs_qcom_is_data_cmd(cmd_op, UFS_QCOM_DIR_WRITE))
- bypass = ice_set.encr_bypass ? UFS_QCOM_ICE_ENABLE_BYPASS :
- UFS_QCOM_ICE_DISABLE_BYPASS;
- /* if reading data command */
- else if (ufs_qcom_is_data_cmd(cmd_op, UFS_QCOM_DIR_READ))
- bypass = ice_set.decr_bypass ? UFS_QCOM_ICE_ENABLE_BYPASS :
- UFS_QCOM_ICE_DISABLE_BYPASS;
-
-
- /* Configure ICE index */
- ctrl_info_val =
- (ice_set.crypto_data.key_index &
- MASK_UFS_QCOM_ICE_CTRL_INFO_KEY_INDEX)
- << OFFSET_UFS_QCOM_ICE_CTRL_INFO_KEY_INDEX;
-
- /* Configure data unit size of transfer request */
- ctrl_info_val |=
- UFS_QCOM_ICE_TR_DATA_UNIT_4_KB
- << OFFSET_UFS_QCOM_ICE_CTRL_INFO_CDU;
-
- /* Configure ICE bypass mode */
- ctrl_info_val |=
- (bypass & MASK_UFS_QCOM_ICE_CTRL_INFO_BYPASS)
- << OFFSET_UFS_QCOM_ICE_CTRL_INFO_BYPASS;
-
- if (qcom_host->hw_ver.major == 0x1) {
- ufshcd_writel(qcom_host->hba, lba,
- (REG_UFS_QCOM_ICE_CTRL_INFO_1_n + 8 * slot));
-
- ufshcd_writel(qcom_host->hba, ctrl_info_val,
- (REG_UFS_QCOM_ICE_CTRL_INFO_2_n + 8 * slot));
- }
- if (qcom_host->hw_ver.major == 0x2) {
- ufshcd_writel(qcom_host->hba, (lba & 0xFFFFFFFF),
- (REG_UFS_QCOM_ICE_CTRL_INFO_1_n + 16 * slot));
-
- ufshcd_writel(qcom_host->hba, ((lba >> 32) & 0xFFFFFFFF),
- (REG_UFS_QCOM_ICE_CTRL_INFO_2_n + 16 * slot));
-
- ufshcd_writel(qcom_host->hba, ctrl_info_val,
- (REG_UFS_QCOM_ICE_CTRL_INFO_3_n + 16 * slot));
- }
-
- /*
- * Ensure UFS-ICE registers are being configured
- * before next operation, otherwise UFS Host Controller might
- * set get errors
- */
- mb();
-out:
- return err;
-}
-
-/**
- * ufs_qcom_ice_cfg_end() - finishes configuring UFS's ICE registers
- * for an ICE transaction
- * @qcom_host: Pointer to a UFS QCom internal host structure.
- * qcom_host, qcom_host->hba and
- * qcom_host->hba->dev should all
- * be valid pointers.
- * @cmd: Pointer to a valid scsi command. cmd->request should also be
- * a valid pointer.
- *
- * Return: -EINVAL in-case of an error
- * 0 otherwise
- */
-int ufs_qcom_ice_cfg_end(struct ufs_qcom_host *qcom_host, struct request *req)
-{
- int err = 0;
- struct device *dev = qcom_host->hba->dev;
-
- if (qcom_host->ice.vops->config_end) {
- err = qcom_host->ice.vops->config_end(qcom_host->ice.pdev, req);
- if (err) {
- dev_err(dev, "%s: error in ice_vops->config_end %d\n",
- __func__, err);
- return err;
- }
- }
-
- return 0;
-}
-
-/**
- * ufs_qcom_ice_reset() - resets UFS-ICE interface and ICE device
- * @qcom_host: Pointer to a UFS QCom internal host structure.
- * qcom_host, qcom_host->hba and qcom_host->hba->dev should all
- * be valid pointers.
- *
- * Return: -EINVAL in-case of an error
- * 0 otherwise
- */
-int ufs_qcom_ice_reset(struct ufs_qcom_host *qcom_host)
-{
- struct device *dev = qcom_host->hba->dev;
- int err = 0;
-
- if (!qcom_host->ice.pdev) {
- dev_dbg(dev, "%s: ice device is not enabled\n", __func__);
- goto out;
- }
-
- if (!qcom_host->ice.vops) {
- dev_err(dev, "%s: invalid ice_vops\n", __func__);
- return -EINVAL;
- }
-
- if (qcom_host->ice.state != UFS_QCOM_ICE_STATE_ACTIVE)
- goto out;
-
- if (qcom_host->ice.vops->reset) {
- err = qcom_host->ice.vops->reset(qcom_host->ice.pdev);
- if (err) {
- dev_err(dev, "%s: ice_vops->reset failed. err %d\n",
- __func__, err);
- goto out;
- }
- }
-
- if (qcom_host->ice.state != UFS_QCOM_ICE_STATE_ACTIVE) {
- dev_err(qcom_host->hba->dev,
- "%s: error. ice.state (%d) is not in active state\n",
- __func__, qcom_host->ice.state);
- err = -EINVAL;
- }
-
-out:
- return err;
-}
-
-/**
- * ufs_qcom_ice_resume() - resumes UFS-ICE interface and ICE device from power
- * collapse
- * @qcom_host: Pointer to a UFS QCom internal host structure.
- * qcom_host, qcom_host->hba and qcom_host->hba->dev should all
- * be valid pointers.
- *
- * Return: -EINVAL in-case of an error
- * 0 otherwise
- */
-int ufs_qcom_ice_resume(struct ufs_qcom_host *qcom_host)
-{
- struct device *dev = qcom_host->hba->dev;
- int err = 0;
-
- if (!qcom_host->ice.pdev) {
- dev_dbg(dev, "%s: ice device is not enabled\n", __func__);
- goto out;
- }
-
- if (qcom_host->ice.state !=
- UFS_QCOM_ICE_STATE_SUSPENDED) {
- goto out;
- }
-
- if (!qcom_host->ice.vops) {
- dev_err(dev, "%s: invalid ice_vops\n", __func__);
- return -EINVAL;
- }
-
- if (qcom_host->ice.vops->resume) {
- err = qcom_host->ice.vops->resume(qcom_host->ice.pdev);
- if (err) {
- dev_err(dev, "%s: ice_vops->resume failed. err %d\n",
- __func__, err);
- return err;
- }
- }
- qcom_host->ice.state = UFS_QCOM_ICE_STATE_ACTIVE;
-out:
- return err;
-}
-
-/**
- * ufs_qcom_is_ice_busy() - lets the caller of the function know if
- * there is any ongoing operation in ICE in workqueue context.
- * @qcom_host: Pointer to a UFS QCom internal host structure.
- * qcom_host should be a valid pointer.
- *
- * Return: 1 if ICE is busy, 0 if it is free.
- * -EINVAL in case of error.
- */
-int ufs_qcom_is_ice_busy(struct ufs_qcom_host *qcom_host)
-{
- if (!qcom_host) {
- pr_err("%s: invalid qcom_host\n", __func__);
- return -EINVAL;
- }
-
- if (qcom_host->req_pending)
- return 1;
- else
- return 0;
-}
-
-/**
- * ufs_qcom_ice_suspend() - suspends UFS-ICE interface and ICE device
- * @qcom_host: Pointer to a UFS QCom internal host structure.
- * qcom_host, qcom_host->hba and qcom_host->hba->dev should all
- * be valid pointers.
- *
- * Return: -EINVAL in-case of an error
- * 0 otherwise
- */
-int ufs_qcom_ice_suspend(struct ufs_qcom_host *qcom_host)
-{
- struct device *dev = qcom_host->hba->dev;
- int err = 0;
-
- if (!qcom_host->ice.pdev) {
- dev_dbg(dev, "%s: ice device is not enabled\n", __func__);
- goto out;
- }
-
- if (qcom_host->ice.vops->suspend) {
- err = qcom_host->ice.vops->suspend(qcom_host->ice.pdev);
- if (err) {
- dev_err(qcom_host->hba->dev,
- "%s: ice_vops->suspend failed. err %d\n",
- __func__, err);
- return -EINVAL;
- }
- }
-
- if (qcom_host->ice.state == UFS_QCOM_ICE_STATE_ACTIVE) {
- qcom_host->ice.state = UFS_QCOM_ICE_STATE_SUSPENDED;
- } else if (qcom_host->ice.state == UFS_QCOM_ICE_STATE_DISABLED) {
- dev_err(qcom_host->hba->dev,
- "%s: ice state is invalid: disabled\n",
- __func__);
- err = -EINVAL;
- }
-
-out:
- return err;
-}
-
-/**
- * ufs_qcom_ice_get_status() - returns the status of an ICE transaction
- * @qcom_host: Pointer to a UFS QCom internal host structure.
- * qcom_host, qcom_host->hba and qcom_host->hba->dev should all
- * be valid pointers.
- * @ice_status: Pointer to a valid output parameter.
- * < 0 in case of ICE transaction failure.
- * 0 otherwise.
- *
- * Return: -EINVAL in-case of an error
- * 0 otherwise
- */
-int ufs_qcom_ice_get_status(struct ufs_qcom_host *qcom_host, int *ice_status)
-{
- struct device *dev = NULL;
- int err = 0;
- int stat = -EINVAL;
-
- *ice_status = 0;
-
- dev = qcom_host->hba->dev;
- if (!dev) {
- err = -EINVAL;
- goto out;
- }
-
- if (!qcom_host->ice.pdev) {
- dev_dbg(dev, "%s: ice device is not enabled\n", __func__);
- goto out;
- }
-
- if (qcom_host->ice.state != UFS_QCOM_ICE_STATE_ACTIVE) {
- err = -EINVAL;
- goto out;
- }
-
- if (!qcom_host->ice.vops) {
- dev_err(dev, "%s: invalid ice_vops\n", __func__);
- return -EINVAL;
- }
-
- if (qcom_host->ice.vops->status) {
- stat = qcom_host->ice.vops->status(qcom_host->ice.pdev);
- if (stat < 0) {
- dev_err(dev, "%s: ice_vops->status failed. stat %d\n",
- __func__, stat);
- err = -EINVAL;
- goto out;
- }
-
- *ice_status = stat;
- }
-
-out:
- return err;
-}
diff --git a/drivers/scsi/ufs/ufs-qcom-ice.h b/drivers/scsi/ufs/ufs-qcom-ice.h
deleted file mode 100644
index 2b42459..0000000
--- a/drivers/scsi/ufs/ufs-qcom-ice.h
+++ /dev/null
@@ -1,137 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Copyright (c) 2014-2019, The Linux Foundation. All rights reserved.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 and
- * only version 2 as published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- */
-
-#ifndef _UFS_QCOM_ICE_H_
-#define _UFS_QCOM_ICE_H_
-
-#include <scsi/scsi_cmnd.h>
-
-#include "ufs-qcom.h"
-
-/*
- * UFS host controller ICE registers. There are n [0..31]
- * of each of these registers
- */
-enum {
- REG_UFS_QCOM_ICE_CFG = 0x2200,
- REG_UFS_QCOM_ICE_CTRL_INFO_1_n = 0x2204,
- REG_UFS_QCOM_ICE_CTRL_INFO_2_n = 0x2208,
- REG_UFS_QCOM_ICE_CTRL_INFO_3_n = 0x220C,
-};
-#define NUM_QCOM_ICE_CTRL_INFO_n_REGS 32
-
-/* UFS QCOM ICE CTRL Info register offset */
-enum {
- OFFSET_UFS_QCOM_ICE_CTRL_INFO_BYPASS = 0,
- OFFSET_UFS_QCOM_ICE_CTRL_INFO_KEY_INDEX = 0x1,
- OFFSET_UFS_QCOM_ICE_CTRL_INFO_CDU = 0x6,
-};
-
-/* UFS QCOM ICE CTRL Info register masks */
-enum {
- MASK_UFS_QCOM_ICE_CTRL_INFO_BYPASS = 0x1,
- MASK_UFS_QCOM_ICE_CTRL_INFO_KEY_INDEX = 0x1F,
- MASK_UFS_QCOM_ICE_CTRL_INFO_CDU = 0x8,
-};
-
-/* UFS QCOM ICE encryption/decryption bypass state */
-enum {
- UFS_QCOM_ICE_DISABLE_BYPASS = 0,
- UFS_QCOM_ICE_ENABLE_BYPASS = 1,
-};
-
-/* UFS QCOM ICE Crypto Data Unit of target DUN of Transfer Request */
-enum {
- UFS_QCOM_ICE_TR_DATA_UNIT_512_B = 0,
- UFS_QCOM_ICE_TR_DATA_UNIT_1_KB = 1,
- UFS_QCOM_ICE_TR_DATA_UNIT_2_KB = 2,
- UFS_QCOM_ICE_TR_DATA_UNIT_4_KB = 3,
- UFS_QCOM_ICE_TR_DATA_UNIT_8_KB = 4,
- UFS_QCOM_ICE_TR_DATA_UNIT_16_KB = 5,
- UFS_QCOM_ICE_TR_DATA_UNIT_32_KB = 6,
-};
-
-/* UFS QCOM ICE internal state */
-enum {
- UFS_QCOM_ICE_STATE_DISABLED = 0,
- UFS_QCOM_ICE_STATE_ACTIVE = 1,
- UFS_QCOM_ICE_STATE_SUSPENDED = 2,
-};
-
-#ifdef CONFIG_SCSI_UFS_QCOM_ICE
-int ufs_qcom_ice_get_dev(struct ufs_qcom_host *qcom_host);
-int ufs_qcom_ice_init(struct ufs_qcom_host *qcom_host);
-int ufs_qcom_ice_req_setup(struct ufs_qcom_host *qcom_host,
- struct scsi_cmnd *cmd, u8 *cc_index, bool *enable);
-int ufs_qcom_ice_cfg_start(struct ufs_qcom_host *qcom_host,
- struct scsi_cmnd *cmd);
-int ufs_qcom_ice_cfg_end(struct ufs_qcom_host *qcom_host,
- struct request *req);
-int ufs_qcom_ice_reset(struct ufs_qcom_host *qcom_host);
-int ufs_qcom_ice_resume(struct ufs_qcom_host *qcom_host);
-int ufs_qcom_ice_suspend(struct ufs_qcom_host *qcom_host);
-int ufs_qcom_ice_get_status(struct ufs_qcom_host *qcom_host, int *ice_status);
-void ufs_qcom_ice_print_regs(struct ufs_qcom_host *qcom_host);
-int ufs_qcom_is_ice_busy(struct ufs_qcom_host *qcom_host);
-#else
-inline int ufs_qcom_ice_get_dev(struct ufs_qcom_host *qcom_host)
-{
- if (qcom_host) {
- qcom_host->ice.pdev = NULL;
- qcom_host->ice.vops = NULL;
- }
- return -ENODEV;
-}
-inline int ufs_qcom_ice_init(struct ufs_qcom_host *qcom_host)
-{
- return 0;
-}
-inline int ufs_qcom_ice_cfg_start(struct ufs_qcom_host *qcom_host,
- struct scsi_cmnd *cmd)
-{
- return 0;
-}
-inline int ufs_qcom_ice_cfg_end(struct ufs_qcom_host *qcom_host,
- struct request *req)
-{
- return 0;
-}
-inline int ufs_qcom_ice_reset(struct ufs_qcom_host *qcom_host)
-{
- return 0;
-}
-inline int ufs_qcom_ice_resume(struct ufs_qcom_host *qcom_host)
-{
- return 0;
-}
-inline int ufs_qcom_ice_suspend(struct ufs_qcom_host *qcom_host)
-{
- return 0;
-}
-inline int ufs_qcom_ice_get_status(struct ufs_qcom_host *qcom_host,
- int *ice_status)
-{
- return 0;
-}
-inline void ufs_qcom_ice_print_regs(struct ufs_qcom_host *qcom_host)
-{
-}
-static inline int ufs_qcom_is_ice_busy(struct ufs_qcom_host *qcom_host)
-{
- return 0;
-}
-#endif /* CONFIG_SCSI_UFS_QCOM_ICE */
-
-#endif /* UFS_QCOM_ICE_H_ */
diff --git a/drivers/scsi/ufs/ufs-qcom.c b/drivers/scsi/ufs/ufs-qcom.c
index 63918b0..982968f 100644
--- a/drivers/scsi/ufs/ufs-qcom.c
+++ b/drivers/scsi/ufs/ufs-qcom.c
@@ -28,9 +28,9 @@
#include "unipro.h"
#include "ufs-qcom.h"
#include "ufshci.h"
-#include "ufs-qcom-ice.h"
#include "ufs-qcom-debugfs.h"
#include "ufs_quirks.h"
+#include "ufshcd-crypto-qti.h"
#define MAX_PROP_SIZE 32
#define VDDP_REF_CLK_MIN_UV 1200000
@@ -408,15 +408,6 @@ static int ufs_qcom_hce_enable_notify(struct ufs_hba *hba,
* is initialized.
*/
err = ufs_qcom_enable_lane_clks(host);
- if (!err && host->ice.pdev) {
- err = ufs_qcom_ice_init(host);
- if (err) {
- dev_err(hba->dev, "%s: ICE init failed (%d)\n",
- __func__, err);
- err = -EINVAL;
- }
- }
-
break;
case POST_CHANGE:
/* check if UFS PHY moved from DISABLED to HIBERN8 */
@@ -847,11 +838,11 @@ static int ufs_qcom_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op)
if (host->vddp_ref_clk && ufs_qcom_is_link_off(hba))
ret = ufs_qcom_disable_vreg(hba->dev,
host->vddp_ref_clk);
+
if (host->vccq_parent && !hba->auto_bkops_enabled)
ufs_qcom_config_vreg(hba->dev,
host->vccq_parent, false);
- ufs_qcom_ice_suspend(host);
if (ufs_qcom_is_link_off(hba)) {
/* Assert PHY soft reset */
ufs_qcom_assert_reset(hba);
@@ -891,13 +882,6 @@ static int ufs_qcom_resume(struct ufs_hba *hba, enum ufs_pm_op pm_op)
if (err)
goto out;
- err = ufs_qcom_ice_resume(host);
- if (err) {
- dev_err(hba->dev, "%s: ufs_qcom_ice_resume failed, err = %d\n",
- __func__, err);
- goto out;
- }
-
hba->is_sys_suspended = false;
out:
@@ -937,104 +921,6 @@ static int ufs_qcom_full_reset(struct ufs_hba *hba)
return ret;
}
-#ifdef CONFIG_SCSI_UFS_QCOM_ICE
-static int ufs_qcom_crypto_req_setup(struct ufs_hba *hba,
- struct ufshcd_lrb *lrbp, u8 *cc_index, bool *enable, u64 *dun)
-{
- struct ufs_qcom_host *host = ufshcd_get_variant(hba);
- struct request *req;
- int ret;
-
- if (lrbp->cmd && lrbp->cmd->request)
- req = lrbp->cmd->request;
- else
- return 0;
-
- /* Use request LBA or given dun as the DUN value */
- if (req->bio) {
-#ifdef CONFIG_PFK
- if (bio_dun(req->bio)) {
- /* dun @bio can be split, so we have to adjust offset */
- *dun = bio_dun(req->bio);
- } else {
- *dun = req->bio->bi_iter.bi_sector;
- *dun >>= UFS_QCOM_ICE_TR_DATA_UNIT_4_KB;
- }
-#else
- *dun = req->bio->bi_iter.bi_sector;
- *dun >>= UFS_QCOM_ICE_TR_DATA_UNIT_4_KB;
-#endif
- }
- ret = ufs_qcom_ice_req_setup(host, lrbp->cmd, cc_index, enable);
-
- return ret;
-}
-
-static
-int ufs_qcom_crytpo_engine_cfg_start(struct ufs_hba *hba, unsigned int task_tag)
-{
- struct ufs_qcom_host *host = ufshcd_get_variant(hba);
- struct ufshcd_lrb *lrbp = &hba->lrb[task_tag];
- int err = 0;
-
- if (!host->ice.pdev ||
- !lrbp->cmd ||
- (lrbp->command_type != UTP_CMD_TYPE_SCSI &&
- lrbp->command_type != UTP_CMD_TYPE_UFS_STORAGE))
- goto out;
-
- err = ufs_qcom_ice_cfg_start(host, lrbp->cmd);
-out:
- return err;
-}
-
-static
-int ufs_qcom_crytpo_engine_cfg_end(struct ufs_hba *hba,
- struct ufshcd_lrb *lrbp, struct request *req)
-{
- struct ufs_qcom_host *host = ufshcd_get_variant(hba);
- int err = 0;
-
- if (!host->ice.pdev || (lrbp->command_type != UTP_CMD_TYPE_SCSI &&
- lrbp->command_type != UTP_CMD_TYPE_UFS_STORAGE))
- goto out;
-
- err = ufs_qcom_ice_cfg_end(host, req);
-out:
- return err;
-}
-
-static
-int ufs_qcom_crytpo_engine_reset(struct ufs_hba *hba)
-{
- struct ufs_qcom_host *host = ufshcd_get_variant(hba);
- int err = 0;
-
- if (!host->ice.pdev)
- goto out;
-
- err = ufs_qcom_ice_reset(host);
-out:
- return err;
-}
-
-static int ufs_qcom_crypto_engine_get_status(struct ufs_hba *hba, u32 *status)
-{
- struct ufs_qcom_host *host = ufshcd_get_variant(hba);
-
- if (!status)
- return -EINVAL;
-
- return ufs_qcom_ice_get_status(host, status);
-}
-#else /* !CONFIG_SCSI_UFS_QCOM_ICE */
-#define ufs_qcom_crypto_req_setup NULL
-#define ufs_qcom_crytpo_engine_cfg_start NULL
-#define ufs_qcom_crytpo_engine_cfg_end NULL
-#define ufs_qcom_crytpo_engine_reset NULL
-#define ufs_qcom_crypto_engine_get_status NULL
-#endif /* CONFIG_SCSI_UFS_QCOM_ICE */
-
struct ufs_qcom_dev_params {
u32 pwm_rx_gear; /* pwm rx gear to work in */
u32 pwm_tx_gear; /* pwm tx gear to work in */
@@ -1574,6 +1460,12 @@ static void ufs_qcom_advertise_quirks(struct ufs_hba *hba)
if (host->disable_lpm)
hba->quirks |= UFSHCD_QUIRK_BROKEN_AUTO_HIBERN8;
+ /*
+	 * Inline crypto is currently broken with ufs-qcom, at least in part
+	 * because the device tree doesn't include the crypto registers. There
+	 * are likely other issues that will need to be addressed as well.
+ */
+ //hba->quirks |= UFSHCD_QUIRK_BROKEN_CRYPTO;
}
static void ufs_qcom_set_caps(struct ufs_hba *hba)
@@ -1642,14 +1534,7 @@ static int ufs_qcom_setup_clocks(struct ufs_hba *hba, bool on,
if (ufshcd_is_hs_mode(&hba->pwr_info))
ufs_qcom_dev_ref_clk_ctrl(host, true);
- err = ufs_qcom_ice_resume(host);
- if (err)
- goto out;
} else if (!on && (status == PRE_CHANGE)) {
- err = ufs_qcom_ice_suspend(host);
- if (err)
- goto out;
-
/*
* If auto hibern8 is enabled then the link will already
* be in hibern8 state and the ref clock can be gated.
@@ -2227,36 +2112,9 @@ static int ufs_qcom_init(struct ufs_hba *hba)
/* Make a two way bind between the qcom host and the hba */
host->hba = hba;
- spin_lock_init(&host->ice_work_lock);
ufshcd_set_variant(hba, host);
- err = ufs_qcom_ice_get_dev(host);
- if (err == -EPROBE_DEFER) {
- /*
- * UFS driver might be probed before ICE driver does.
- * In that case we would like to return EPROBE_DEFER code
- * in order to delay its probing.
- */
- dev_err(dev, "%s: required ICE device not probed yet err = %d\n",
- __func__, err);
- goto out_variant_clear;
-
- } else if (err == -ENODEV) {
- /*
- * ICE device is not enabled in DTS file. No need for further
- * initialization of ICE driver.
- */
- dev_warn(dev, "%s: ICE device is not enabled\n",
- __func__);
- } else if (err) {
- dev_err(dev, "%s: ufs_qcom_ice_get_dev failed %d\n",
- __func__, err);
- goto out_variant_clear;
- } else {
- hba->host->inlinecrypt_support = 1;
- }
-
host->generic_phy = devm_phy_get(dev, "ufsphy");
if (host->generic_phy == ERR_PTR(-EPROBE_DEFER)) {
@@ -2281,6 +2139,12 @@ static int ufs_qcom_init(struct ufs_hba *hba)
/* restore the secure configuration */
ufs_qcom_update_sec_cfg(hba, true);
+ /*
+	 * Set the vendor-specific ops needed for ICE. The default (spec)
+	 * implementation is used if these ops are not set.
+ */
+ ufshcd_crypto_qti_set_vops(hba);
+
err = ufs_qcom_bus_register(host);
if (err)
goto out_variant_clear;
@@ -2832,7 +2696,6 @@ static void ufs_qcom_dump_dbg_regs(struct ufs_hba *hba, bool no_sleep)
usleep_range(1000, 1100);
ufs_qcom_phy_dbg_register_dump(phy);
usleep_range(1000, 1100);
- ufs_qcom_ice_print_regs(host);
}
static u32 ufs_qcom_get_user_cap_mode(struct ufs_hba *hba)
@@ -2869,14 +2732,6 @@ static struct ufs_hba_variant_ops ufs_hba_qcom_vops = {
.get_user_cap_mode = ufs_qcom_get_user_cap_mode,
};
-static struct ufs_hba_crypto_variant_ops ufs_hba_crypto_variant_ops = {
- .crypto_req_setup = ufs_qcom_crypto_req_setup,
- .crypto_engine_cfg_start = ufs_qcom_crytpo_engine_cfg_start,
- .crypto_engine_cfg_end = ufs_qcom_crytpo_engine_cfg_end,
- .crypto_engine_reset = ufs_qcom_crytpo_engine_reset,
- .crypto_engine_get_status = ufs_qcom_crypto_engine_get_status,
-};
-
static struct ufs_hba_pm_qos_variant_ops ufs_hba_pm_qos_variant_ops = {
.req_start = ufs_qcom_pm_qos_req_start,
.req_end = ufs_qcom_pm_qos_req_end,
@@ -2885,7 +2740,6 @@ static struct ufs_hba_pm_qos_variant_ops ufs_hba_pm_qos_variant_ops = {
static struct ufs_hba_variant ufs_hba_qcom_variant = {
.name = "qcom",
.vops = &ufs_hba_qcom_vops,
- .crypto_vops = &ufs_hba_crypto_variant_ops,
.pm_qos_vops = &ufs_hba_pm_qos_variant_ops,
};
diff --git a/drivers/scsi/ufs/ufs-qcom.h b/drivers/scsi/ufs/ufs-qcom.h
index 9197742..6538637 100644
--- a/drivers/scsi/ufs/ufs-qcom.h
+++ b/drivers/scsi/ufs/ufs-qcom.h
@@ -238,26 +238,6 @@ struct ufs_qcom_testbus {
u8 select_minor;
};
-/**
- * struct ufs_qcom_ice_data - ICE related information
- * @vops: pointer to variant operations of ICE
- * @async_done: completion for supporting ICE's driver asynchronous nature
- * @pdev: pointer to the proper ICE platform device
- * @state: UFS-ICE interface's internal state (see
- * ufs-qcom-ice.h for possible internal states)
- * @quirks: UFS-ICE interface related quirks
- * @crypto_engine_err: crypto engine errors
- */
-struct ufs_qcom_ice_data {
- struct qcom_ice_variant_ops *vops;
- struct platform_device *pdev;
- int state;
-
- u16 quirks;
-
- bool crypto_engine_err;
-};
-
#ifdef CONFIG_DEBUG_FS
struct qcom_debugfs_files {
struct dentry *debugfs_root;
@@ -366,7 +346,6 @@ struct ufs_qcom_host {
bool disable_lpm;
bool is_lane_clks_enabled;
bool sec_cfg_updated;
- struct ufs_qcom_ice_data ice;
void __iomem *dev_ref_clk_ctrl_mmio;
bool is_dev_ref_clk_enabled;
@@ -381,9 +360,6 @@ struct ufs_qcom_host {
u32 dbg_print_en;
struct ufs_qcom_testbus testbus;
- spinlock_t ice_work_lock;
- struct work_struct ice_cfg_work;
- bool is_ice_cfg_work_set;
struct request *req_pending;
struct ufs_vreg *vddp_ref_clk;
struct ufs_vreg *vccq_parent;
diff --git a/drivers/scsi/ufs/ufshcd-crypto-qti.c b/drivers/scsi/ufs/ufshcd-crypto-qti.c
new file mode 100644
index 0000000..f3351d0
--- /dev/null
+++ b/drivers/scsi/ufs/ufshcd-crypto-qti.c
@@ -0,0 +1,302 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2020, Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <crypto/algapi.h>
+#include <linux/platform_device.h>
+#include <linux/crypto-qti-common.h>
+
+#include "ufshcd-crypto-qti.h"
+
+#define MINIMUM_DUN_SIZE 512
+#define MAXIMUM_DUN_SIZE 65536
+
+#define NUM_KEYSLOTS(hba) (hba->crypto_capabilities.config_count + 1)
+
+static struct ufs_hba_crypto_variant_ops ufshcd_crypto_qti_variant_ops = {
+ .hba_init_crypto = ufshcd_crypto_qti_init_crypto,
+ .enable = ufshcd_crypto_qti_enable,
+ .disable = ufshcd_crypto_qti_disable,
+ .resume = ufshcd_crypto_qti_resume,
+ .debug = ufshcd_crypto_qti_debug,
+};
+
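+/*
+ * The returned mask is the data unit size expressed in 512-byte units, which
+ * is the encoding used by the controller's SDUS capability bitmask. For
+ * example (illustrative): a 4096-byte data unit yields 4096 / 512 = 8, i.e.
+ * bit 3, which callers then test against crypto_cap_array[...].sdus_mask.
+ */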
+static uint8_t get_data_unit_size_mask(unsigned int data_unit_size)
+{
+ if (data_unit_size < MINIMUM_DUN_SIZE ||
+ data_unit_size > MAXIMUM_DUN_SIZE ||
+ !is_power_of_2(data_unit_size))
+ return 0;
+
+ return data_unit_size / MINIMUM_DUN_SIZE;
+}
+
+static bool ice_cap_idx_valid(struct ufs_hba *hba,
+ unsigned int cap_idx)
+{
+ return cap_idx < hba->crypto_capabilities.num_crypto_cap;
+}
+
+void ufshcd_crypto_qti_enable(struct ufs_hba *hba)
+{
+ int err = 0;
+
+ if (!ufshcd_hba_is_crypto_supported(hba))
+ return;
+
+ err = crypto_qti_enable(hba->crypto_vops->priv);
+ if (err) {
+ pr_err("%s: Error enabling crypto, err %d\n",
+ __func__, err);
+ ufshcd_crypto_qti_disable(hba);
+ }
+
+	ufshcd_crypto_enable_spec(hba);
+}
+
+void ufshcd_crypto_qti_disable(struct ufs_hba *hba)
+{
+ ufshcd_crypto_disable_spec(hba);
+ crypto_qti_disable(hba->crypto_vops->priv);
+}
+
+static int ufshcd_crypto_qti_keyslot_program(struct keyslot_manager *ksm,
+ const struct blk_crypto_key *key,
+ unsigned int slot)
+{
+ struct ufs_hba *hba = keyslot_manager_private(ksm);
+ int err = 0;
+ u8 data_unit_mask;
+ int crypto_alg_id;
+
+ crypto_alg_id = ufshcd_crypto_cap_find(hba, key->crypto_mode,
+ key->data_unit_size);
+
+ if (!ufshcd_is_crypto_enabled(hba) ||
+ !ufshcd_keyslot_valid(hba, slot) ||
+ !ice_cap_idx_valid(hba, crypto_alg_id))
+ return -EINVAL;
+
+ data_unit_mask = get_data_unit_size_mask(key->data_unit_size);
+
+ if (!(data_unit_mask &
+ hba->crypto_cap_array[crypto_alg_id].sdus_mask))
+ return -EINVAL;
+
+ pm_runtime_get_sync(hba->dev);
+ err = ufshcd_hold(hba, false);
+ if (err) {
+ pr_err("%s: failed to enable clocks, err %d\n", __func__, err);
+		/* balance the pm_runtime_get_sync() above before bailing out */
+		pm_runtime_put_sync(hba->dev);
+		return err;
+ }
+
+ err = crypto_qti_keyslot_program(hba->crypto_vops->priv, key, slot,
+ data_unit_mask, crypto_alg_id);
+ if (err) {
+ pr_err("%s: failed with error %d\n", __func__, err);
+ ufshcd_release(hba, false);
+ pm_runtime_put_sync(hba->dev);
+ return err;
+ }
+
+ ufshcd_release(hba, false);
+ pm_runtime_put_sync(hba->dev);
+
+ return 0;
+}
+
+static int ufshcd_crypto_qti_keyslot_evict(struct keyslot_manager *ksm,
+ const struct blk_crypto_key *key,
+ unsigned int slot)
+{
+ int err = 0;
+ struct ufs_hba *hba = keyslot_manager_private(ksm);
+
+ if (!ufshcd_is_crypto_enabled(hba) ||
+ !ufshcd_keyslot_valid(hba, slot))
+ return -EINVAL;
+
+ pm_runtime_get_sync(hba->dev);
+ err = ufshcd_hold(hba, false);
+ if (err) {
+ pr_err("%s: failed to enable clocks, err %d\n", __func__, err);
+		/* balance the pm_runtime_get_sync() above before bailing out */
+		pm_runtime_put_sync(hba->dev);
+		return err;
+ }
+
+ err = crypto_qti_keyslot_evict(hba->crypto_vops->priv, slot);
+ if (err) {
+ pr_err("%s: failed with error %d\n",
+ __func__, err);
+ ufshcd_release(hba, false);
+ pm_runtime_put_sync(hba->dev);
+ return err;
+ }
+
+ ufshcd_release(hba, false);
+ pm_runtime_put_sync(hba->dev);
+
+ return err;
+}
+
+static int ufshcd_crypto_qti_derive_raw_secret(struct keyslot_manager *ksm,
+ const u8 *wrapped_key,
+ unsigned int wrapped_key_size,
+ u8 *secret,
+ unsigned int secret_size)
+{
+ return crypto_qti_derive_raw_secret(wrapped_key, wrapped_key_size,
+ secret, secret_size);
+}
+
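+/*
+ * Low-level ops handed to the block layer keyslot manager; it invokes these
+ * when it needs to program a key into a slot, evict a slot, or derive a raw
+ * secret from a (wrapped) key for this host.
+ */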
+static const struct keyslot_mgmt_ll_ops ufshcd_crypto_qti_ksm_ops = {
+ .keyslot_program = ufshcd_crypto_qti_keyslot_program,
+ .keyslot_evict = ufshcd_crypto_qti_keyslot_evict,
+ .derive_raw_secret = ufshcd_crypto_qti_derive_raw_secret,
+};
+
+static enum blk_crypto_mode_num ufshcd_blk_crypto_qti_mode_num_for_alg_dusize(
+ enum ufs_crypto_alg ufs_crypto_alg,
+ enum ufs_crypto_key_size key_size)
+{
+ /*
+ * This is currently the only mode that UFS and blk-crypto both support.
+ */
+ if (ufs_crypto_alg == UFS_CRYPTO_ALG_AES_XTS &&
+ key_size == UFS_CRYPTO_KEY_SIZE_256)
+ return BLK_ENCRYPTION_MODE_AES_256_XTS;
+
+ return BLK_ENCRYPTION_MODE_INVALID;
+}
+
+static int ufshcd_hba_init_crypto_qti_spec(struct ufs_hba *hba,
+ const struct keyslot_mgmt_ll_ops *ksm_ops)
+{
+ int cap_idx = 0;
+ int err = 0;
+ unsigned int crypto_modes_supported[BLK_ENCRYPTION_MODE_MAX];
+ enum blk_crypto_mode_num blk_mode_num;
+
+ /* Default to disabling crypto */
+ hba->caps &= ~UFSHCD_CAP_CRYPTO;
+
+ if (!(hba->capabilities & MASK_CRYPTO_SUPPORT)) {
+ err = -ENODEV;
+ goto out;
+ }
+
+ /*
+	 * The Crypto Capabilities register should never be 0, because
+	 * config_array_ptr is always > 04h. So a value of 0 is used to
+	 * indicate that crypto init failed and crypto can't be enabled.
+ */
+ hba->crypto_capabilities.reg_val =
+ cpu_to_le32(ufshcd_readl(hba, REG_UFS_CCAP));
+ hba->crypto_cfg_register =
+ (u32)hba->crypto_capabilities.config_array_ptr * 0x100;
+ hba->crypto_cap_array =
+ devm_kcalloc(hba->dev,
+ hba->crypto_capabilities.num_crypto_cap,
+ sizeof(hba->crypto_cap_array[0]),
+ GFP_KERNEL);
+ if (!hba->crypto_cap_array) {
+ err = -ENOMEM;
+ goto out;
+ }
+
+ memset(crypto_modes_supported, 0, sizeof(crypto_modes_supported));
+ /*
+ * Store all the capabilities now so that we don't need to repeatedly
+ * access the device each time we want to know its capabilities
+ */
+ for (cap_idx = 0; cap_idx < hba->crypto_capabilities.num_crypto_cap;
+ cap_idx++) {
+ hba->crypto_cap_array[cap_idx].reg_val =
+ cpu_to_le32(ufshcd_readl(hba,
+ REG_UFS_CRYPTOCAP +
+ cap_idx * sizeof(__le32)));
+ blk_mode_num = ufshcd_blk_crypto_qti_mode_num_for_alg_dusize(
+ hba->crypto_cap_array[cap_idx].algorithm_id,
+ hba->crypto_cap_array[cap_idx].key_size);
+ if (blk_mode_num == BLK_ENCRYPTION_MODE_INVALID)
+ continue;
+ crypto_modes_supported[blk_mode_num] |=
+ hba->crypto_cap_array[cap_idx].sdus_mask * 512;
+ }
+
+ hba->ksm = keyslot_manager_create(ufshcd_num_keyslots(hba), ksm_ops,
+ crypto_modes_supported, hba);
+
+ if (!hba->ksm) {
+ err = -ENOMEM;
+ goto out;
+ }
+ pr_debug("%s: keyslot manager created\n", __func__);
+
+ return 0;
+
+out:
+ /* Indicate that init failed by setting crypto_capabilities to 0 */
+ hba->crypto_capabilities.reg_val = 0;
+ return err;
+}
+
+int ufshcd_crypto_qti_init_crypto(struct ufs_hba *hba,
+ const struct keyslot_mgmt_ll_ops *ksm_ops)
+{
+ int err = 0;
+ struct platform_device *pdev = to_platform_device(hba->dev);
+ void __iomem *mmio_base;
+ struct resource *mem_res;
+
+ mem_res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
+ "ufs_ice");
+ mmio_base = devm_ioremap_resource(hba->dev, mem_res);
+ if (IS_ERR(mmio_base)) {
+ pr_err("%s: Unable to get ufs_crypto mmio base\n", __func__);
+ return PTR_ERR(mmio_base);
+ }
+
+ err = ufshcd_hba_init_crypto_qti_spec(hba, &ufshcd_crypto_qti_ksm_ops);
+ if (err) {
+ pr_err("%s: Error initiating crypto capabilities, err %d\n",
+ __func__, err);
+ return err;
+ }
+
+ err = crypto_qti_init_crypto(hba->dev,
+ mmio_base, (void **)&hba->crypto_vops->priv);
+ if (err) {
+ pr_err("%s: Error initiating crypto, err %d\n",
+ __func__, err);
+ }
+ return err;
+}
+
+int ufshcd_crypto_qti_debug(struct ufs_hba *hba)
+{
+ return crypto_qti_debug(hba->crypto_vops->priv);
+}
+
+void ufshcd_crypto_qti_set_vops(struct ufs_hba *hba)
+{
+ return ufshcd_crypto_set_vops(hba, &ufshcd_crypto_qti_variant_ops);
+}
+
+int ufshcd_crypto_qti_resume(struct ufs_hba *hba,
+ enum ufs_pm_op pm_op)
+{
+ return crypto_qti_resume(hba->crypto_vops->priv);
+}
diff --git a/drivers/scsi/ufs/ufshcd-crypto-qti.h b/drivers/scsi/ufs/ufshcd-crypto-qti.h
new file mode 100644
index 0000000..5c1b2ae
--- /dev/null
+++ b/drivers/scsi/ufs/ufshcd-crypto-qti.h
@@ -0,0 +1,43 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2020, The Linux Foundation. All rights reserved.
+ */
+
+#ifndef _UFSHCD_CRYPTO_QTI_H
+#define _UFSHCD_CRYPTO_QTI_H
+
+#include "ufshcd.h"
+#include "ufshcd-crypto.h"
+
+void ufshcd_crypto_qti_enable(struct ufs_hba *hba);
+
+void ufshcd_crypto_qti_disable(struct ufs_hba *hba);
+
+int ufshcd_crypto_qti_init_crypto(struct ufs_hba *hba,
+ const struct keyslot_mgmt_ll_ops *ksm_ops);
+
+void ufshcd_crypto_qti_setup_rq_keyslot_manager(struct ufs_hba *hba,
+ struct request_queue *q);
+
+void ufshcd_crypto_qti_destroy_rq_keyslot_manager(struct ufs_hba *hba,
+ struct request_queue *q);
+
+int ufshcd_crypto_qti_prepare_lrbp_crypto(struct ufs_hba *hba,
+ struct scsi_cmnd *cmd, struct ufshcd_lrb *lrbp);
+
+int ufshcd_crypto_qti_complete_lrbp_crypto(struct ufs_hba *hba,
+ struct scsi_cmnd *cmd, struct ufshcd_lrb *lrbp);
+
+int ufshcd_crypto_qti_debug(struct ufs_hba *hba);
+
+int ufshcd_crypto_qti_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op);
+
+int ufshcd_crypto_qti_resume(struct ufs_hba *hba, enum ufs_pm_op pm_op);
+
+#ifdef CONFIG_SCSI_UFS_CRYPTO_QTI
+void ufshcd_crypto_qti_set_vops(struct ufs_hba *hba);
+#else
+static inline void ufshcd_crypto_qti_set_vops(struct ufs_hba *hba)
+{}
+#endif /* CONFIG_SCSI_UFS_CRYPTO_QTI */
+#endif /* _UFSHCD_CRYPTO_QTI_H */
diff --git a/drivers/scsi/ufs/ufshcd-crypto.c b/drivers/scsi/ufs/ufshcd-crypto.c
new file mode 100644
index 0000000..a72b1ca
--- /dev/null
+++ b/drivers/scsi/ufs/ufshcd-crypto.c
@@ -0,0 +1,499 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2019 Google LLC
+ */
+
+#include <linux/keyslot-manager.h>
+#include "ufshcd.h"
+#include "ufshcd-crypto.h"
+
+static bool ufshcd_cap_idx_valid(struct ufs_hba *hba, unsigned int cap_idx)
+{
+ return cap_idx < hba->crypto_capabilities.num_crypto_cap;
+}
+
+static u8 get_data_unit_size_mask(unsigned int data_unit_size)
+{
+ if (data_unit_size < 512 || data_unit_size > 65536 ||
+ !is_power_of_2(data_unit_size))
+ return 0;
+
+ return data_unit_size / 512;
+}
+
+static size_t get_keysize_bytes(enum ufs_crypto_key_size size)
+{
+ switch (size) {
+ case UFS_CRYPTO_KEY_SIZE_128:
+ return 16;
+ case UFS_CRYPTO_KEY_SIZE_192:
+ return 24;
+ case UFS_CRYPTO_KEY_SIZE_256:
+ return 32;
+ case UFS_CRYPTO_KEY_SIZE_512:
+ return 64;
+ default:
+ return 0;
+ }
+}
+
+int ufshcd_crypto_cap_find(struct ufs_hba *hba,
+ enum blk_crypto_mode_num crypto_mode,
+ unsigned int data_unit_size)
+{
+ enum ufs_crypto_alg ufs_alg;
+ u8 data_unit_mask;
+ int cap_idx;
+ enum ufs_crypto_key_size ufs_key_size;
+ union ufs_crypto_cap_entry *ccap_array = hba->crypto_cap_array;
+
+ if (!ufshcd_hba_is_crypto_supported(hba))
+ return -EINVAL;
+
+ switch (crypto_mode) {
+ case BLK_ENCRYPTION_MODE_AES_256_XTS:
+ ufs_alg = UFS_CRYPTO_ALG_AES_XTS;
+ ufs_key_size = UFS_CRYPTO_KEY_SIZE_256;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ data_unit_mask = get_data_unit_size_mask(data_unit_size);
+
+ for (cap_idx = 0; cap_idx < hba->crypto_capabilities.num_crypto_cap;
+ cap_idx++) {
+ if (ccap_array[cap_idx].algorithm_id == ufs_alg &&
+ (ccap_array[cap_idx].sdus_mask & data_unit_mask) &&
+ ccap_array[cap_idx].key_size == ufs_key_size)
+ return cap_idx;
+ }
+
+ return -EINVAL;
+}
+EXPORT_SYMBOL(ufshcd_crypto_cap_find);
+
+/**
+ * ufshcd_crypto_cfg_entry_write_key - Write a key into a crypto_cfg_entry
+ *
+ * Writes the key with the appropriate format - for AES_XTS,
+ * the first half of the key is copied as is, the second half is
+ * copied with an offset halfway into the cfg->crypto_key array.
+ * For the other supported crypto algs, the key is just copied.
+ *
+ * @cfg: The crypto config to write to
+ * @key: The key to write
+ * @cap: The crypto capability (which specifies the crypto alg and key size)
+ *
+ * Returns 0 on success, or -EINVAL
+ */
+static int ufshcd_crypto_cfg_entry_write_key(union ufs_crypto_cfg_entry *cfg,
+ const u8 *key,
+ union ufs_crypto_cap_entry cap)
+{
+ size_t key_size_bytes = get_keysize_bytes(cap.key_size);
+
+ if (key_size_bytes == 0)
+ return -EINVAL;
+
+ switch (cap.algorithm_id) {
+ case UFS_CRYPTO_ALG_AES_XTS:
+ key_size_bytes *= 2;
+ if (key_size_bytes > UFS_CRYPTO_KEY_MAX_SIZE)
+ return -EINVAL;
+
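+		/*
+		 * For example (illustrative), with a 64-byte AES-256-XTS key:
+		 * bytes 0..31 land at crypto_key[0] and bytes 32..63 at
+		 * crypto_key[UFS_CRYPTO_KEY_MAX_SIZE / 2].
+		 */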
+ memcpy(cfg->crypto_key, key, key_size_bytes/2);
+ memcpy(cfg->crypto_key + UFS_CRYPTO_KEY_MAX_SIZE/2,
+ key + key_size_bytes/2, key_size_bytes/2);
+ return 0;
+ case UFS_CRYPTO_ALG_BITLOCKER_AES_CBC:
+ /* fall through */
+ case UFS_CRYPTO_ALG_AES_ECB:
+ /* fall through */
+ case UFS_CRYPTO_ALG_ESSIV_AES_CBC:
+ memcpy(cfg->crypto_key, key, key_size_bytes);
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+static int ufshcd_program_key(struct ufs_hba *hba,
+ const union ufs_crypto_cfg_entry *cfg, int slot)
+{
+ int i;
+ u32 slot_offset = hba->crypto_cfg_register + slot * sizeof(*cfg);
+ int err;
+
+ pm_runtime_get_sync(hba->dev);
+ ufshcd_hold(hba, false);
+
+ if (hba->var->vops->program_key) {
+ err = hba->var->vops->program_key(hba, cfg, slot);
+ goto out;
+ }
+
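+	/*
+	 * Otherwise, fall back to the UFSHCI register sequence: dword 16 of
+	 * the config holds CFGE, so it is cleared first and rewritten last so
+	 * that the slot only becomes active once the full key and config are
+	 * in place.
+	 */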
+	/* Clear dword 16 */
+ ufshcd_writel(hba, 0, slot_offset + 16 * sizeof(cfg->reg_val[0]));
+ /* Ensure that CFGE is cleared before programming the key */
+ wmb();
+ for (i = 0; i < 16; i++) {
+ ufshcd_writel(hba, le32_to_cpu(cfg->reg_val[i]),
+ slot_offset + i * sizeof(cfg->reg_val[0]));
+ /* Spec says each dword in key must be written sequentially */
+ wmb();
+ }
+ /* Write dword 17 */
+ ufshcd_writel(hba, le32_to_cpu(cfg->reg_val[17]),
+ slot_offset + 17 * sizeof(cfg->reg_val[0]));
+ /* Dword 16 must be written last */
+ wmb();
+ /* Write dword 16 */
+ ufshcd_writel(hba, le32_to_cpu(cfg->reg_val[16]),
+ slot_offset + 16 * sizeof(cfg->reg_val[0]));
+ wmb();
+ err = 0;
+out:
+ ufshcd_release(hba, false);
+ pm_runtime_put_sync(hba->dev);
+ return err;
+}
+
+static void ufshcd_clear_keyslot(struct ufs_hba *hba, int slot)
+{
+ union ufs_crypto_cfg_entry cfg = { {0} };
+ int err;
+
+ err = ufshcd_program_key(hba, &cfg, slot);
+ WARN_ON_ONCE(err);
+}
+
+/* Clear all keyslots at driver init time */
+static void ufshcd_clear_all_keyslots(struct ufs_hba *hba)
+{
+ int slot;
+
+ for (slot = 0; slot < ufshcd_num_keyslots(hba); slot++)
+ ufshcd_clear_keyslot(hba, slot);
+}
+
+static int ufshcd_crypto_keyslot_program(struct keyslot_manager *ksm,
+ const struct blk_crypto_key *key,
+ unsigned int slot)
+{
+ struct ufs_hba *hba = keyslot_manager_private(ksm);
+ int err = 0;
+ u8 data_unit_mask;
+ union ufs_crypto_cfg_entry cfg;
+ int cap_idx;
+
+ cap_idx = ufshcd_crypto_cap_find(hba, key->crypto_mode,
+ key->data_unit_size);
+
+ if (!ufshcd_is_crypto_enabled(hba) ||
+ !ufshcd_keyslot_valid(hba, slot) ||
+ !ufshcd_cap_idx_valid(hba, cap_idx))
+ return -EINVAL;
+
+ data_unit_mask = get_data_unit_size_mask(key->data_unit_size);
+
+ if (!(data_unit_mask & hba->crypto_cap_array[cap_idx].sdus_mask))
+ return -EINVAL;
+
+ memset(&cfg, 0, sizeof(cfg));
+ cfg.data_unit_size = data_unit_mask;
+ cfg.crypto_cap_idx = cap_idx;
+ cfg.config_enable |= UFS_CRYPTO_CONFIGURATION_ENABLE;
+
+ err = ufshcd_crypto_cfg_entry_write_key(&cfg, key->raw,
+ hba->crypto_cap_array[cap_idx]);
+ if (err)
+ return err;
+
+ err = ufshcd_program_key(hba, &cfg, slot);
+
+ memzero_explicit(&cfg, sizeof(cfg));
+
+ return err;
+}
+
+static int ufshcd_crypto_keyslot_evict(struct keyslot_manager *ksm,
+ const struct blk_crypto_key *key,
+ unsigned int slot)
+{
+ struct ufs_hba *hba = keyslot_manager_private(ksm);
+
+ if (!ufshcd_is_crypto_enabled(hba) ||
+ !ufshcd_keyslot_valid(hba, slot))
+ return -EINVAL;
+
+ /*
+ * Clear the crypto cfg on the device. Clearing CFGE
+ * might not be sufficient, so just clear the entire cfg.
+ */
+ ufshcd_clear_keyslot(hba, slot);
+
+ return 0;
+}
+
+/* Functions implementing UFSHCI v2.1 specification behaviour */
+void ufshcd_crypto_enable_spec(struct ufs_hba *hba)
+{
+ if (!ufshcd_hba_is_crypto_supported(hba))
+ return;
+
+ hba->caps |= UFSHCD_CAP_CRYPTO;
+
+ /* Reset might clear all keys, so reprogram all the keys. */
+ keyslot_manager_reprogram_all_keys(hba->ksm);
+}
+EXPORT_SYMBOL_GPL(ufshcd_crypto_enable_spec);
+
+void ufshcd_crypto_disable_spec(struct ufs_hba *hba)
+{
+ hba->caps &= ~UFSHCD_CAP_CRYPTO;
+}
+EXPORT_SYMBOL_GPL(ufshcd_crypto_disable_spec);
+
+static const struct keyslot_mgmt_ll_ops ufshcd_ksm_ops = {
+ .keyslot_program = ufshcd_crypto_keyslot_program,
+ .keyslot_evict = ufshcd_crypto_keyslot_evict,
+};
+
+enum blk_crypto_mode_num ufshcd_blk_crypto_mode_num_for_alg_dusize(
+ enum ufs_crypto_alg ufs_crypto_alg,
+ enum ufs_crypto_key_size key_size)
+{
+ /*
+ * This is currently the only mode that UFS and blk-crypto both support.
+ */
+ if (ufs_crypto_alg == UFS_CRYPTO_ALG_AES_XTS &&
+ key_size == UFS_CRYPTO_KEY_SIZE_256)
+ return BLK_ENCRYPTION_MODE_AES_256_XTS;
+
+ return BLK_ENCRYPTION_MODE_INVALID;
+}
+
+/**
+ * ufshcd_hba_init_crypto_spec - Read crypto capabilities, init crypto fields in hba
+ * @hba: Per adapter instance
+ *
+ * Return: 0 if crypto was initialized or is not supported, else a -errno value.
+ */
+int ufshcd_hba_init_crypto_spec(struct ufs_hba *hba,
+ const struct keyslot_mgmt_ll_ops *ksm_ops)
+{
+ int cap_idx = 0;
+ int err = 0;
+ unsigned int crypto_modes_supported[BLK_ENCRYPTION_MODE_MAX];
+ enum blk_crypto_mode_num blk_mode_num;
+
+ /* Default to disabling crypto */
+ hba->caps &= ~UFSHCD_CAP_CRYPTO;
+
+ /* Return 0 if crypto support isn't present */
+ if (!(hba->capabilities & MASK_CRYPTO_SUPPORT) ||
+ (hba->quirks & UFSHCD_QUIRK_BROKEN_CRYPTO))
+ goto out;
+
+ /*
+	 * The Crypto Capabilities register should never be 0, because
+	 * config_array_ptr is always > 04h. So a value of 0 is used to
+	 * indicate that crypto init failed and crypto can't be enabled.
+ */
+ hba->crypto_capabilities.reg_val =
+ cpu_to_le32(ufshcd_readl(hba, REG_UFS_CCAP));
+ hba->crypto_cfg_register =
+ (u32)hba->crypto_capabilities.config_array_ptr * 0x100;
+ hba->crypto_cap_array =
+ devm_kcalloc(hba->dev,
+ hba->crypto_capabilities.num_crypto_cap,
+ sizeof(hba->crypto_cap_array[0]),
+ GFP_KERNEL);
+ if (!hba->crypto_cap_array) {
+ err = -ENOMEM;
+ goto out;
+ }
+
+ memset(crypto_modes_supported, 0, sizeof(crypto_modes_supported));
+ /*
+ * Store all the capabilities now so that we don't need to repeatedly
+ * access the device each time we want to know its capabilities
+ */
+ for (cap_idx = 0; cap_idx < hba->crypto_capabilities.num_crypto_cap;
+ cap_idx++) {
+ hba->crypto_cap_array[cap_idx].reg_val =
+ cpu_to_le32(ufshcd_readl(hba,
+ REG_UFS_CRYPTOCAP +
+ cap_idx * sizeof(__le32)));
+ blk_mode_num = ufshcd_blk_crypto_mode_num_for_alg_dusize(
+ hba->crypto_cap_array[cap_idx].algorithm_id,
+ hba->crypto_cap_array[cap_idx].key_size);
+ if (blk_mode_num == BLK_ENCRYPTION_MODE_INVALID)
+ continue;
+ crypto_modes_supported[blk_mode_num] |=
+ hba->crypto_cap_array[cap_idx].sdus_mask * 512;
+ }
+
+ ufshcd_clear_all_keyslots(hba);
+
+ hba->ksm = keyslot_manager_create(ufshcd_num_keyslots(hba), ksm_ops,
+ crypto_modes_supported, hba);
+
+ if (!hba->ksm) {
+ err = -ENOMEM;
+ goto out_free_caps;
+ }
+
+ return 0;
+
+out_free_caps:
+ devm_kfree(hba->dev, hba->crypto_cap_array);
+out:
+ /* Indicate that init failed by setting crypto_capabilities to 0 */
+ hba->crypto_capabilities.reg_val = 0;
+ return err;
+}
+EXPORT_SYMBOL_GPL(ufshcd_hba_init_crypto_spec);
+
+void ufshcd_crypto_setup_rq_keyslot_manager_spec(struct ufs_hba *hba,
+ struct request_queue *q)
+{
+ if (!ufshcd_hba_is_crypto_supported(hba) || !q)
+ return;
+
+ q->ksm = hba->ksm;
+}
+EXPORT_SYMBOL_GPL(ufshcd_crypto_setup_rq_keyslot_manager_spec);
+
+void ufshcd_crypto_destroy_rq_keyslot_manager_spec(struct ufs_hba *hba,
+ struct request_queue *q)
+{
+ keyslot_manager_destroy(hba->ksm);
+}
+EXPORT_SYMBOL_GPL(ufshcd_crypto_destroy_rq_keyslot_manager_spec);
+
+int ufshcd_prepare_lrbp_crypto_spec(struct ufs_hba *hba,
+ struct scsi_cmnd *cmd,
+ struct ufshcd_lrb *lrbp)
+{
+ struct bio_crypt_ctx *bc;
+
+ if (!bio_crypt_should_process(cmd->request)) {
+ lrbp->crypto_enable = false;
+ return 0;
+ }
+ bc = cmd->request->bio->bi_crypt_context;
+
+ if (WARN_ON(!ufshcd_is_crypto_enabled(hba))) {
+ /*
+ * Upper layer asked us to do inline encryption
+ * but that isn't enabled, so we fail this request.
+ */
+ return -EINVAL;
+ }
+ if (!ufshcd_keyslot_valid(hba, bc->bc_keyslot))
+ return -EINVAL;
+
+ lrbp->crypto_enable = true;
+ lrbp->crypto_key_slot = bc->bc_keyslot;
+ lrbp->data_unit_num = bc->bc_dun[0];
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(ufshcd_prepare_lrbp_crypto_spec);
+
+/* Crypto Variant Ops Support */
+
+void ufshcd_crypto_enable(struct ufs_hba *hba)
+{
+ if (hba->crypto_vops && hba->crypto_vops->enable)
+ return hba->crypto_vops->enable(hba);
+
+ return ufshcd_crypto_enable_spec(hba);
+}
+
+void ufshcd_crypto_disable(struct ufs_hba *hba)
+{
+ if (hba->crypto_vops && hba->crypto_vops->disable)
+ return hba->crypto_vops->disable(hba);
+
+ return ufshcd_crypto_disable_spec(hba);
+}
+
+int ufshcd_hba_init_crypto(struct ufs_hba *hba)
+{
+ if (hba->crypto_vops && hba->crypto_vops->hba_init_crypto)
+ return hba->crypto_vops->hba_init_crypto(hba,
+ &ufshcd_ksm_ops);
+
+ return ufshcd_hba_init_crypto_spec(hba, &ufshcd_ksm_ops);
+}
+
+void ufshcd_crypto_setup_rq_keyslot_manager(struct ufs_hba *hba,
+ struct request_queue *q)
+{
+ if (hba->crypto_vops && hba->crypto_vops->setup_rq_keyslot_manager)
+ return hba->crypto_vops->setup_rq_keyslot_manager(hba, q);
+
+ return ufshcd_crypto_setup_rq_keyslot_manager_spec(hba, q);
+}
+
+void ufshcd_crypto_destroy_rq_keyslot_manager(struct ufs_hba *hba,
+ struct request_queue *q)
+{
+ if (hba->crypto_vops && hba->crypto_vops->destroy_rq_keyslot_manager)
+ return hba->crypto_vops->destroy_rq_keyslot_manager(hba, q);
+
+ return ufshcd_crypto_destroy_rq_keyslot_manager_spec(hba, q);
+}
+
+int ufshcd_prepare_lrbp_crypto(struct ufs_hba *hba,
+ struct scsi_cmnd *cmd,
+ struct ufshcd_lrb *lrbp)
+{
+ if (hba->crypto_vops && hba->crypto_vops->prepare_lrbp_crypto)
+ return hba->crypto_vops->prepare_lrbp_crypto(hba, cmd, lrbp);
+
+ return ufshcd_prepare_lrbp_crypto_spec(hba, cmd, lrbp);
+}
+
+int ufshcd_complete_lrbp_crypto(struct ufs_hba *hba,
+ struct scsi_cmnd *cmd,
+ struct ufshcd_lrb *lrbp)
+{
+ if (hba->crypto_vops && hba->crypto_vops->complete_lrbp_crypto)
+ return hba->crypto_vops->complete_lrbp_crypto(hba, cmd, lrbp);
+
+ return 0;
+}
+
+void ufshcd_crypto_debug(struct ufs_hba *hba)
+{
+ if (hba->crypto_vops && hba->crypto_vops->debug)
+ hba->crypto_vops->debug(hba);
+}
+
+int ufshcd_crypto_suspend(struct ufs_hba *hba,
+ enum ufs_pm_op pm_op)
+{
+ if (hba->crypto_vops && hba->crypto_vops->suspend)
+ return hba->crypto_vops->suspend(hba, pm_op);
+
+ return 0;
+}
+
+int ufshcd_crypto_resume(struct ufs_hba *hba,
+ enum ufs_pm_op pm_op)
+{
+ if (hba->crypto_vops && hba->crypto_vops->resume)
+ return hba->crypto_vops->resume(hba, pm_op);
+
+ return 0;
+}
+
+void ufshcd_crypto_set_vops(struct ufs_hba *hba,
+ struct ufs_hba_crypto_variant_ops *crypto_vops)
+{
+ hba->crypto_vops = crypto_vops;
+}
diff --git a/drivers/scsi/ufs/ufshcd-crypto.h b/drivers/scsi/ufs/ufshcd-crypto.h
new file mode 100644
index 0000000..95f37c9
--- /dev/null
+++ b/drivers/scsi/ufs/ufshcd-crypto.h
@@ -0,0 +1,167 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2019 Google LLC
+ */
+
+#ifndef _UFSHCD_CRYPTO_H
+#define _UFSHCD_CRYPTO_H
+
+#ifdef CONFIG_SCSI_UFS_CRYPTO
+#include <linux/keyslot-manager.h>
+#include "ufshcd.h"
+#include "ufshci.h"
+
+static inline int ufshcd_num_keyslots(struct ufs_hba *hba)
+{
+ return hba->crypto_capabilities.config_count + 1;
+}
+
+static inline bool ufshcd_keyslot_valid(struct ufs_hba *hba, unsigned int slot)
+{
+ /*
+ * The actual number of configurations supported is (CFGC+1), so slot
+ * numbers range from 0 to config_count inclusive.
+ */
+ return slot < ufshcd_num_keyslots(hba);
+}
+
+static inline bool ufshcd_hba_is_crypto_supported(struct ufs_hba *hba)
+{
+ return hba->crypto_capabilities.reg_val != 0;
+}
+
+static inline bool ufshcd_is_crypto_enabled(struct ufs_hba *hba)
+{
+ return hba->caps & UFSHCD_CAP_CRYPTO;
+}
+
+/* Functions implementing UFSHCI v2.1 specification behaviour */
+int ufshcd_crypto_cap_find(struct ufs_hba *hba,
+ enum blk_crypto_mode_num crypto_mode,
+ unsigned int data_unit_size);
+
+int ufshcd_prepare_lrbp_crypto_spec(struct ufs_hba *hba,
+ struct scsi_cmnd *cmd,
+ struct ufshcd_lrb *lrbp);
+
+void ufshcd_crypto_enable_spec(struct ufs_hba *hba);
+
+void ufshcd_crypto_disable_spec(struct ufs_hba *hba);
+
+struct keyslot_mgmt_ll_ops;
+int ufshcd_hba_init_crypto_spec(struct ufs_hba *hba,
+ const struct keyslot_mgmt_ll_ops *ksm_ops);
+
+void ufshcd_crypto_setup_rq_keyslot_manager_spec(struct ufs_hba *hba,
+ struct request_queue *q);
+
+void ufshcd_crypto_destroy_rq_keyslot_manager_spec(struct ufs_hba *hba,
+ struct request_queue *q);
+
+static inline bool ufshcd_lrbp_crypto_enabled(struct ufshcd_lrb *lrbp)
+{
+ return lrbp->crypto_enable;
+}
+
+/* Crypto Variant Ops Support */
+void ufshcd_crypto_enable(struct ufs_hba *hba);
+
+void ufshcd_crypto_disable(struct ufs_hba *hba);
+
+int ufshcd_hba_init_crypto(struct ufs_hba *hba);
+
+void ufshcd_crypto_setup_rq_keyslot_manager(struct ufs_hba *hba,
+ struct request_queue *q);
+
+void ufshcd_crypto_destroy_rq_keyslot_manager(struct ufs_hba *hba,
+ struct request_queue *q);
+
+int ufshcd_prepare_lrbp_crypto(struct ufs_hba *hba,
+ struct scsi_cmnd *cmd,
+ struct ufshcd_lrb *lrbp);
+
+int ufshcd_complete_lrbp_crypto(struct ufs_hba *hba,
+ struct scsi_cmnd *cmd,
+ struct ufshcd_lrb *lrbp);
+
+void ufshcd_crypto_debug(struct ufs_hba *hba);
+
+int ufshcd_crypto_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op);
+
+int ufshcd_crypto_resume(struct ufs_hba *hba, enum ufs_pm_op pm_op);
+
+void ufshcd_crypto_set_vops(struct ufs_hba *hba,
+ struct ufs_hba_crypto_variant_ops *crypto_vops);
+
+#else /* CONFIG_SCSI_UFS_CRYPTO */
+
+static inline bool ufshcd_keyslot_valid(struct ufs_hba *hba,
+ unsigned int slot)
+{
+ return false;
+}
+
+static inline bool ufshcd_hba_is_crypto_supported(struct ufs_hba *hba)
+{
+ return false;
+}
+
+static inline bool ufshcd_is_crypto_enabled(struct ufs_hba *hba)
+{
+ return false;
+}
+
+static inline void ufshcd_crypto_enable(struct ufs_hba *hba) { }
+
+static inline void ufshcd_crypto_disable(struct ufs_hba *hba) { }
+
+static inline int ufshcd_hba_init_crypto(struct ufs_hba *hba)
+{
+ return 0;
+}
+
+static inline void ufshcd_crypto_setup_rq_keyslot_manager(struct ufs_hba *hba,
+ struct request_queue *q) { }
+
+static inline void ufshcd_crypto_destroy_rq_keyslot_manager(struct ufs_hba *hba,
+ struct request_queue *q) { }
+
+static inline int ufshcd_prepare_lrbp_crypto(struct ufs_hba *hba,
+ struct scsi_cmnd *cmd,
+ struct ufshcd_lrb *lrbp)
+{
+ return 0;
+}
+
+static inline bool ufshcd_lrbp_crypto_enabled(struct ufshcd_lrb *lrbp)
+{
+ return false;
+}
+
+static inline int ufshcd_complete_lrbp_crypto(struct ufs_hba *hba,
+ struct scsi_cmnd *cmd,
+ struct ufshcd_lrb *lrbp)
+{
+ return 0;
+}
+
+static inline void ufshcd_crypto_debug(struct ufs_hba *hba) { }
+
+static inline int ufshcd_crypto_suspend(struct ufs_hba *hba,
+ enum ufs_pm_op pm_op)
+{
+ return 0;
+}
+
+static inline int ufshcd_crypto_resume(struct ufs_hba *hba,
+ enum ufs_pm_op pm_op)
+{
+ return 0;
+}
+
+static inline void ufshcd_crypto_set_vops(struct ufs_hba *hba,
+ struct ufs_hba_crypto_variant_ops *crypto_vops) { }
+
+#endif /* CONFIG_SCSI_UFS_CRYPTO */
+
+#endif /* _UFSHCD_CRYPTO_H */
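The dispatch helpers above (ufshcd_crypto_enable(), ufshcd_hba_init_crypto(), and friends) try a vendor hook in hba->crypto_vops first and otherwise fall back to the UFSHCI-standard *_spec() implementations. As a minimal sketch (not part of this patch; all my_variant_* names are hypothetical), a vendor variant driver could plug into this path roughly like so:

/*
 * Hypothetical sketch: a vendor variant registering its own crypto hooks.
 * Hooks left NULL fall through to the ufshcd_*_spec() implementations.
 */
static int my_variant_hba_init_crypto(struct ufs_hba *hba,
				      const struct keyslot_mgmt_ll_ops *ksm_ops)
{
	/* Vendor-specific keyslot/ICE setup would go here. */
	return ufshcd_hba_init_crypto_spec(hba, ksm_ops);
}

static struct ufs_hba_crypto_variant_ops my_variant_crypto_vops = {
	.hba_init_crypto = my_variant_hba_init_crypto,
	/* .enable, .disable, .prepare_lrbp_crypto, ... left NULL on purpose */
};

/* Called from the variant's probe path, before ufshcd_init(). */
static void my_variant_setup_crypto(struct ufs_hba *hba)
{
	ufshcd_crypto_set_vops(hba, &my_variant_crypto_vops);
}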
diff --git a/drivers/scsi/ufs/ufshcd-pltfrm.c b/drivers/scsi/ufs/ufshcd-pltfrm.c
index 5c1ce40..380cbca 100644
--- a/drivers/scsi/ufs/ufshcd-pltfrm.c
+++ b/drivers/scsi/ufs/ufshcd-pltfrm.c
@@ -195,7 +195,10 @@ static int ufshcd_populate_vreg(struct device *dev, const char *name,
goto out;
}
- vreg->min_uA = 0;
+ snprintf(prop_name, MAX_PROP_SIZE, "%s-min-microamp", name);
+ if (of_property_read_u32(np, prop_name, &vreg->min_uA))
+ vreg->min_uA = UFS_VREG_LPM_LOAD_UA;
+
if (!strcmp(name, "vcc")) {
if (of_property_read_bool(np, "vcc-supply-1p8")) {
vreg->min_uV = UFS_VREG_VCC_1P8_MIN_UV;
diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index 1791bce..b8a11a1 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -204,6 +204,7 @@ static void ufshcd_update_uic_error_cnt(struct ufs_hba *hba, u32 reg, int type)
break;
}
}
+#include "ufshcd-crypto.h"
#define CREATE_TRACE_POINTS
#include <trace/events/ufs.h>
@@ -424,7 +425,7 @@ static struct ufs_dev_fix ufs_fixups[] = {
/* UFS cards deviations table */
UFS_FIX(UFS_VENDOR_SAMSUNG, UFS_ANY_MODEL,
UFS_DEVICE_QUIRK_DELAY_BEFORE_LPM),
- UFS_FIX(UFS_VENDOR_SAMSUNG, UFS_ANY_MODEL,
+ UFS_FIX(UFS_ANY_VENDOR, UFS_ANY_MODEL,
UFS_DEVICE_NO_FASTAUTO),
UFS_FIX(UFS_VENDOR_SAMSUNG, UFS_ANY_MODEL,
UFS_DEVICE_QUIRK_HOST_PA_TACTIVATE),
@@ -905,6 +906,8 @@ static inline void __ufshcd_print_host_regs(struct ufs_hba *hba, bool no_sleep)
static void ufshcd_print_host_regs(struct ufs_hba *hba)
{
__ufshcd_print_host_regs(hba, false);
+
+ ufshcd_crypto_debug(hba);
}
static
@@ -1412,8 +1415,11 @@ static inline void ufshcd_hba_start(struct ufs_hba *hba)
{
u32 val = CONTROLLER_ENABLE;
- if (ufshcd_is_crypto_supported(hba))
+ if (ufshcd_hba_is_crypto_supported(hba)) {
+ ufshcd_crypto_enable(hba);
val |= CRYPTO_GENERAL_ENABLE;
+ }
+
ufshcd_writel(hba, val, REG_CONTROLLER_ENABLE);
}
@@ -2271,6 +2277,8 @@ static void ufshcd_gate_work(struct work_struct *work)
unsigned long flags;
spin_lock_irqsave(hba->host->host_lock, flags);
+ if (hba->clk_gating.state == CLKS_OFF)
+ goto rel_lock;
/*
* In case you are here to cancel this work the gating state
* would be marked as REQ_CLKS_ON. In this case save time by
@@ -3364,41 +3372,6 @@ static void ufshcd_disable_intr(struct ufs_hba *hba, u32 intrs)
ufshcd_writel(hba, set, REG_INTERRUPT_ENABLE);
}
-static int ufshcd_prepare_crypto_utrd(struct ufs_hba *hba,
- struct ufshcd_lrb *lrbp)
-{
- struct utp_transfer_req_desc *req_desc = lrbp->utr_descriptor_ptr;
- u8 cc_index = 0;
- bool enable = false;
- u64 dun = 0;
- int ret;
-
- /*
- * Call vendor specific code to get crypto info for this request:
- * enable, crypto config. index, DUN.
- * If bypass is set, don't bother setting the other fields.
- */
- ret = ufshcd_vops_crypto_req_setup(hba, lrbp, &cc_index, &enable, &dun);
- if (ret) {
- if (ret != -EAGAIN) {
- dev_err(hba->dev,
- "%s: failed to setup crypto request (%d)\n",
- __func__, ret);
- }
-
- return ret;
- }
-
- if (!enable)
- goto out;
-
- req_desc->header.dword_0 |= cc_index | UTRD_CRYPTO_ENABLE;
- req_desc->header.dword_1 = (u32)(dun & 0xFFFFFFFF);
- req_desc->header.dword_3 = (u32)((dun >> 32) & 0xFFFFFFFF);
-out:
- return 0;
-}
-
/**
* ufshcd_prepare_req_desc_hdr() - Fills the requests header
* descriptor according to request
@@ -3432,9 +3405,23 @@ static int ufshcd_prepare_req_desc_hdr(struct ufs_hba *hba,
dword_0 |= UTP_REQ_DESC_INT_CMD;
/* Transfer request descriptor header fields */
+ if (ufshcd_lrbp_crypto_enabled(lrbp)) {
+#if IS_ENABLED(CONFIG_SCSI_UFS_CRYPTO)
+ dword_0 |= UTP_REQ_DESC_CRYPTO_ENABLE_CMD;
+ dword_0 |= lrbp->crypto_key_slot;
+ req_desc->header.dword_1 =
+ cpu_to_le32(lower_32_bits(lrbp->data_unit_num));
+ req_desc->header.dword_3 =
+ cpu_to_le32(upper_32_bits(lrbp->data_unit_num));
+#endif /* CONFIG_SCSI_UFS_CRYPTO */
+ } else {
+ /* dword_1 and dword_3 are reserved, hence they are set to 0 */
+ req_desc->header.dword_1 = 0;
+ req_desc->header.dword_3 = 0;
+ }
+
req_desc->header.dword_0 = cpu_to_le32(dword_0);
- /* dword_1 is reserved, hence it is set to 0 */
- req_desc->header.dword_1 = 0;
+
/*
* assigning invalid value for command status. Controller
* updates OCS on command completion, with the command
@@ -3442,14 +3429,9 @@ static int ufshcd_prepare_req_desc_hdr(struct ufs_hba *hba,
*/
req_desc->header.dword_2 =
cpu_to_le32(OCS_INVALID_COMMAND_STATUS);
- /* dword_3 is reserved, hence it is set to 0 */
- req_desc->header.dword_3 = 0;
req_desc->prd_table_length = 0;
- if (ufshcd_is_crypto_supported(hba))
- return ufshcd_prepare_crypto_utrd(hba, lrbp);
-
return 0;
}
@@ -3702,8 +3684,11 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
cmd->scsi_done(cmd);
return 0;
}
- if (err == -EAGAIN)
+ if (err == -EAGAIN) {
+ hba->ufs_stats.scsi_blk_reqs.ts = ktime_get();
+ hba->ufs_stats.scsi_blk_reqs.busy_ctx = SCALING_BUSY;
return SCSI_MLQUEUE_HOST_BUSY;
+ }
} else if (err == 1) {
has_read_lock = true;
}
@@ -3719,6 +3704,8 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
/* if error handling is in progress, return host busy */
if (ufshcd_eh_in_progress(hba)) {
err = SCSI_MLQUEUE_HOST_BUSY;
+ hba->ufs_stats.scsi_blk_reqs.ts = ktime_get();
+ hba->ufs_stats.scsi_blk_reqs.busy_ctx = EH_IN_PROGRESS;
goto out_unlock;
}
@@ -3728,6 +3715,9 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
case UFSHCD_STATE_EH_SCHEDULED:
case UFSHCD_STATE_RESET:
err = SCSI_MLQUEUE_HOST_BUSY;
+ hba->ufs_stats.scsi_blk_reqs.ts = ktime_get();
+ hba->ufs_stats.scsi_blk_reqs.busy_ctx =
+ UFS_RESET_OR_EH_SCHEDULED;
goto out_unlock;
case UFSHCD_STATE_ERROR:
set_host_byte(cmd, DID_ERROR);
@@ -3753,6 +3743,8 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
* completion.
*/
err = SCSI_MLQUEUE_HOST_BUSY;
+ hba->ufs_stats.scsi_blk_reqs.ts = ktime_get();
+ hba->ufs_stats.scsi_blk_reqs.busy_ctx = LRB_IN_USE;
goto out;
}
@@ -3760,6 +3752,8 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
err = ufshcd_hold(hba, true);
if (err) {
err = SCSI_MLQUEUE_HOST_BUSY;
+ hba->ufs_stats.scsi_blk_reqs.ts = ktime_get();
+ hba->ufs_stats.scsi_blk_reqs.busy_ctx = UFSHCD_HOLD;
clear_bit_unlock(tag, &hba->lrb_in_use);
goto out;
}
@@ -3770,6 +3764,8 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
if (err) {
clear_bit_unlock(tag, &hba->lrb_in_use);
err = SCSI_MLQUEUE_HOST_BUSY;
+ hba->ufs_stats.scsi_blk_reqs.ts = ktime_get();
+ hba->ufs_stats.scsi_blk_reqs.busy_ctx = UFSHCD_HIBERN8_HOLD;
hba->ufs_stats.clk_rel.ctx = QUEUE_CMD;
ufshcd_release(hba, true);
goto out;
@@ -3791,6 +3787,13 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
lrbp->task_tag = tag;
lrbp->lun = ufshcd_scsi_to_upiu_lun(cmd->device->lun);
lrbp->intr_cmd = !ufshcd_is_intr_aggr_allowed(hba) ? true : false;
+
+ err = ufshcd_prepare_lrbp_crypto(hba, cmd, lrbp);
+ if (err) {
+ lrbp->cmd = NULL;
+ clear_bit_unlock(tag, &hba->lrb_in_use);
+ goto out;
+ }
lrbp->req_abort_skip = false;
err = ufshcd_comp_scsi_upiu(hba, lrbp);
@@ -3816,21 +3819,6 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
goto out;
}
- err = ufshcd_vops_crypto_engine_cfg_start(hba, tag);
- if (err) {
- if (err != -EAGAIN)
- dev_err(hba->dev,
- "%s: failed to configure crypto engine %d\n",
- __func__, err);
-
- scsi_dma_unmap(lrbp->cmd);
- lrbp->cmd = NULL;
- clear_bit_unlock(tag, &hba->lrb_in_use);
- ufshcd_release_all(hba);
- ufshcd_vops_pm_qos_req_end(hba, cmd->request, true);
-
- goto out;
- }
/* Make sure descriptors are ready before ringing the doorbell */
wmb();
@@ -3847,7 +3835,6 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
clear_bit_unlock(tag, &hba->lrb_in_use);
ufshcd_release_all(hba);
ufshcd_vops_pm_qos_req_end(hba, cmd->request, true);
- ufshcd_vops_crypto_engine_cfg_end(hba, lrbp, cmd->request);
dev_err(hba->dev, "%s: failed sending command, %d\n",
__func__, err);
err = DID_ERROR;
@@ -3871,6 +3858,9 @@ static int ufshcd_compose_dev_cmd(struct ufs_hba *hba,
lrbp->task_tag = tag;
lrbp->lun = 0; /* device management cmd is not specific to any LUN */
lrbp->intr_cmd = true; /* No interrupt aggregation */
+#if IS_ENABLED(CONFIG_SCSI_UFS_CRYPTO)
+ lrbp->crypto_enable = false; /* No crypto operations */
+#endif
hba->dev_cmd.type = cmd_type;
return ufshcd_comp_devman_upiu(hba, lrbp);
@@ -5772,6 +5762,8 @@ static inline void ufshcd_hba_stop(struct ufs_hba *hba, bool can_sleep)
{
int err;
+ ufshcd_crypto_disable(hba);
+
ufshcd_writel(hba, CONTROLLER_DISABLE, REG_CONTROLLER_ENABLE);
err = ufshcd_wait_for_register(hba, REG_CONTROLLER_ENABLE,
CONTROLLER_ENABLE, CONTROLLER_DISABLE,
@@ -6208,6 +6200,8 @@ static int ufshcd_slave_configure(struct scsi_device *sdev)
sdev->autosuspend_delay = UFSHCD_AUTO_SUSPEND_DELAY_MS;
sdev->use_rpm_auto = 1;
+ ufshcd_crypto_setup_rq_keyslot_manager(hba, q);
+
return 0;
}
@@ -6218,6 +6212,7 @@ static int ufshcd_slave_configure(struct scsi_device *sdev)
static void ufshcd_slave_destroy(struct scsi_device *sdev)
{
struct ufs_hba *hba;
+ struct request_queue *q = sdev->request_queue;
hba = shost_priv(sdev->host);
/* Drop the reference as it won't be needed anymore */
@@ -6228,6 +6223,8 @@ static void ufshcd_slave_destroy(struct scsi_device *sdev)
hba->sdev_ufs_device = NULL;
spin_unlock_irqrestore(hba->host->host_lock, flags);
}
+
+ ufshcd_crypto_destroy_rq_keyslot_manager(hba, q);
}
/**
@@ -6499,9 +6496,9 @@ static void __ufshcd_transfer_req_compl(struct ufs_hba *hba,
result = ufshcd_transfer_rsp_status(hba, lrbp);
scsi_dma_unmap(cmd);
cmd->result = result;
- clear_bit_unlock(index, &hba->lrb_in_use);
lrbp->compl_time_stamp = ktime_get();
update_req_stats(hba, lrbp);
+ ufshcd_complete_lrbp_crypto(hba, cmd, lrbp);
/* Mark completed command as NULL in LRB */
lrbp->cmd = NULL;
hba->ufs_stats.clk_rel.ctx = XFR_REQ_COMPL;
@@ -6515,10 +6512,10 @@ static void __ufshcd_transfer_req_compl(struct ufs_hba *hba,
*/
ufshcd_vops_pm_qos_req_end(hba, cmd->request,
false);
- ufshcd_vops_crypto_engine_cfg_end(hba,
- lrbp, cmd->request);
}
+ clear_bit_unlock(index, &hba->lrb_in_use);
+
/* Do not touch lrbp after scsi done */
cmd->scsi_done(cmd);
} else if (lrbp->command_type == UTP_CMD_TYPE_DEV_MANAGE ||
@@ -6569,7 +6566,6 @@ void ufshcd_abort_outstanding_transfer_requests(struct ufs_hba *hba, int result)
/* Clear pending transfer requests */
ufshcd_clear_cmd(hba, index);
ufshcd_outstanding_req_clear(hba, index);
- clear_bit_unlock(index, &hba->lrb_in_use);
lrbp->compl_time_stamp = ktime_get();
update_req_stats(hba, lrbp);
/* Mark completed command as NULL in LRB */
@@ -6583,9 +6579,8 @@ void ufshcd_abort_outstanding_transfer_requests(struct ufs_hba *hba, int result)
*/
ufshcd_vops_pm_qos_req_end(hba, cmd->request,
true);
- ufshcd_vops_crypto_engine_cfg_end(hba,
- lrbp, cmd->request);
}
+ clear_bit_unlock(index, &hba->lrb_in_use);
/* Do not touch lrbp after scsi done */
cmd->scsi_done(cmd);
} else if (lrbp->command_type == UTP_CMD_TYPE_DEV_MANAGE) {
@@ -7651,8 +7646,6 @@ static irqreturn_t ufshcd_sl_intr(struct ufs_hba *hba, u32 intr_status)
ufsdbg_error_inject_dispatcher(hba,
ERR_INJECT_INTR, intr_status, &intr_status);
- ufshcd_vops_crypto_engine_get_status(hba, &hba->ce_error);
-
hba->errors = UFSHCD_ERROR_MASK & intr_status;
if (hba->errors || hba->ce_error)
retval |= ufshcd_check_errors(hba);
@@ -8130,15 +8123,6 @@ static int ufshcd_host_reset_and_restore(struct ufs_hba *hba)
goto out;
}
- if (!err) {
- err = ufshcd_vops_crypto_engine_reset(hba);
- if (err) {
- dev_err(hba->dev,
- "%s: failed to reset crypto engine %d\n",
- __func__, err);
- goto out;
- }
- }
out:
if (err)
@@ -9563,8 +9547,7 @@ static inline int ufshcd_config_vreg_lpm(struct ufs_hba *hba,
else if (vreg->unused)
return 0;
else
- return ufshcd_config_vreg_load(hba->dev, vreg,
- UFS_VREG_LPM_LOAD_UA);
+ return ufshcd_config_vreg_load(hba->dev, vreg, vreg->min_uA);
}
static inline int ufshcd_config_vreg_hpm(struct ufs_hba *hba,
@@ -10337,6 +10320,10 @@ static int ufshcd_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op)
req_link_state = UIC_LINK_OFF_STATE;
}
+ ret = ufshcd_crypto_suspend(hba, pm_op);
+ if (ret)
+ goto out;
+
/*
* If we can't transition into any of the low power modes
* just gate the clocks.
@@ -10465,6 +10452,7 @@ static int ufshcd_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op)
hba->hibern8_on_idle.is_suspended = false;
hba->clk_gating.is_suspended = false;
ufshcd_release_all(hba);
+ ufshcd_crypto_resume(hba, pm_op);
out:
hba->pm_op_in_progress = 0;
@@ -10488,9 +10476,11 @@ static int ufshcd_resume(struct ufs_hba *hba, enum ufs_pm_op pm_op)
{
int ret;
enum uic_link_state old_link_state;
+ enum ufs_dev_pwr_mode old_pwr_mode;
hba->pm_op_in_progress = 1;
old_link_state = hba->uic_link_state;
+ old_pwr_mode = hba->curr_dev_pwr_mode;
ufshcd_hba_vreg_set_hpm(hba);
/* Make sure clocks are enabled before accessing controller */
@@ -10543,10 +10533,26 @@ static int ufshcd_resume(struct ufs_hba *hba, enum ufs_pm_op pm_op)
ufshcd_wb_buf_flush_disable(hba);
if (!ufshcd_is_ufs_dev_active(hba)) {
ret = ufshcd_set_dev_pwr_mode(hba, UFS_ACTIVE_PWR_MODE);
- if (ret)
- goto set_old_link_state;
+ if (ret) {
+ /*
+ * In the case of SSU timeout, err_handler must have
+ * recovered the uic link and dev state to active so
+ * we can proceed after checking the link and
+ * dev state.
+ */
+ if ((host_byte(ret) == DID_TIME_OUT) &&
+ ufshcd_is_ufs_dev_active(hba) &&
+ ufshcd_is_link_active(hba))
+ ret = 0;
+ else
+ goto set_old_link_state;
+ }
}
+ ret = ufshcd_crypto_resume(hba, pm_op);
+ if (ret)
+ goto set_old_dev_pwr_mode;
+
if (ufshcd_keep_autobkops_enabled_except_suspend(hba))
ufshcd_enable_auto_bkops(hba);
else
@@ -10569,6 +10575,9 @@ static int ufshcd_resume(struct ufs_hba *hba, enum ufs_pm_op pm_op)
ufshcd_release_all(hba);
goto out;
+set_old_dev_pwr_mode:
+ if (old_pwr_mode != hba->curr_dev_pwr_mode)
+ ufshcd_set_dev_pwr_mode(hba, old_pwr_mode);
set_old_link_state:
ufshcd_link_state_transition(hba, old_link_state, 0);
if (ufshcd_is_link_hibern8(hba) &&
@@ -11085,6 +11094,12 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
if (hba->force_g4)
hba->phy_init_g4 = true;
+ /* Init crypto */
+ err = ufshcd_hba_init_crypto(hba);
+ if (err) {
+ dev_err(hba->dev, "crypto setup failed\n");
+ goto out_remove_scsi_host;
+ }
/* Host controller enable */
err = ufshcd_hba_enable(hba);
diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
index a0b8a82..8d88011 100644
--- a/drivers/scsi/ufs/ufshcd.h
+++ b/drivers/scsi/ufs/ufshcd.h
@@ -3,7 +3,7 @@
*
* This code is based on drivers/scsi/ufs/ufshcd.h
* Copyright (C) 2011-2013 Samsung India Software Operations
- * Copyright (c) 2013-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2013-2020, The Linux Foundation. All rights reserved.
*
* Authors:
* Santosh Yaraganavi <santosh.sy@samsung.com>
@@ -197,6 +197,9 @@ struct ufs_pm_lvl_states {
* @intr_cmd: Interrupt command (doesn't participate in interrupt aggregation)
* @issue_time_stamp: time stamp for debug purposes
* @compl_time_stamp: time stamp for statistics
+ * @crypto_enable: whether or not the request needs inline crypto operations
+ * @crypto_key_slot: the key slot to use for inline crypto
+ * @data_unit_num: the data unit number for the first block for inline crypto
* @req_abort_skip: skip request abort task flag
*/
struct ufshcd_lrb {
@@ -221,6 +224,11 @@ struct ufshcd_lrb {
bool intr_cmd;
ktime_t issue_time_stamp;
ktime_t compl_time_stamp;
+#if IS_ENABLED(CONFIG_SCSI_UFS_CRYPTO)
+ bool crypto_enable;
+ u8 crypto_key_slot;
+ u64 data_unit_num;
+#endif /* CONFIG_SCSI_UFS_CRYPTO */
bool req_abort_skip;
};
@@ -302,6 +310,8 @@ struct ufs_pwr_mode_info {
struct ufs_pa_layer_attr info;
};
+union ufs_crypto_cfg_entry;
+
/**
* struct ufs_hba_variant_ops - variant specific callbacks
* @init: called when the driver is initialized
@@ -332,6 +342,7 @@ struct ufs_pwr_mode_info {
* scale down
* @set_bus_vote: called to vote for the required bus bandwidth
* @phy_initialization: used to initialize phys
+ * @program_key: program an inline encryption key into a keyslot
*/
struct ufs_hba_variant_ops {
int (*init)(struct ufs_hba *);
@@ -368,31 +379,8 @@ struct ufs_hba_variant_ops {
void (*add_debugfs)(struct ufs_hba *hba, struct dentry *root);
void (*remove_debugfs)(struct ufs_hba *hba);
#endif
-};
-
-/**
- * struct ufs_hba_crypto_variant_ops - variant specific crypto callbacks
- * @crypto_req_setup: retreieve the necessary cryptographic arguments to setup
- a requests's transfer descriptor.
- * @crypto_engine_cfg_start: start configuring cryptographic engine
- * according to tag
- * parameter
- * @crypto_engine_cfg_end: end configuring cryptographic engine
- * according to tag parameter
- * @crypto_engine_reset: perform reset to the cryptographic engine
- * @crypto_engine_get_status: get errors status of the cryptographic engine
- */
-struct ufs_hba_crypto_variant_ops {
- int (*crypto_req_setup)(struct ufs_hba *hba,
- struct ufshcd_lrb *lrbp, u8 *cc_index,
- bool *enable, u64 *dun);
- int (*crypto_engine_cfg_start)(struct ufs_hba *hba,
- unsigned int task_tag);
- int (*crypto_engine_cfg_end)(struct ufs_hba *hba,
- struct ufshcd_lrb *lrbp,
- struct request *req);
- int (*crypto_engine_reset)(struct ufs_hba *hba);
- int (*crypto_engine_get_status)(struct ufs_hba *hba, u32 *status);
+ int (*program_key)(struct ufs_hba *hba,
+ const union ufs_crypto_cfg_entry *cfg, int slot);
};
/**
@@ -412,10 +400,31 @@ struct ufs_hba_variant {
struct device *dev;
const char *name;
struct ufs_hba_variant_ops *vops;
- struct ufs_hba_crypto_variant_ops *crypto_vops;
struct ufs_hba_pm_qos_variant_ops *pm_qos_vops;
};
+struct keyslot_mgmt_ll_ops;
+struct ufs_hba_crypto_variant_ops {
+ void (*setup_rq_keyslot_manager)(struct ufs_hba *hba,
+ struct request_queue *q);
+ void (*destroy_rq_keyslot_manager)(struct ufs_hba *hba,
+ struct request_queue *q);
+ int (*hba_init_crypto)(struct ufs_hba *hba,
+ const struct keyslot_mgmt_ll_ops *ksm_ops);
+ void (*enable)(struct ufs_hba *hba);
+ void (*disable)(struct ufs_hba *hba);
+ int (*suspend)(struct ufs_hba *hba, enum ufs_pm_op pm_op);
+ int (*resume)(struct ufs_hba *hba, enum ufs_pm_op pm_op);
+ int (*debug)(struct ufs_hba *hba);
+ int (*prepare_lrbp_crypto)(struct ufs_hba *hba,
+ struct scsi_cmnd *cmd,
+ struct ufshcd_lrb *lrbp);
+ int (*complete_lrbp_crypto)(struct ufs_hba *hba,
+ struct scsi_cmnd *cmd,
+ struct ufshcd_lrb *lrbp);
+ void *priv;
+};
+
/* clock gating state */
enum clk_gating_state {
CLKS_OFF,
@@ -627,6 +636,20 @@ struct ufshcd_clk_ctx {
enum ufshcd_ctx ctx;
};
+enum ufshcd_scsi_host_busy_ctxt {
+ SCALING_BUSY,
+ EH_IN_PROGRESS,
+ UFS_RESET_OR_EH_SCHEDULED,
+ LRB_IN_USE,
+ UFSHCD_HOLD,
+ UFSHCD_HIBERN8_HOLD,
+};
+
+struct ufshcd_blk_ctx {
+ ktime_t ts;
+ enum ufshcd_scsi_host_busy_ctxt busy_ctx;
+};
+
/**
* struct ufs_stats - keeps usage/err statistics
* @enabled: enable tag stats for debugfs
@@ -659,6 +682,7 @@ struct ufs_stats {
ktime_t last_intr_ts;
struct ufshcd_clk_ctx clk_hold;
struct ufshcd_clk_ctx clk_rel;
+ struct ufshcd_blk_ctx scsi_blk_reqs;
u32 hibern8_exit_cnt;
ktime_t last_hibern8_exit_tstamp;
u32 power_mode_change_cnt;
@@ -769,6 +793,10 @@ struct ufshcd_cmd_log {
* @is_urgent_bkops_lvl_checked: keeps track if the urgent bkops level for
* device is known or not.
* @scsi_block_reqs_cnt: reference counting for scsi block requests
+ * @crypto_capabilities: Content of crypto capabilities register (0x100)
+ * @crypto_cap_array: Array of crypto capabilities
+ * @crypto_cfg_register: Start of the crypto cfg array
+ * @ksm: the keyslot manager tied to this hba
*/
struct ufs_hba {
void __iomem *mmio_base;
@@ -816,6 +844,7 @@ struct ufs_hba {
u32 ufs_version;
struct ufs_hba_variant *var;
void *priv;
+ const struct ufs_hba_crypto_variant_ops *crypto_vops;
unsigned int irq;
bool is_irq_enabled;
bool crash_on_err;
@@ -920,6 +949,11 @@ struct ufs_hba {
#define UFSHCD_QUIRK_DME_PEER_GET_FAST_MODE 0x20000
#define UFSHCD_QUIRK_BROKEN_AUTO_HIBERN8 0x40000
+ /*
+ * This quirk needs to be enabled if the host controller advertises
+ * inline encryption support but it doesn't work correctly.
+ */
+ #define UFSHCD_QUIRK_BROKEN_CRYPTO 0x800
unsigned int quirks; /* Deviations from standard UFSHCI spec. */
@@ -1033,6 +1067,11 @@ struct ufs_hba {
* in hibern8 then enable this cap.
*/
#define UFSHCD_CAP_POWER_COLLAPSE_DURING_HIBERN8 (1 << 7)
+ /*
+ * This capability allows the host controller driver to use the
+ * inline crypto engine, if it is present
+ */
+#define UFSHCD_CAP_CRYPTO (1 << 8)
struct devfreq *devfreq;
struct ufs_clk_scaling clk_scaling;
@@ -1060,6 +1099,14 @@ struct ufs_hba {
bool phy_init_g4;
bool force_g4;
bool wb_enabled;
+
+#ifdef CONFIG_SCSI_UFS_CRYPTO
+ /* crypto */
+ union ufs_crypto_capabilities crypto_capabilities;
+ union ufs_crypto_cap_entry *crypto_cap_array;
+ u32 crypto_cfg_register;
+ struct keyslot_manager *ksm;
+#endif /* CONFIG_SCSI_UFS_CRYPTO */
};
static inline void ufshcd_mark_shutdown_ongoing(struct ufs_hba *hba)
@@ -1514,7 +1561,8 @@ static inline void ufshcd_vops_remove_debugfs(struct ufs_hba *hba)
hba->var->vops->remove_debugfs(hba);
}
#else
-static inline void ufshcd_vops_add_debugfs(struct ufs_hba *hba, struct dentry *)
+static inline void ufshcd_vops_add_debugfs(struct ufs_hba *hba,
+ struct dentry *root)
{
}
@@ -1523,55 +1571,6 @@ static inline void ufshcd_vops_remove_debugfs(struct ufs_hba *hba)
}
#endif
-static inline int ufshcd_vops_crypto_req_setup(struct ufs_hba *hba,
- struct ufshcd_lrb *lrbp, u8 *cc_index, bool *enable, u64 *dun)
-{
- if (hba->var && hba->var->crypto_vops &&
- hba->var->crypto_vops->crypto_req_setup)
- return hba->var->crypto_vops->crypto_req_setup(hba, lrbp,
- cc_index, enable, dun);
- return 0;
-}
-
-static inline int ufshcd_vops_crypto_engine_cfg_start(struct ufs_hba *hba,
- unsigned int task_tag)
-{
- if (hba->var && hba->var->crypto_vops &&
- hba->var->crypto_vops->crypto_engine_cfg_start)
- return hba->var->crypto_vops->crypto_engine_cfg_start
- (hba, task_tag);
- return 0;
-}
-
-static inline int ufshcd_vops_crypto_engine_cfg_end(struct ufs_hba *hba,
- struct ufshcd_lrb *lrbp,
- struct request *req)
-{
- if (hba->var && hba->var->crypto_vops &&
- hba->var->crypto_vops->crypto_engine_cfg_end)
- return hba->var->crypto_vops->crypto_engine_cfg_end
- (hba, lrbp, req);
- return 0;
-}
-
-static inline int ufshcd_vops_crypto_engine_reset(struct ufs_hba *hba)
-{
- if (hba->var && hba->var->crypto_vops &&
- hba->var->crypto_vops->crypto_engine_reset)
- return hba->var->crypto_vops->crypto_engine_reset(hba);
- return 0;
-}
-
-static inline int ufshcd_vops_crypto_engine_get_status(struct ufs_hba *hba,
- u32 *status)
-{
- if (hba->var && hba->var->crypto_vops &&
- hba->var->crypto_vops->crypto_engine_get_status)
- return hba->var->crypto_vops->crypto_engine_get_status(hba,
- status);
- return 0;
-}
-
static inline void ufshcd_vops_pm_qos_req_start(struct ufs_hba *hba,
struct request *req)
{
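The UFSHCD_CAP_CRYPTO capability and UFSHCD_QUIRK_BROKEN_CRYPTO quirk documented above are the opt-in/opt-out knobs for the inline crypto engine. A hedged sketch (not part of this patch; my_variant_init() and my_variant_crypto_is_broken() are hypothetical) of how a variant ->init() callback might use them:

static int my_variant_init(struct ufs_hba *hba)
{
	/* Opt in: allow ufshcd to drive the standard UFSHCI crypto engine. */
	hba->caps |= UFSHCD_CAP_CRYPTO;

	/* Opt out on parts where the advertised engine does not work. */
	if (my_variant_crypto_is_broken(hba))	/* hypothetical helper */
		hba->quirks |= UFSHCD_QUIRK_BROKEN_CRYPTO;

	return 0;
}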
diff --git a/drivers/scsi/ufs/ufshci.h b/drivers/scsi/ufs/ufshci.h
index 8a20eb7..3bf11f7 100644
--- a/drivers/scsi/ufs/ufshci.h
+++ b/drivers/scsi/ufs/ufshci.h
@@ -363,6 +363,61 @@ enum {
INTERRUPT_MASK_ALL_VER_21 = 0x71FFF,
};
+/* CCAP - Crypto Capability 100h */
+union ufs_crypto_capabilities {
+ __le32 reg_val;
+ struct {
+ u8 num_crypto_cap;
+ u8 config_count;
+ u8 reserved;
+ u8 config_array_ptr;
+ };
+};
+
+enum ufs_crypto_key_size {
+ UFS_CRYPTO_KEY_SIZE_INVALID = 0x0,
+ UFS_CRYPTO_KEY_SIZE_128 = 0x1,
+ UFS_CRYPTO_KEY_SIZE_192 = 0x2,
+ UFS_CRYPTO_KEY_SIZE_256 = 0x3,
+ UFS_CRYPTO_KEY_SIZE_512 = 0x4,
+};
+
+enum ufs_crypto_alg {
+ UFS_CRYPTO_ALG_AES_XTS = 0x0,
+ UFS_CRYPTO_ALG_BITLOCKER_AES_CBC = 0x1,
+ UFS_CRYPTO_ALG_AES_ECB = 0x2,
+ UFS_CRYPTO_ALG_ESSIV_AES_CBC = 0x3,
+};
+
+/* x-CRYPTOCAP - Crypto Capability X */
+union ufs_crypto_cap_entry {
+ __le32 reg_val;
+ struct {
+ u8 algorithm_id;
+ u8 sdus_mask; /* Supported data unit size mask */
+ u8 key_size;
+ u8 reserved;
+ };
+};
+
+#define UFS_CRYPTO_CONFIGURATION_ENABLE (1 << 7)
+#define UFS_CRYPTO_KEY_MAX_SIZE 64
+/* x-CRYPTOCFG - Crypto Configuration X */
+union ufs_crypto_cfg_entry {
+ __le32 reg_val[32];
+ struct {
+ u8 crypto_key[UFS_CRYPTO_KEY_MAX_SIZE];
+ u8 data_unit_size;
+ u8 crypto_cap_idx;
+ u8 reserved_1;
+ u8 config_enable;
+ u8 reserved_multi_host;
+ u8 reserved_2;
+ u8 vsb[2];
+ u8 reserved_3[56];
+ };
+};
+
/*
* Request Descriptor Definitions
*/
@@ -384,6 +439,7 @@ enum {
UTP_NATIVE_UFS_COMMAND = 0x10000000,
UTP_DEVICE_MANAGEMENT_FUNCTION = 0x20000000,
UTP_REQ_DESC_INT_CMD = 0x01000000,
+ UTP_REQ_DESC_CRYPTO_ENABLE_CMD = 0x00800000,
};
/* UTP Transfer Request Data Direction (DD) */
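The union ufs_crypto_cfg_entry above mirrors the x-CRYPTOCFG register layout. As an illustrative sketch (not taken from this patch; fill_cfg_entry() is hypothetical, and cap_idx and data_unit_mask are assumed to come from ufshcd_crypto_cap_find() and the keyslot manager respectively), a program-key path might populate one entry like this before writing it out word by word:

static void fill_cfg_entry(union ufs_crypto_cfg_entry *cfg,
			   const u8 *key, unsigned int key_size,
			   u8 cap_idx, u8 data_unit_mask)
{
	memset(cfg, 0, sizeof(*cfg));
	/* key_size must not exceed UFS_CRYPTO_KEY_MAX_SIZE (64 bytes) */
	memcpy(cfg->crypto_key, key, key_size);
	cfg->data_unit_size = data_unit_mask;
	cfg->crypto_cap_idx = cap_idx;
	cfg->config_enable = UFS_CRYPTO_CONFIGURATION_ENABLE;
}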
diff --git a/drivers/soc/qcom/Kconfig b/drivers/soc/qcom/Kconfig
index ab38c20..98c3c24 100644
--- a/drivers/soc/qcom/Kconfig
+++ b/drivers/soc/qcom/Kconfig
@@ -844,6 +844,24 @@
bit in tcsr register if it is going to cross its own threshold.
If all clients are going to cross their thresholds then Cx ipeak
hw module will raise an interrupt to cDSP block to throttle cDSP fmax.
+
+config QTI_CRYPTO_COMMON
+ tristate "Enable common crypto functionality used for FBE"
+ depends on BLK_INLINE_ENCRYPTION
+ help
+ Say 'Y' to enable the common crypto implementation used by
+ storage layers such as UFS and eMMC for file-based hardware
+ encryption. This library implements an API to program and evict
+ keys using Trustzone or the Hardware Key Manager.
+
+config QTI_CRYPTO_TZ
+ tristate "Enable Trustzone to be used for FBE"
+ depends on QTI_CRYPTO_COMMON
+ help
+ Say 'Y' to route crypto requests to Trustzone when performing
+ hardware-based file encryption. Keys are then programmed and
+ managed through SCM calls to TZ, where the ICE driver configures
+ them.
endmenu
config QCOM_HYP_CORE_CTL
diff --git a/drivers/soc/qcom/Makefile b/drivers/soc/qcom/Makefile
index 530043d..4856a43 100644
--- a/drivers/soc/qcom/Makefile
+++ b/drivers/soc/qcom/Makefile
@@ -100,3 +100,5 @@
obj-$(CONFIG_QCOM_CX_IPEAK) += cx_ipeak.o
obj-$(CONFIG_QTI_L2_REUSE) += l2_reuse.o
obj-$(CONFIG_ICNSS2) += icnss2/
+obj-$(CONFIG_QTI_CRYPTO_COMMON) += crypto-qti-common.o
+obj-$(CONFIG_QTI_CRYPTO_TZ) += crypto-qti-tz.o
diff --git a/drivers/soc/qcom/crypto-qti-common.c b/drivers/soc/qcom/crypto-qti-common.c
new file mode 100644
index 0000000..97df33a
--- /dev/null
+++ b/drivers/soc/qcom/crypto-qti-common.c
@@ -0,0 +1,467 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2020, Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <linux/crypto-qti-common.h>
+#include "crypto-qti-ice-regs.h"
+#include "crypto-qti-platform.h"
+
+static int ice_check_fuse_setting(struct crypto_vops_qti_entry *ice_entry)
+{
+ uint32_t regval;
+ uint32_t major, minor;
+
+ major = (ice_entry->ice_hw_version & ICE_CORE_MAJOR_REV_MASK) >>
+ ICE_CORE_MAJOR_REV;
+ minor = (ice_entry->ice_hw_version & ICE_CORE_MINOR_REV_MASK) >>
+ ICE_CORE_MINOR_REV;
+
+ /* The fuse setting check is not supported on ICE 3.2 onwards */
+ if ((major == 0x03) && (minor >= 0x02))
+ return 0;
+ regval = ice_readl(ice_entry, ICE_REGS_FUSE_SETTING);
+ regval &= (ICE_FUSE_SETTING_MASK |
+ ICE_FORCE_HW_KEY0_SETTING_MASK |
+ ICE_FORCE_HW_KEY1_SETTING_MASK);
+
+ if (regval) {
+ pr_err("%s: error: ICE_ERROR_HW_DISABLE_FUSE_BLOWN\n",
+ __func__);
+ return -EPERM;
+ }
+ return 0;
+}
+
+static int ice_check_version(struct crypto_vops_qti_entry *ice_entry)
+{
+ uint32_t version, major, minor, step;
+
+ version = ice_readl(ice_entry, ICE_REGS_VERSION);
+ major = (version & ICE_CORE_MAJOR_REV_MASK) >> ICE_CORE_MAJOR_REV;
+ minor = (version & ICE_CORE_MINOR_REV_MASK) >> ICE_CORE_MINOR_REV;
+ step = (version & ICE_CORE_STEP_REV_MASK) >> ICE_CORE_STEP_REV;
+
+ if (major < ICE_CORE_CURRENT_MAJOR_VERSION) {
+ pr_err("%s: Unknown ICE device at %lu, rev %d.%d.%d\n",
+ __func__, (unsigned long)ice_entry->icemmio_base,
+ major, minor, step);
+ return -ENODEV;
+ }
+
+ ice_entry->ice_hw_version = version;
+
+ return 0;
+}
+
+int crypto_qti_init_crypto(struct device *dev, void __iomem *mmio_base,
+ void **priv_data)
+{
+ int err = 0;
+ struct crypto_vops_qti_entry *ice_entry;
+
+ ice_entry = devm_kzalloc(dev,
+ sizeof(struct crypto_vops_qti_entry),
+ GFP_KERNEL);
+ if (!ice_entry)
+ return -ENOMEM;
+
+ ice_entry->icemmio_base = mmio_base;
+ ice_entry->flags = 0;
+
+ err = ice_check_version(ice_entry);
+ if (err) {
+ pr_err("%s: check version failed, err %d\n", __func__, err);
+ return err;
+ }
+
+ err = ice_check_fuse_setting(ice_entry);
+ if (err)
+ return err;
+
+ *priv_data = (void *)ice_entry;
+
+ return err;
+}
+
+static void ice_low_power_and_optimization_enable(
+ struct crypto_vops_qti_entry *ice_entry)
+{
+ uint32_t regval;
+
+ regval = ice_readl(ice_entry, ICE_REGS_ADVANCED_CONTROL);
+ /*
+ * Enable low power mode sequence:
+ * [0]-0,[1]-0,[2]-0,[3]-7,[4]-0,[5]-0,[6]-0,[7]-0
+ * Enable CONFIG_CLK_GATING, STREAM2_CLK_GATING and STREAM1_CLK_GATING
+ */
+ regval |= 0x7000;
+ /* Optimization enable sequence */
+ regval |= 0xD807100;
+ ice_writel(ice_entry, regval, ICE_REGS_ADVANCED_CONTROL);
+ /*
+ * Memory barrier - to ensure write completion before next transaction
+ */
+ wmb();
+}
+
+static int ice_wait_bist_status(struct crypto_vops_qti_entry *ice_entry)
+{
+ int count;
+ uint32_t regval;
+
+ for (count = 0; count < QTI_ICE_MAX_BIST_CHECK_COUNT; count++) {
+ regval = ice_readl(ice_entry, ICE_REGS_BIST_STATUS);
+ if (!(regval & ICE_BIST_STATUS_MASK))
+ break;
+ udelay(50);
+ }
+
+ if (regval) {
+ pr_err("%s: wait bist status failed, reg %d\n",
+ __func__, regval);
+ return -ETIMEDOUT;
+ }
+
+ return 0;
+}
+
+static void ice_enable_intr(struct crypto_vops_qti_entry *ice_entry)
+{
+ uint32_t regval;
+
+ regval = ice_readl(ice_entry, ICE_REGS_NON_SEC_IRQ_MASK);
+ regval &= ~ICE_NON_SEC_IRQ_MASK;
+ ice_writel(ice_entry, regval, ICE_REGS_NON_SEC_IRQ_MASK);
+ /*
+ * Memory barrier - to ensure write completion before next transaction
+ */
+ wmb();
+}
+
+static void ice_disable_intr(struct crypto_vops_qti_entry *ice_entry)
+{
+ uint32_t regval;
+
+ regval = ice_readl(ice_entry, ICE_REGS_NON_SEC_IRQ_MASK);
+ regval |= ICE_NON_SEC_IRQ_MASK;
+ ice_writel(ice_entry, regval, ICE_REGS_NON_SEC_IRQ_MASK);
+ /*
+ * Memory barrier - to ensure write completion before next transaction
+ */
+ wmb();
+}
+
+int crypto_qti_enable(void *priv_data)
+{
+ int err = 0;
+ struct crypto_vops_qti_entry *ice_entry;
+
+ ice_entry = (struct crypto_vops_qti_entry *) priv_data;
+ if (!ice_entry) {
+ pr_err("%s: vops ice data is invalid\n", __func__);
+ return -EINVAL;
+ }
+
+ ice_low_power_and_optimization_enable(ice_entry);
+ err = ice_wait_bist_status(ice_entry);
+ if (err)
+ return err;
+ ice_enable_intr(ice_entry);
+
+ return err;
+}
+
+void crypto_qti_disable(void *priv_data)
+{
+ struct crypto_vops_qti_entry *ice_entry;
+
+ ice_entry = (struct crypto_vops_qti_entry *) priv_data;
+ if (!ice_entry) {
+ pr_err("%s: vops ice data is invalid\n", __func__);
+ return;
+ }
+
+ crypto_qti_disable_platform(ice_entry);
+ ice_disable_intr(ice_entry);
+}
+
+int crypto_qti_resume(void *priv_data)
+{
+ int err = 0;
+ struct crypto_vops_qti_entry *ice_entry;
+
+ ice_entry = (struct crypto_vops_qti_entry *) priv_data;
+ if (!ice_entry) {
+ pr_err("%s: vops ice data is invalid\n", __func__);
+ return -EINVAL;
+ }
+
+ err = ice_wait_bist_status(ice_entry);
+
+ return err;
+}
+
+static void ice_dump_test_bus(struct crypto_vops_qti_entry *ice_entry)
+{
+ uint32_t regval = 0x1;
+ uint32_t val;
+ uint8_t bus_selector;
+ uint8_t stream_selector;
+
+ pr_err("ICE TEST BUS DUMP:\n");
+
+ for (bus_selector = 0; bus_selector <= 0xF; bus_selector++) {
+ regval = 0x1; /* enable test bus */
+ regval |= bus_selector << 28;
+ if (bus_selector == 0xD)
+ continue;
+ ice_writel(ice_entry, regval, ICE_REGS_TEST_BUS_CONTROL);
+ /*
+ * make sure test bus selector is written before reading
+ * the test bus register
+ */
+ wmb();
+ val = ice_readl(ice_entry, ICE_REGS_TEST_BUS_REG);
+ pr_err("ICE_TEST_BUS_CONTROL: 0x%08x | ICE_TEST_BUS_REG: 0x%08x\n",
+ regval, val);
+ }
+
+ pr_err("ICE TEST BUS DUMP (ICE_STREAM1_DATAPATH_TEST_BUS):\n");
+ for (stream_selector = 0; stream_selector <= 0xF; stream_selector++) {
+ regval = 0xD0000001; /* enable stream test bus */
+ regval |= stream_selector << 16;
+ ice_writel(ice_entry, regval, ICE_REGS_TEST_BUS_CONTROL);
+ /*
+ * make sure test bus selector is written before reading
+ * the test bus register
+ */
+ wmb();
+ val = ice_readl(ice_entry, ICE_REGS_TEST_BUS_REG);
+ pr_err("ICE_TEST_BUS_CONTROL: 0x%08x | ICE_TEST_BUS_REG: 0x%08x\n",
+ regval, val);
+ }
+}
+
+
+int crypto_qti_debug(void *priv_data)
+{
+ struct crypto_vops_qti_entry *ice_entry;
+
+ ice_entry = (struct crypto_vops_qti_entry *) priv_data;
+ if (!ice_entry) {
+ pr_err("%s: vops ice data is invalid\n", __func__);
+ return -EINVAL;
+ }
+
+ pr_err("%s: ICE Control: 0x%08x | ICE Reset: 0x%08x\n",
+ ice_entry->ice_dev_type,
+ ice_readl(ice_entry, ICE_REGS_CONTROL),
+ ice_readl(ice_entry, ICE_REGS_RESET));
+
+ pr_err("%s: ICE Version: 0x%08x | ICE FUSE: 0x%08x\n",
+ ice_entry->ice_dev_type,
+ ice_readl(ice_entry, ICE_REGS_VERSION),
+ ice_readl(ice_entry, ICE_REGS_FUSE_SETTING));
+
+ pr_err("%s: ICE Param1: 0x%08x | ICE Param2: 0x%08x\n",
+ ice_entry->ice_dev_type,
+ ice_readl(ice_entry, ICE_REGS_PARAMETERS_1),
+ ice_readl(ice_entry, ICE_REGS_PARAMETERS_2));
+
+ pr_err("%s: ICE Param3: 0x%08x | ICE Param4: 0x%08x\n",
+ ice_entry->ice_dev_type,
+ ice_readl(ice_entry, ICE_REGS_PARAMETERS_3),
+ ice_readl(ice_entry, ICE_REGS_PARAMETERS_4));
+
+ pr_err("%s: ICE Param5: 0x%08x | ICE IRQ STTS: 0x%08x\n",
+ ice_entry->ice_dev_type,
+ ice_readl(ice_entry, ICE_REGS_PARAMETERS_5),
+ ice_readl(ice_entry, ICE_REGS_NON_SEC_IRQ_STTS));
+
+ pr_err("%s: ICE IRQ MASK: 0x%08x | ICE IRQ CLR: 0x%08x\n",
+ ice_entry->ice_dev_type,
+ ice_readl(ice_entry, ICE_REGS_NON_SEC_IRQ_MASK),
+ ice_readl(ice_entry, ICE_REGS_NON_SEC_IRQ_CLR));
+
+ pr_err("%s: ICE INVALID CCFG ERR STTS: 0x%08x\n",
+ ice_entry->ice_dev_type,
+ ice_readl(ice_entry, ICE_INVALID_CCFG_ERR_STTS));
+
+ pr_err("%s: ICE BIST Sts: 0x%08x | ICE Bypass Sts: 0x%08x\n",
+ ice_entry->ice_dev_type,
+ ice_readl(ice_entry, ICE_REGS_BIST_STATUS),
+ ice_readl(ice_entry, ICE_REGS_BYPASS_STATUS));
+
+ pr_err("%s: ICE ADV CTRL: 0x%08x | ICE ENDIAN SWAP: 0x%08x\n",
+ ice_entry->ice_dev_type,
+ ice_readl(ice_entry, ICE_REGS_ADVANCED_CONTROL),
+ ice_readl(ice_entry, ICE_REGS_ENDIAN_SWAP));
+
+ pr_err("%s: ICE_STM1_ERR_SYND1: 0x%08x | ICE_STM1_ERR_SYND2: 0x%08x\n",
+ ice_entry->ice_dev_type,
+ ice_readl(ice_entry, ICE_REGS_STREAM1_ERROR_SYNDROME1),
+ ice_readl(ice_entry, ICE_REGS_STREAM1_ERROR_SYNDROME2));
+
+ pr_err("%s: ICE_STM2_ERR_SYND1: 0x%08x | ICE_STM2_ERR_SYND2: 0x%08x\n",
+ ice_entry->ice_dev_type,
+ ice_readl(ice_entry, ICE_REGS_STREAM2_ERROR_SYNDROME1),
+ ice_readl(ice_entry, ICE_REGS_STREAM2_ERROR_SYNDROME2));
+
+ pr_err("%s: ICE_STM1_COUNTER1: 0x%08x | ICE_STM1_COUNTER2: 0x%08x\n",
+ ice_entry->ice_dev_type,
+ ice_readl(ice_entry, ICE_REGS_STREAM1_COUNTERS1),
+ ice_readl(ice_entry, ICE_REGS_STREAM1_COUNTERS2));
+
+ pr_err("%s: ICE_STM1_COUNTER3: 0x%08x | ICE_STM1_COUNTER4: 0x%08x\n",
+ ice_entry->ice_dev_type,
+ ice_readl(ice_entry, ICE_REGS_STREAM1_COUNTERS3),
+ ice_readl(ice_entry, ICE_REGS_STREAM1_COUNTERS4));
+
+ pr_err("%s: ICE_STM2_COUNTER1: 0x%08x | ICE_STM2_COUNTER2: 0x%08x\n",
+ ice_entry->ice_dev_type,
+ ice_readl(ice_entry, ICE_REGS_STREAM2_COUNTERS1),
+ ice_readl(ice_entry, ICE_REGS_STREAM2_COUNTERS2));
+
+ pr_err("%s: ICE_STM2_COUNTER3: 0x%08x | ICE_STM2_COUNTER4: 0x%08x\n",
+ ice_entry->ice_dev_type,
+ ice_readl(ice_entry, ICE_REGS_STREAM2_COUNTERS3),
+ ice_readl(ice_entry, ICE_REGS_STREAM2_COUNTERS4));
+
+ pr_err("%s: ICE_STM1_CTR5_MSB: 0x%08x | ICE_STM1_CTR5_LSB: 0x%08x\n",
+ ice_entry->ice_dev_type,
+ ice_readl(ice_entry, ICE_REGS_STREAM1_COUNTERS5_MSB),
+ ice_readl(ice_entry, ICE_REGS_STREAM1_COUNTERS5_LSB));
+
+ pr_err("%s: ICE_STM1_CTR6_MSB: 0x%08x | ICE_STM1_CTR6_LSB: 0x%08x\n",
+ ice_entry->ice_dev_type,
+ ice_readl(ice_entry, ICE_REGS_STREAM1_COUNTERS6_MSB),
+ ice_readl(ice_entry, ICE_REGS_STREAM1_COUNTERS6_LSB));
+
+ pr_err("%s: ICE_STM1_CTR7_MSB: 0x%08x | ICE_STM1_CTR7_LSB: 0x%08x\n",
+ ice_entry->ice_dev_type,
+ ice_readl(ice_entry, ICE_REGS_STREAM1_COUNTERS7_MSB),
+ ice_readl(ice_entry, ICE_REGS_STREAM1_COUNTERS7_LSB));
+
+ pr_err("%s: ICE_STM1_CTR8_MSB: 0x%08x | ICE_STM1_CTR8_LSB: 0x%08x\n",
+ ice_entry->ice_dev_type,
+ ice_readl(ice_entry, ICE_REGS_STREAM1_COUNTERS8_MSB),
+ ice_readl(ice_entry, ICE_REGS_STREAM1_COUNTERS8_LSB));
+
+ pr_err("%s: ICE_STM1_CTR9_MSB: 0x%08x | ICE_STM1_CTR9_LSB: 0x%08x\n",
+ ice_entry->ice_dev_type,
+ ice_readl(ice_entry, ICE_REGS_STREAM1_COUNTERS9_MSB),
+ ice_readl(ice_entry, ICE_REGS_STREAM1_COUNTERS9_LSB));
+
+ pr_err("%s: ICE_STM2_CTR5_MSB: 0x%08x | ICE_STM2_CTR5_LSB: 0x%08x\n",
+ ice_entry->ice_dev_type,
+ ice_readl(ice_entry, ICE_REGS_STREAM2_COUNTERS5_MSB),
+ ice_readl(ice_entry, ICE_REGS_STREAM2_COUNTERS5_LSB));
+
+ pr_err("%s: ICE_STM2_CTR6_MSB: 0x%08x | ICE_STM2_CTR6_LSB: 0x%08x\n",
+ ice_entry->ice_dev_type,
+ ice_readl(ice_entry, ICE_REGS_STREAM2_COUNTERS6_MSB),
+ ice_readl(ice_entry, ICE_REGS_STREAM2_COUNTERS6_LSB));
+
+ pr_err("%s: ICE_STM2_CTR7_MSB: 0x%08x | ICE_STM2_CTR7_LSB: 0x%08x\n",
+ ice_entry->ice_dev_type,
+ ice_readl(ice_entry, ICE_REGS_STREAM2_COUNTERS7_MSB),
+ ice_readl(ice_entry, ICE_REGS_STREAM2_COUNTERS7_LSB));
+
+ pr_err("%s: ICE_STM2_CTR8_MSB: 0x%08x | ICE_STM2_CTR8_LSB: 0x%08x\n",
+ ice_entry->ice_dev_type,
+ ice_readl(ice_entry, ICE_REGS_STREAM2_COUNTERS8_MSB),
+ ice_readl(ice_entry, ICE_REGS_STREAM2_COUNTERS8_LSB));
+
+ pr_err("%s: ICE_STM2_CTR9_MSB: 0x%08x | ICE_STM2_CTR9_LSB: 0x%08x\n",
+ ice_entry->ice_dev_type,
+ ice_readl(ice_entry, ICE_REGS_STREAM2_COUNTERS9_MSB),
+ ice_readl(ice_entry, ICE_REGS_STREAM2_COUNTERS9_LSB));
+
+ ice_dump_test_bus(ice_entry);
+
+ return 0;
+}
+
+int crypto_qti_keyslot_program(void *priv_data,
+ const struct blk_crypto_key *key,
+ unsigned int slot,
+ u8 data_unit_mask, int capid)
+{
+ int err = 0;
+ struct crypto_vops_qti_entry *ice_entry;
+
+ ice_entry = (struct crypto_vops_qti_entry *) priv_data;
+ if (!ice_entry) {
+ pr_err("%s: vops ice data is invalid\n", __func__);
+ return -EINVAL;
+ }
+
+ err = crypto_qti_program_key(ice_entry, key, slot,
+ data_unit_mask, capid);
+ if (err) {
+ pr_err("%s: program key failed with error %d\n", __func__, err);
+ err = crypto_qti_invalidate_key(ice_entry, slot);
+ if (err) {
+ pr_err("%s: invalidate key failed with error %d\n",
+ __func__, err);
+ return err;
+ }
+ }
+
+ return err;
+}
+
+int crypto_qti_keyslot_evict(void *priv_data, unsigned int slot)
+{
+ int err = 0;
+ struct crypto_vops_qti_entry *ice_entry;
+
+ ice_entry = (struct crypto_vops_qti_entry *) priv_data;
+ if (!ice_entry) {
+ pr_err("%s: vops ice data is invalid\n", __func__);
+ return -EINVAL;
+ }
+
+ err = crypto_qti_invalidate_key(ice_entry, slot);
+ if (err) {
+ pr_err("%s: invalidate key failed with error %d\n",
+ __func__, err);
+ return err;
+ }
+
+ return err;
+}
+
+int crypto_qti_derive_raw_secret(const u8 *wrapped_key,
+ unsigned int wrapped_key_size, u8 *secret,
+ unsigned int secret_size)
+{
+ int err = 0;
+
+ if (wrapped_key_size <= RAW_SECRET_SIZE) {
+ pr_err("%s: Invalid wrapped_key_size: %u\n",
+ __func__, wrapped_key_size);
+ err = -EINVAL;
+ return err;
+ }
+ if (secret_size != RAW_SECRET_SIZE) {
+ pr_err("%s: Invalid secret size: %u\n", __func__, secret_size);
+ err = -EINVAL;
+ return err;
+ }
+
+ memcpy(secret, wrapped_key, secret_size);
+
+ return err;
+}
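The common layer above exposes init/enable/program/evict entry points that all take the opaque priv_data returned by crypto_qti_init_crypto(). A rough usage sketch (not part of this patch; the example_* wrappers and the mmio argument are hypothetical, error handling trimmed):

static void *qti_ice_priv;

static int example_ice_setup(struct device *dev, void __iomem *mmio)
{
	int err;

	err = crypto_qti_init_crypto(dev, mmio, &qti_ice_priv);
	if (err)
		return err;

	return crypto_qti_enable(qti_ice_priv);
}

static int example_program_slot(const struct blk_crypto_key *key,
				unsigned int slot, u8 du_mask, int capid)
{
	return crypto_qti_keyslot_program(qti_ice_priv, key, slot,
					  du_mask, capid);
}

static int example_evict_slot(unsigned int slot)
{
	return crypto_qti_keyslot_evict(qti_ice_priv, slot);
}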
diff --git a/drivers/soc/qcom/crypto-qti-ice-regs.h b/drivers/soc/qcom/crypto-qti-ice-regs.h
new file mode 100644
index 0000000..38e5c35
--- /dev/null
+++ b/drivers/soc/qcom/crypto-qti-ice-regs.h
@@ -0,0 +1,156 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2020, The Linux Foundation. All rights reserved.
+ */
+
+#ifndef _CRYPTO_INLINE_CRYPTO_ENGINE_REGS_H_
+#define _CRYPTO_INLINE_CRYPTO_ENGINE_REGS_H_
+
+#include <linux/io.h>
+
+/* Register bits for ICE version */
+#define ICE_CORE_CURRENT_MAJOR_VERSION 0x03
+
+#define ICE_CORE_STEP_REV_MASK 0xFFFF
+#define ICE_CORE_STEP_REV 0 /* bit 15-0 */
+#define ICE_CORE_MAJOR_REV_MASK 0xFF000000
+#define ICE_CORE_MAJOR_REV 24 /* bit 31-24 */
+#define ICE_CORE_MINOR_REV_MASK 0xFF0000
+#define ICE_CORE_MINOR_REV 16 /* bit 23-16 */
+
+#define ICE_BIST_STATUS_MASK (0xF0000000) /* bits 28-31 */
+
+#define ICE_FUSE_SETTING_MASK 0x1
+#define ICE_FORCE_HW_KEY0_SETTING_MASK 0x2
+#define ICE_FORCE_HW_KEY1_SETTING_MASK 0x4
+
+/* QTI ICE Registers from SWI */
+#define ICE_REGS_CONTROL 0x0000
+#define ICE_REGS_RESET 0x0004
+#define ICE_REGS_VERSION 0x0008
+#define ICE_REGS_FUSE_SETTING 0x0010
+#define ICE_REGS_PARAMETERS_1 0x0014
+#define ICE_REGS_PARAMETERS_2 0x0018
+#define ICE_REGS_PARAMETERS_3 0x001C
+#define ICE_REGS_PARAMETERS_4 0x0020
+#define ICE_REGS_PARAMETERS_5 0x0024
+
+
+/* QTI ICE v3.X only */
+#define ICE_GENERAL_ERR_STTS 0x0040
+#define ICE_INVALID_CCFG_ERR_STTS 0x0030
+#define ICE_GENERAL_ERR_MASK 0x0044
+
+
+/* QTI ICE v2.X only */
+#define ICE_REGS_NON_SEC_IRQ_STTS 0x0040
+#define ICE_REGS_NON_SEC_IRQ_MASK 0x0044
+
+
+#define ICE_REGS_NON_SEC_IRQ_CLR 0x0048
+#define ICE_REGS_STREAM1_ERROR_SYNDROME1 0x0050
+#define ICE_REGS_STREAM1_ERROR_SYNDROME2 0x0054
+#define ICE_REGS_STREAM2_ERROR_SYNDROME1 0x0058
+#define ICE_REGS_STREAM2_ERROR_SYNDROME2 0x005C
+#define ICE_REGS_STREAM1_BIST_ERROR_VEC 0x0060
+#define ICE_REGS_STREAM2_BIST_ERROR_VEC 0x0064
+#define ICE_REGS_STREAM1_BIST_FINISH_VEC 0x0068
+#define ICE_REGS_STREAM2_BIST_FINISH_VEC 0x006C
+#define ICE_REGS_BIST_STATUS 0x0070
+#define ICE_REGS_BYPASS_STATUS 0x0074
+#define ICE_REGS_ADVANCED_CONTROL 0x1000
+#define ICE_REGS_ENDIAN_SWAP 0x1004
+#define ICE_REGS_TEST_BUS_CONTROL 0x1010
+#define ICE_REGS_TEST_BUS_REG 0x1014
+#define ICE_REGS_STREAM1_COUNTERS1 0x1100
+#define ICE_REGS_STREAM1_COUNTERS2 0x1104
+#define ICE_REGS_STREAM1_COUNTERS3 0x1108
+#define ICE_REGS_STREAM1_COUNTERS4 0x110C
+#define ICE_REGS_STREAM1_COUNTERS5_MSB 0x1110
+#define ICE_REGS_STREAM1_COUNTERS5_LSB 0x1114
+#define ICE_REGS_STREAM1_COUNTERS6_MSB 0x1118
+#define ICE_REGS_STREAM1_COUNTERS6_LSB 0x111C
+#define ICE_REGS_STREAM1_COUNTERS7_MSB 0x1120
+#define ICE_REGS_STREAM1_COUNTERS7_LSB 0x1124
+#define ICE_REGS_STREAM1_COUNTERS8_MSB 0x1128
+#define ICE_REGS_STREAM1_COUNTERS8_LSB 0x112C
+#define ICE_REGS_STREAM1_COUNTERS9_MSB 0x1130
+#define ICE_REGS_STREAM1_COUNTERS9_LSB 0x1134
+#define ICE_REGS_STREAM2_COUNTERS1 0x1200
+#define ICE_REGS_STREAM2_COUNTERS2 0x1204
+#define ICE_REGS_STREAM2_COUNTERS3 0x1208
+#define ICE_REGS_STREAM2_COUNTERS4 0x120C
+#define ICE_REGS_STREAM2_COUNTERS5_MSB 0x1210
+#define ICE_REGS_STREAM2_COUNTERS5_LSB 0x1214
+#define ICE_REGS_STREAM2_COUNTERS6_MSB 0x1218
+#define ICE_REGS_STREAM2_COUNTERS6_LSB 0x121C
+#define ICE_REGS_STREAM2_COUNTERS7_MSB 0x1220
+#define ICE_REGS_STREAM2_COUNTERS7_LSB 0x1224
+#define ICE_REGS_STREAM2_COUNTERS8_MSB 0x1228
+#define ICE_REGS_STREAM2_COUNTERS8_LSB 0x122C
+#define ICE_REGS_STREAM2_COUNTERS9_MSB 0x1230
+#define ICE_REGS_STREAM2_COUNTERS9_LSB 0x1234
+
+#define ICE_STREAM1_PREMATURE_LBA_CHANGE (1L << 0)
+#define ICE_STREAM2_PREMATURE_LBA_CHANGE (1L << 1)
+#define ICE_STREAM1_NOT_EXPECTED_LBO (1L << 2)
+#define ICE_STREAM2_NOT_EXPECTED_LBO (1L << 3)
+#define ICE_STREAM1_NOT_EXPECTED_DUN (1L << 4)
+#define ICE_STREAM2_NOT_EXPECTED_DUN (1L << 5)
+#define ICE_STREAM1_NOT_EXPECTED_DUS (1L << 6)
+#define ICE_STREAM2_NOT_EXPECTED_DUS (1L << 7)
+#define ICE_STREAM1_NOT_EXPECTED_DBO (1L << 8)
+#define ICE_STREAM2_NOT_EXPECTED_DBO (1L << 9)
+#define ICE_STREAM1_NOT_EXPECTED_ENC_SEL (1L << 10)
+#define ICE_STREAM2_NOT_EXPECTED_ENC_SEL (1L << 11)
+#define ICE_STREAM1_NOT_EXPECTED_CONF_IDX (1L << 12)
+#define ICE_STREAM2_NOT_EXPECTED_CONF_IDX (1L << 13)
+#define ICE_STREAM1_NOT_EXPECTED_NEW_TRNS (1L << 14)
+#define ICE_STREAM2_NOT_EXPECTED_NEW_TRNS (1L << 15)
+
+#define ICE_NON_SEC_IRQ_MASK \
+ (ICE_STREAM1_PREMATURE_LBA_CHANGE |\
+ ICE_STREAM2_PREMATURE_LBA_CHANGE |\
+ ICE_STREAM1_NOT_EXPECTED_LBO |\
+ ICE_STREAM2_NOT_EXPECTED_LBO |\
+ ICE_STREAM1_NOT_EXPECTED_DUN |\
+ ICE_STREAM2_NOT_EXPECTED_DUN |\
+ ICE_STREAM2_NOT_EXPECTED_DUS |\
+ ICE_STREAM1_NOT_EXPECTED_DBO |\
+ ICE_STREAM2_NOT_EXPECTED_DBO |\
+ ICE_STREAM1_NOT_EXPECTED_ENC_SEL |\
+ ICE_STREAM2_NOT_EXPECTED_ENC_SEL |\
+ ICE_STREAM1_NOT_EXPECTED_CONF_IDX |\
+ ICE_STREAM1_NOT_EXPECTED_NEW_TRNS |\
+ ICE_STREAM2_NOT_EXPECTED_NEW_TRNS)
+
+/* QTI ICE registers from secure side */
+#define ICE_TEST_BUS_REG_SECURE_INTR (1L << 28)
+#define ICE_TEST_BUS_REG_NON_SECURE_INTR (1L << 2)
+
+#define ICE_LUT_KEYS_CRYPTOCFG_R_16 0x4040
+#define ICE_LUT_KEYS_CRYPTOCFG_R_17 0x4044
+#define ICE_LUT_KEYS_CRYPTOCFG_OFFSET 0x80
+
+
+#define ICE_LUT_KEYS_ICE_SEC_IRQ_STTS 0x6200
+#define ICE_LUT_KEYS_ICE_SEC_IRQ_MASK 0x6204
+#define ICE_LUT_KEYS_ICE_SEC_IRQ_CLR 0x6208
+
+#define ICE_STREAM1_PARTIALLY_SET_KEY_USED (1L << 0)
+#define ICE_STREAM2_PARTIALLY_SET_KEY_USED (1L << 1)
+#define ICE_QTIC_DBG_OPEN_EVENT (1L << 30)
+#define ICE_KEYS_RAM_RESET_COMPLETED (1L << 31)
+
+#define ICE_SEC_IRQ_MASK \
+ (ICE_STREAM1_PARTIALLY_SET_KEY_USED |\
+ ICE_STREAM2_PARTIALLY_SET_KEY_USED |\
+ ICE_QTIC_DBG_OPEN_EVENT | \
+ ICE_KEYS_RAM_RESET_COMPLETED)
+
+#define ice_writel(ice_entry, val, reg) \
+ writel_relaxed((val), (ice_entry)->icemmio_base + (reg))
+#define ice_readl(ice_entry, reg) \
+ readl_relaxed((ice_entry)->icemmio_base + (reg))
+
+#endif /* _CRYPTO_INLINE_CRYPTO_ENGINE_REGS_H_ */
diff --git a/drivers/soc/qcom/crypto-qti-platform.h b/drivers/soc/qcom/crypto-qti-platform.h
new file mode 100644
index 0000000..be00e50
--- /dev/null
+++ b/drivers/soc/qcom/crypto-qti-platform.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2020, The Linux Foundation. All rights reserved.
+ */
+
+#ifndef _CRYPTO_QTI_PLATFORM_H
+#define _CRYPTO_QTI_PLATFORM_H
+
+#include <linux/bio-crypt-ctx.h>
+#include <linux/errno.h>
+#include <linux/types.h>
+#include <linux/device.h>
+
+#if IS_ENABLED(CONFIG_QTI_CRYPTO_TZ)
+int crypto_qti_program_key(struct crypto_vops_qti_entry *ice_entry,
+ const struct blk_crypto_key *key, unsigned int slot,
+ unsigned int data_unit_mask, int capid);
+int crypto_qti_invalidate_key(struct crypto_vops_qti_entry *ice_entry,
+ unsigned int slot);
+#else
+static inline int crypto_qti_program_key(
+ struct crypto_vops_qti_entry *ice_entry,
+ const struct blk_crypto_key *key,
+ unsigned int slot, unsigned int data_unit_mask,
+ int capid)
+{
+ return 0;
+}
+static inline int crypto_qti_invalidate_key(
+ struct crypto_vops_qti_entry *ice_entry, unsigned int slot)
+{
+ return 0;
+}
+#endif /* CONFIG_QTI_CRYPTO_TZ */
+
+static inline void crypto_qti_disable_platform(
+ struct crypto_vops_qti_entry *ice_entry)
+{}
+
+#endif /* _CRYPTO_QTI_PLATFORM_H */
diff --git a/drivers/soc/qcom/crypto-qti-tz.c b/drivers/soc/qcom/crypto-qti-tz.c
new file mode 100644
index 0000000..b4fef6b
--- /dev/null
+++ b/drivers/soc/qcom/crypto-qti-tz.c
@@ -0,0 +1,118 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2020, Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <linux/module.h>
+#include <asm/cacheflush.h>
+#include <soc/qcom/scm.h>
+#include <soc/qcom/qtee_shmbridge.h>
+#include <linux/crypto-qti-common.h>
+#include "crypto-qti-platform.h"
+#include "crypto-qti-tz.h"
+
+unsigned int storage_type = SDCC_CE;
+
+int crypto_qti_program_key(struct crypto_vops_qti_entry *ice_entry,
+ const struct blk_crypto_key *key,
+ unsigned int slot, unsigned int data_unit_mask,
+ int capid)
+{
+ int err = 0;
+ uint32_t smc_id = 0;
+ char *tzbuf = NULL;
+ struct qtee_shm shm;
+ struct scm_desc desc = {0};
+ int i;
+ union {
+ u8 bytes[BLK_CRYPTO_MAX_WRAPPED_KEY_SIZE];
+ u32 words[BLK_CRYPTO_MAX_WRAPPED_KEY_SIZE / sizeof(u32)];
+ } key_new;
+
+ err = qtee_shmbridge_allocate_shm(key->size, &shm);
+ if (err)
+ return -ENOMEM;
+
+ tzbuf = shm.vaddr;
+
+ memcpy(key_new.bytes, key->raw, key->size);
+ if (!key->is_hw_wrapped) {
+ for (i = 0; i < ARRAY_SIZE(key_new.words); i++)
+ __cpu_to_be32s(&key_new.words[i]);
+ }
+
+ memcpy(tzbuf, key_new.bytes, key->size);
+ dmac_flush_range(tzbuf, tzbuf + key->size);
+
+ smc_id = TZ_ES_CONFIG_SET_ICE_KEY_CE_TYPE_ID;
+ desc.arginfo = TZ_ES_CONFIG_SET_ICE_KEY_CE_TYPE_PARAM_ID;
+ desc.args[0] = slot;
+ desc.args[1] = shm.paddr;
+ desc.args[2] = shm.size;
+ desc.args[3] = ICE_CIPHER_MODE_XTS_256;
+ desc.args[4] = data_unit_mask;
+ desc.args[5] = storage_type;
+
+
+ err = scm_call2_noretry(smc_id, &desc);
+ if (err)
+ pr_err("%s:SCM call Error: 0x%x slot %d\n",
+ __func__, err, slot);
+
+ qtee_shmbridge_free_shm(&shm);
+
+ return err;
+}
+
+int crypto_qti_invalidate_key(
+ struct crypto_vops_qti_entry *ice_entry, unsigned int slot)
+{
+ int err = 0;
+ uint32_t smc_id = 0;
+ struct scm_desc desc = {0};
+
+ smc_id = TZ_ES_INVALIDATE_ICE_KEY_CE_TYPE_ID;
+
+ desc.arginfo = TZ_ES_INVALIDATE_ICE_KEY_CE_TYPE_PARAM_ID;
+ desc.args[0] = slot;
+ desc.args[1] = storage_type;
+
+ err = scm_call2_noretry(smc_id, &desc);
+ if (err)
+ pr_err("%s:SCM call Error: 0x%x\n", __func__, err);
+ return err;
+}
+
+static int crypto_qti_storage_type(unsigned int *s_type)
+{
+ char boot[20] = {'\0'};
+ char *match = (char *)strnstr(saved_command_line,
+ "androidboot.bootdevice=",
+ strlen(saved_command_line));
+ if (match) {
+ memcpy(boot, (match + strlen("androidboot.bootdevice=")),
+ sizeof(boot) - 1);
+ if (strnstr(boot, "ufs", strlen(boot)))
+ *s_type = UFS_CE;
+
+ return 0;
+ }
+ return -EINVAL;
+}
+
+static int __init crypto_qti_init(void)
+{
+ return crypto_qti_storage_type(&storage_type);
+}
+
+module_init(crypto_qti_init);
diff --git a/drivers/soc/qcom/crypto-qti-tz.h b/drivers/soc/qcom/crypto-qti-tz.h
new file mode 100644
index 0000000..bf7ac00
--- /dev/null
+++ b/drivers/soc/qcom/crypto-qti-tz.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2020, The Linux Foundation. All rights reserved.
+ */
+
+#include <soc/qcom/qseecomi.h>
+
+#ifndef _CRYPTO_QTI_TZ_H
+#define _CRYPTO_QTI_TZ_H
+
+#define TZ_ES_CONFIG_SET_ICE_KEY_CE_TYPE 0x5
+#define TZ_ES_INVALIDATE_ICE_KEY_CE_TYPE 0x6
+
+#define TZ_ES_CONFIG_SET_ICE_KEY_CE_TYPE_ID \
+ TZ_SYSCALL_CREATE_SMC_ID(TZ_OWNER_SIP, TZ_SVC_ES, \
+ TZ_ES_CONFIG_SET_ICE_KEY_CE_TYPE)
+
+#define TZ_ES_INVALIDATE_ICE_KEY_CE_TYPE_ID \
+ TZ_SYSCALL_CREATE_SMC_ID(TZ_OWNER_SIP, \
+ TZ_SVC_ES, TZ_ES_INVALIDATE_ICE_KEY_CE_TYPE)
+
+#define TZ_ES_INVALIDATE_ICE_KEY_CE_TYPE_PARAM_ID \
+ TZ_SYSCALL_CREATE_PARAM_ID_2( \
+ TZ_SYSCALL_PARAM_TYPE_VAL, TZ_SYSCALL_PARAM_TYPE_VAL)
+
+#define TZ_ES_CONFIG_SET_ICE_KEY_CE_TYPE_PARAM_ID \
+ TZ_SYSCALL_CREATE_PARAM_ID_6( \
+ TZ_SYSCALL_PARAM_TYPE_VAL, \
+ TZ_SYSCALL_PARAM_TYPE_BUF_RW, TZ_SYSCALL_PARAM_TYPE_VAL, \
+ TZ_SYSCALL_PARAM_TYPE_VAL, TZ_SYSCALL_PARAM_TYPE_VAL, \
+ TZ_SYSCALL_PARAM_TYPE_VAL)
+
+enum {
+ ICE_CIPHER_MODE_XTS_128 = 0,
+ ICE_CIPHER_MODE_CBC_128 = 1,
+ ICE_CIPHER_MODE_XTS_256 = 3,
+ ICE_CIPHER_MODE_CBC_256 = 4
+};
+
+#define UFS_CE 10
+#define SDCC_CE 20
+#define UFS_CARD_CE 30
+
+#endif /* _CRYPTO_QTI_TZ_H */
diff --git a/drivers/soc/qcom/dcc_v2.c b/drivers/soc/qcom/dcc_v2.c
index a5e2ec0..ada4be8 100644
--- a/drivers/soc/qcom/dcc_v2.c
+++ b/drivers/soc/qcom/dcc_v2.c
@@ -719,6 +719,7 @@ static int dcc_enable(struct dcc_drvdata *drvdata)
int ret = 0;
int list;
uint32_t ram_cfg_base;
+ uint32_t hw_info;
mutex_lock(&drvdata->mutex);
@@ -754,6 +755,10 @@ static int dcc_enable(struct dcc_drvdata *drvdata)
drvdata->ram_offset/4, DCC_FD_BASE(list));
dcc_writel(drvdata, 0xFFF, DCC_LL_TIMEOUT(list));
+ hw_info = dcc_readl(drvdata, DCC_HW_INFO);
+ if (hw_info & 0x80)
+ dcc_writel(drvdata, 0x3F, DCC_TRANS_TIMEOUT(list));
+
/* 4. Clears interrupt status register */
dcc_writel(drvdata, 0, DCC_LL_INT_ENABLE(list));
dcc_writel(drvdata, (BIT(0) | BIT(1) | BIT(2)),
diff --git a/drivers/soc/qcom/eud.c b/drivers/soc/qcom/eud.c
index 0ee43a8..864bd65 100644
--- a/drivers/soc/qcom/eud.c
+++ b/drivers/soc/qcom/eud.c
@@ -92,6 +92,14 @@ static int enable;
static bool eud_ready;
static struct platform_device *eud_private;
+static int check_eud_mode_mgr2(struct eud_chip *chip)
+{
+ u32 val;
+
+ val = scm_io_read(chip->eud_mode_mgr2_phys_base);
+ return val & BIT(0);
+}
+
static void enable_eud(struct platform_device *pdev)
{
struct eud_chip *priv = platform_get_drvdata(pdev);
@@ -105,7 +113,7 @@ static void enable_eud(struct platform_device *pdev)
priv->eud_reg_base + EUD_REG_INT1_EN_MASK);
/* Enable secure eud if supported */
- if (priv->secure_eud_en) {
+ if (priv->secure_eud_en && !check_eud_mode_mgr2(priv)) {
ret = scm_io_write(priv->eud_mode_mgr2_phys_base +
EUD_REG_EUD_EN2, EUD_ENABLE_CMD);
if (ret)
@@ -564,6 +572,9 @@ static int msm_eud_probe(struct platform_device *pdev)
}
chip->eud_mode_mgr2_phys_base = res->start;
+
+ if (check_eud_mode_mgr2(chip))
+ enable = 1;
}
chip->need_phy_clk_vote = of_property_read_bool(pdev->dev.of_node,
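The eud change above probes EUD_MODE_MGR2 through scm_io_read() and only issues the secure enable write when bit 0 is not already set. Reduced to its essentials (helper name illustrative, assuming the same scm_io accessors the driver already uses):

#include <linux/bits.h>
#include <soc/qcom/scm.h>

/* Issue the secure enable write only if the hardware is not already enabled. */
static int demo_secure_enable_once(phys_addr_t status_reg, phys_addr_t en_reg,
                                   u32 enable_cmd)
{
        if (scm_io_read(status_reg) & BIT(0))
                return 0;       /* already enabled by an earlier boot stage */

        return scm_io_write(en_reg, enable_cmd);
}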
diff --git a/drivers/soc/qcom/icnss.c b/drivers/soc/qcom/icnss.c
index 1811d4d..3e3a41b 100644
--- a/drivers/soc/qcom/icnss.c
+++ b/drivers/soc/qcom/icnss.c
@@ -571,26 +571,6 @@ int icnss_power_off(struct device *dev)
}
EXPORT_SYMBOL(icnss_power_off);
-int icnss_update_fw_down_params(struct icnss_priv *priv,
- struct icnss_uevent_fw_down_data *fw_down_data,
- bool crashed)
-{
- fw_down_data->crashed = crashed;
-
- if (!priv->hang_event_data_va)
- return -EINVAL;
-
- priv->hang_event_data = kmemdup(priv->hang_event_data_va,
- priv->hang_event_data_len,
- GFP_ATOMIC);
- if (!priv->hang_event_data)
- return -ENOMEM;
-
- // Update the hang event params
- fw_down_data->hang_event_data = priv->hang_event_data;
- fw_down_data->hang_event_data_len = priv->hang_event_data_len;
- return 0;
-}
static irqreturn_t fw_error_fatal_handler(int irq, void *ctx)
{
@@ -608,7 +588,6 @@ static irqreturn_t fw_crash_indication_handler(int irq, void *ctx)
{
struct icnss_priv *priv = ctx;
struct icnss_uevent_fw_down_data fw_down_data = {0};
- int ret = 0;
icnss_pr_err("Received early crash indication from FW\n");
@@ -617,18 +596,9 @@ static irqreturn_t fw_crash_indication_handler(int irq, void *ctx)
icnss_ignore_fw_timeout(true);
if (test_bit(ICNSS_FW_READY, &priv->state)) {
- ret = icnss_update_fw_down_params(priv, &fw_down_data,
- true);
- if (ret)
- icnss_pr_err("Unable to allocate memory for Hang event data\n");
-
+ fw_down_data.crashed = true;
icnss_call_driver_uevent(priv, ICNSS_UEVENT_FW_DOWN,
&fw_down_data);
-
- if (!ret) {
- kfree(priv->hang_event_data);
- priv->hang_event_data = NULL;
- }
}
}
@@ -1186,32 +1156,6 @@ static int icnss_driver_event_unregister_driver(void *data)
return 0;
}
-static int icnss_call_driver_remove(struct icnss_priv *priv)
-{
- icnss_pr_dbg("Calling driver remove state: 0x%lx\n", priv->state);
-
- clear_bit(ICNSS_FW_READY, &priv->state);
-
- if (test_bit(ICNSS_DRIVER_UNLOADING, &priv->state))
- return 0;
-
- if (!test_bit(ICNSS_DRIVER_PROBED, &priv->state))
- return 0;
-
- if (!priv->ops || !priv->ops->remove)
- return 0;
-
- set_bit(ICNSS_DRIVER_UNLOADING, &priv->state);
- priv->ops->remove(&priv->pdev->dev);
-
- clear_bit(ICNSS_DRIVER_UNLOADING, &priv->state);
- clear_bit(ICNSS_DRIVER_PROBED, &priv->state);
-
- icnss_hw_power_off(priv);
-
- return 0;
-}
-
static int icnss_fw_crashed(struct icnss_priv *priv,
struct icnss_event_pd_service_down_data *event_data)
{
@@ -1231,6 +1175,46 @@ static int icnss_fw_crashed(struct icnss_priv *priv,
return 0;
}
+int icnss_update_hang_event_data(struct icnss_priv *priv,
+ struct icnss_uevent_hang_data *hang_data)
+{
+ if (!priv->hang_event_data_va)
+ return -EINVAL;
+
+ priv->hang_event_data = kmemdup(priv->hang_event_data_va,
+ priv->hang_event_data_len,
+ GFP_ATOMIC);
+ if (!priv->hang_event_data)
+ return -ENOMEM;
+
+ // Update the hang event params
+ hang_data->hang_event_data = priv->hang_event_data;
+ hang_data->hang_event_data_len = priv->hang_event_data_len;
+
+ return 0;
+}
+
+int icnss_send_hang_event_data(struct icnss_priv *priv)
+{
+ struct icnss_uevent_hang_data hang_data = {0};
+ int ret = 0xFF;
+
+ if (priv->early_crash_ind) {
+ ret = icnss_update_hang_event_data(priv, &hang_data);
+ if (ret)
+ icnss_pr_err("Unable to allocate memory for Hang event data\n");
+ }
+ icnss_call_driver_uevent(priv, ICNSS_UEVENT_HANG_DATA,
+ &hang_data);
+
+ if (!ret) {
+ kfree(priv->hang_event_data);
+ priv->hang_event_data = NULL;
+ }
+
+ return 0;
+}
+
static int icnss_driver_event_pd_service_down(struct icnss_priv *priv,
void *data)
{
@@ -1244,6 +1228,8 @@ static int icnss_driver_event_pd_service_down(struct icnss_priv *priv,
if (priv->force_err_fatal)
ICNSS_ASSERT(0);
+ icnss_send_hang_event_data(priv);
+
if (priv->early_crash_ind) {
icnss_pr_dbg("PD Down ignored as early indication is processed: %d, state: 0x%lx\n",
event_data->crashed, priv->state);
@@ -1431,7 +1417,11 @@ static void icnss_update_state_send_modem_shutdown(struct icnss_priv *priv,
if (!test_bit(ICNSS_PD_RESTART, &priv->state) &&
!test_bit(ICNSS_SHUTDOWN_DONE, &priv->state) &&
!test_bit(ICNSS_BLOCK_SHUTDOWN, &priv->state)) {
- icnss_call_driver_remove(priv);
+ clear_bit(ICNSS_FW_READY, &priv->state);
+ icnss_driver_event_post(
+ ICNSS_DRIVER_EVENT_UNREGISTER_DRIVER,
+ ICNSS_EVENT_SYNC_UNINTERRUPTIBLE,
+ NULL);
}
}
@@ -1847,8 +1837,8 @@ int icnss_unregister_driver(struct icnss_driver_ops *ops)
icnss_pr_dbg("Unregistering driver, state: 0x%lx\n", penv->state);
- if (!penv->ops) {
- icnss_pr_err("Driver not registered\n");
+ if (!penv->ops || (!test_bit(ICNSS_DRIVER_PROBED, &penv->state))) {
+ icnss_pr_err("Driver not registered/probed\n");
ret = -ENOENT;
goto out;
}
@@ -2204,6 +2194,7 @@ int icnss_smmu_map(struct device *dev,
{
struct icnss_priv *priv = dev_get_drvdata(dev);
unsigned long iova;
+ int prop_len = 0;
size_t len;
int ret = 0;
@@ -2222,7 +2213,8 @@ int icnss_smmu_map(struct device *dev,
len = roundup(size + paddr - rounddown(paddr, PAGE_SIZE), PAGE_SIZE);
iova = roundup(penv->smmu_iova_ipa_current, PAGE_SIZE);
- if (iova >= priv->smmu_iova_ipa_start + priv->smmu_iova_ipa_len) {
+ if (of_get_property(dev->of_node, "qcom,iommu-geometry", &prop_len) &&
+ iova >= priv->smmu_iova_ipa_start + priv->smmu_iova_ipa_len) {
icnss_pr_err("No IOVA space to map, iova %lx, smmu_iova_ipa_start %pad, smmu_iova_ipa_len %zu\n",
iova,
&priv->smmu_iova_ipa_start,
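The icnss.c rework above replaces icnss_update_fw_down_params() with a dedicated hang-event path: snapshot the firmware buffer with kmemdup() in atomic context, hand the copy to the uevent callback, and free it only when the copy succeeded. A stripped-down sketch of that pattern (names are illustrative, not from the driver):

#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/string.h>

/* Snapshot a volatile buffer, notify a consumer, then release the copy. */
static int demo_notify_with_snapshot(const void *src, size_t len,
                                     void (*notify)(const void *data, size_t len))
{
        void *copy;

        if (!src)
                return -EINVAL;

        copy = kmemdup(src, len, GFP_ATOMIC);   /* may run in IRQ context */
        if (!copy)
                return -ENOMEM;

        notify(copy, len);      /* consumer must not keep the pointer */
        kfree(copy);
        return 0;
}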
diff --git a/drivers/soc/qcom/icnss2/Makefile b/drivers/soc/qcom/icnss2/Makefile
index 433483e..aed4ac2 100644
--- a/drivers/soc/qcom/icnss2/Makefile
+++ b/drivers/soc/qcom/icnss2/Makefile
@@ -6,4 +6,5 @@
icnss2-y := main.o
icnss2-y += debug.o
icnss2-y += power.o
+icnss2-y += genl.o
icnss2-$(CONFIG_ICNSS2_QMI) += qmi.o
diff --git a/drivers/soc/qcom/icnss2/genl.c b/drivers/soc/qcom/icnss2/genl.c
new file mode 100644
index 0000000..d4d8570
--- /dev/null
+++ b/drivers/soc/qcom/icnss2/genl.c
@@ -0,0 +1,213 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (c) 2020, The Linux Foundation. All rights reserved. */
+
+#define pr_fmt(fmt) "cnss_genl: " fmt
+
+#include <linux/err.h>
+#include <linux/module.h>
+#include <linux/delay.h>
+#include <net/netlink.h>
+#include <net/genetlink.h>
+
+#include "main.h"
+#include "debug.h"
+
+#define ICNSS_GENL_FAMILY_NAME "cnss-genl"
+#define ICNSS_GENL_MCAST_GROUP_NAME "cnss-genl-grp"
+#define ICNSS_GENL_VERSION 1
+#define ICNSS_GENL_DATA_LEN_MAX (15 * 1024)
+#define ICNSS_GENL_STR_LEN_MAX 16
+
+enum {
+ ICNSS_GENL_ATTR_MSG_UNSPEC,
+ ICNSS_GENL_ATTR_MSG_TYPE,
+ ICNSS_GENL_ATTR_MSG_FILE_NAME,
+ ICNSS_GENL_ATTR_MSG_TOTAL_SIZE,
+ ICNSS_GENL_ATTR_MSG_SEG_ID,
+ ICNSS_GENL_ATTR_MSG_END,
+ ICNSS_GENL_ATTR_MSG_DATA_LEN,
+ ICNSS_GENL_ATTR_MSG_DATA,
+ __ICNSS_GENL_ATTR_MAX,
+};
+
+#define ICNSS_GENL_ATTR_MAX (__ICNSS_GENL_ATTR_MAX - 1)
+
+enum {
+ ICNSS_GENL_CMD_UNSPEC,
+ ICNSS_GENL_CMD_MSG,
+ __ICNSS_GENL_CMD_MAX,
+};
+
+#define ICNSS_GENL_CMD_MAX (__ICNSS_GENL_CMD_MAX - 1)
+
+static struct nla_policy icnss_genl_msg_policy[ICNSS_GENL_ATTR_MAX + 1] = {
+ [ICNSS_GENL_ATTR_MSG_TYPE] = { .type = NLA_U8 },
+ [ICNSS_GENL_ATTR_MSG_FILE_NAME] = { .type = NLA_NUL_STRING,
+ .len = ICNSS_GENL_STR_LEN_MAX },
+ [ICNSS_GENL_ATTR_MSG_TOTAL_SIZE] = { .type = NLA_U32 },
+ [ICNSS_GENL_ATTR_MSG_SEG_ID] = { .type = NLA_U32 },
+ [ICNSS_GENL_ATTR_MSG_END] = { .type = NLA_U8 },
+ [ICNSS_GENL_ATTR_MSG_DATA_LEN] = { .type = NLA_U32 },
+ [ICNSS_GENL_ATTR_MSG_DATA] = { .type = NLA_BINARY,
+ .len = ICNSS_GENL_DATA_LEN_MAX },
+};
+
+static int icnss_genl_process_msg(struct sk_buff *skb, struct genl_info *info)
+{
+ return 0;
+}
+
+static struct genl_ops icnss_genl_ops[] = {
+ {
+ .cmd = ICNSS_GENL_CMD_MSG,
+ .policy = icnss_genl_msg_policy,
+ .doit = icnss_genl_process_msg,
+ },
+};
+
+static struct genl_multicast_group icnss_genl_mcast_grp[] = {
+ {
+ .name = ICNSS_GENL_MCAST_GROUP_NAME,
+ },
+};
+
+static struct genl_family icnss_genl_family = {
+ .id = 0,
+ .hdrsize = 0,
+ .name = ICNSS_GENL_FAMILY_NAME,
+ .version = ICNSS_GENL_VERSION,
+ .maxattr = ICNSS_GENL_ATTR_MAX,
+ .module = THIS_MODULE,
+ .ops = icnss_genl_ops,
+ .n_ops = ARRAY_SIZE(icnss_genl_ops),
+ .mcgrps = icnss_genl_mcast_grp,
+ .n_mcgrps = ARRAY_SIZE(icnss_genl_mcast_grp),
+};
+
+static int icnss_genl_send_data(u8 type, char *file_name, u32 total_size,
+ u32 seg_id, u8 end, u32 data_len, u8 *msg_buff)
+{
+ struct sk_buff *skb = NULL;
+ void *msg_header = NULL;
+ int ret = 0;
+ char filename[ICNSS_GENL_STR_LEN_MAX + 1];
+
+ icnss_pr_dbg("type: %u, file_name %s, total_size: %x, seg_id %u, end %u, data_len %u\n",
+ type, file_name, total_size, seg_id, end, data_len);
+
+ if (!file_name)
+ strlcpy(filename, "default", sizeof(filename));
+ else
+ strlcpy(filename, file_name, sizeof(filename));
+
+ skb = genlmsg_new(NLMSG_HDRLEN +
+ nla_total_size(sizeof(type)) +
+ nla_total_size(strlen(filename) + 1) +
+ nla_total_size(sizeof(total_size)) +
+ nla_total_size(sizeof(seg_id)) +
+ nla_total_size(sizeof(end)) +
+ nla_total_size(sizeof(data_len)) +
+ nla_total_size(data_len), GFP_KERNEL);
+ if (!skb)
+ return -ENOMEM;
+
+ msg_header = genlmsg_put(skb, 0, 0,
+ &icnss_genl_family, 0,
+ ICNSS_GENL_CMD_MSG);
+ if (!msg_header) {
+ ret = -ENOMEM;
+ goto fail;
+ }
+
+ ret = nla_put_u8(skb, ICNSS_GENL_ATTR_MSG_TYPE, type);
+ if (ret < 0)
+ goto fail;
+ ret = nla_put_string(skb, ICNSS_GENL_ATTR_MSG_FILE_NAME, filename);
+ if (ret < 0)
+ goto fail;
+ ret = nla_put_u32(skb, ICNSS_GENL_ATTR_MSG_TOTAL_SIZE, total_size);
+ if (ret < 0)
+ goto fail;
+ ret = nla_put_u32(skb, ICNSS_GENL_ATTR_MSG_SEG_ID, seg_id);
+ if (ret < 0)
+ goto fail;
+ ret = nla_put_u8(skb, ICNSS_GENL_ATTR_MSG_END, end);
+ if (ret < 0)
+ goto fail;
+ ret = nla_put_u32(skb, ICNSS_GENL_ATTR_MSG_DATA_LEN, data_len);
+ if (ret < 0)
+ goto fail;
+ ret = nla_put(skb, ICNSS_GENL_ATTR_MSG_DATA, data_len, msg_buff);
+ if (ret < 0)
+ goto fail;
+
+ genlmsg_end(skb, msg_header);
+ ret = genlmsg_multicast(&icnss_genl_family, skb, 0, 0, GFP_KERNEL);
+ if (ret < 0)
+ icnss_pr_err("Fail to send genl msg: %d\n", ret);
+
+ return ret;
+fail:
+ icnss_pr_err("Fail to generate genl msg: %d\n", ret);
+ if (skb)
+ nlmsg_free(skb);
+ return ret;
+}
+
+int icnss_genl_send_msg(void *buff, u8 type, char *file_name, u32 total_size)
+{
+ int ret = 0;
+ u8 *msg_buff = buff;
+ u32 remaining = total_size;
+ u32 seg_id = 0;
+ u32 data_len = 0;
+ u8 end = 0;
+ u8 retry;
+
+ icnss_pr_dbg("type: %u, total_size: %x\n", type, total_size);
+
+ while (remaining) {
+ if (remaining > ICNSS_GENL_DATA_LEN_MAX) {
+ data_len = ICNSS_GENL_DATA_LEN_MAX;
+ } else {
+ data_len = remaining;
+ end = 1;
+ }
+
+ for (retry = 0; retry < 2; retry++) {
+ ret = icnss_genl_send_data(type, file_name, total_size,
+ seg_id, end, data_len,
+ msg_buff);
+ if (ret >= 0)
+ break;
+ msleep(100);
+ }
+
+ if (ret < 0) {
+ icnss_pr_err("fail to send genl data, ret %d\n", ret);
+ return ret;
+ }
+
+ remaining -= data_len;
+ msg_buff += data_len;
+ seg_id++;
+ }
+
+ return ret;
+}
+
+int icnss_genl_init(void)
+{
+ int ret = 0;
+
+ ret = genl_register_family(&icnss_genl_family);
+ if (ret != 0)
+ icnss_pr_err("genl_register_family fail: %d\n", ret);
+
+ return ret;
+}
+
+void icnss_genl_exit(void)
+{
+ genl_unregister_family(&icnss_genl_family);
+}
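icnss_genl_send_msg() above streams a dump buffer as generic-netlink multicasts of at most ICNSS_GENL_DATA_LEN_MAX (15 KiB), tagging each chunk with a sequential seg_id and setting end only on the final chunk. The segmentation logic in isolation (send callback and names are illustrative):

#include <linux/kernel.h>
#include <linux/types.h>

#define DEMO_CHUNK_MAX (15 * 1024)      /* mirrors ICNSS_GENL_DATA_LEN_MAX */

/* Walk a buffer in DEMO_CHUNK_MAX pieces; the last piece carries end = 1. */
static int demo_send_in_chunks(u8 *buf, u32 total,
                               int (*send)(u8 *data, u32 len, u32 seg_id, u8 end))
{
        u32 remaining = total, seg_id = 0;
        int ret = 0;

        while (remaining) {
                u32 len = min_t(u32, remaining, DEMO_CHUNK_MAX);
                u8 end = (len == remaining);    /* last chunk? */

                ret = send(buf, len, seg_id, end);
                if (ret < 0)
                        return ret;

                buf += len;
                remaining -= len;
                seg_id++;
        }
        return ret;
}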
diff --git a/drivers/soc/qcom/icnss2/genl.h b/drivers/soc/qcom/icnss2/genl.h
new file mode 100644
index 0000000..6cc04c9
--- /dev/null
+++ b/drivers/soc/qcom/icnss2/genl.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright (c) 2019-2020, The Linux Foundation. All rights reserved. */
+
+#ifndef __ICNSS_GENL_H__
+#define __ICNSS_GENL_H__
+
+enum icnss_genl_msg_type {
+ ICNSS_GENL_MSG_TYPE_UNSPEC,
+ ICNSS_GENL_MSG_TYPE_QDSS,
+};
+
+int icnss_genl_init(void);
+void icnss_genl_exit(void);
+int icnss_genl_send_msg(void *buff, u8 type,
+ char *file_name, u32 total_size);
+
+#endif
diff --git a/drivers/soc/qcom/icnss2/main.c b/drivers/soc/qcom/icnss2/main.c
index 1c33f1a..b869497 100644
--- a/drivers/soc/qcom/icnss2/main.c
+++ b/drivers/soc/qcom/icnss2/main.c
@@ -43,6 +43,7 @@
#include "qmi.h"
#include "debug.h"
#include "power.h"
+#include "genl.h"
#define MAX_PROP_SIZE 32
#define NUM_LOG_PAGES 10
@@ -163,6 +164,12 @@ char *icnss_driver_event_to_str(enum icnss_driver_event_type type)
return "IDLE_RESTART";
case ICNSS_DRIVER_EVENT_FW_INIT_DONE_IND:
return "FW_INIT_DONE";
+ case ICNSS_DRIVER_EVENT_QDSS_TRACE_REQ_MEM:
+ return "QDSS_TRACE_REQ_MEM";
+ case ICNSS_DRIVER_EVENT_QDSS_TRACE_SAVE:
+ return "QDSS_TRACE_SAVE";
+ case ICNSS_DRIVER_EVENT_QDSS_TRACE_FREE:
+ return "QDSS_TRACE_FREE";
case ICNSS_DRIVER_EVENT_MAX:
return "EVENT_MAX";
}
@@ -728,6 +735,159 @@ static int icnss_driver_event_fw_init_done(struct icnss_priv *priv, void *data)
return ret;
}
+int icnss_alloc_qdss_mem(struct icnss_priv *priv)
+{
+ struct platform_device *pdev = priv->pdev;
+ struct icnss_fw_mem *qdss_mem = priv->qdss_mem;
+ int i, j;
+
+ for (i = 0; i < priv->qdss_mem_seg_len; i++) {
+ if (!qdss_mem[i].va && qdss_mem[i].size) {
+ qdss_mem[i].va =
+ dma_alloc_coherent(&pdev->dev,
+ qdss_mem[i].size,
+ &qdss_mem[i].pa,
+ GFP_KERNEL);
+ if (!qdss_mem[i].va) {
+ icnss_pr_err("Failed to allocate QDSS memory for FW, size: 0x%zx, type: %u, chuck-ID: %d\n",
+ qdss_mem[i].size,
+ qdss_mem[i].type, i);
+ break;
+ }
+ }
+ }
+
+ /* Best-effort allocation for QDSS trace */
+ if (i < priv->qdss_mem_seg_len) {
+ for (j = i; j < priv->qdss_mem_seg_len; j++) {
+ qdss_mem[j].type = 0;
+ qdss_mem[j].size = 0;
+ }
+ priv->qdss_mem_seg_len = i;
+ }
+
+ return 0;
+}
+
+void icnss_free_qdss_mem(struct icnss_priv *priv)
+{
+ struct platform_device *pdev = priv->pdev;
+ struct icnss_fw_mem *qdss_mem = priv->qdss_mem;
+ int i;
+
+ for (i = 0; i < priv->qdss_mem_seg_len; i++) {
+ if (qdss_mem[i].va && qdss_mem[i].size) {
+ icnss_pr_dbg("Freeing memory for QDSS: pa: %pa, size: 0x%zx, type: %u\n",
+ &qdss_mem[i].pa, qdss_mem[i].size,
+ qdss_mem[i].type);
+ dma_free_coherent(&pdev->dev,
+ qdss_mem[i].size, qdss_mem[i].va,
+ qdss_mem[i].pa);
+ qdss_mem[i].va = NULL;
+ qdss_mem[i].pa = 0;
+ qdss_mem[i].size = 0;
+ qdss_mem[i].type = 0;
+ }
+ }
+ priv->qdss_mem_seg_len = 0;
+}
+
+static int icnss_qdss_trace_req_mem_hdlr(struct icnss_priv *priv)
+{
+ int ret = 0;
+
+ ret = icnss_alloc_qdss_mem(priv);
+ if (ret < 0)
+ return ret;
+
+ return wlfw_qdss_trace_mem_info_send_sync(priv);
+}
+
+static void *icnss_qdss_trace_pa_to_va(struct icnss_priv *priv,
+ u64 pa, u32 size, int *seg_id)
+{
+ int i = 0;
+ struct icnss_fw_mem *qdss_mem = priv->qdss_mem;
+ u64 offset = 0;
+ void *va = NULL;
+ u64 local_pa;
+ u32 local_size;
+
+ for (i = 0; i < priv->qdss_mem_seg_len; i++) {
+ local_pa = (u64)qdss_mem[i].pa;
+ local_size = (u32)qdss_mem[i].size;
+ if (pa == local_pa && size <= local_size) {
+ va = qdss_mem[i].va;
+ break;
+ }
+ if (pa > local_pa &&
+ pa < local_pa + local_size &&
+ pa + size <= local_pa + local_size) {
+ offset = pa - local_pa;
+ va = qdss_mem[i].va + offset;
+ break;
+ }
+ }
+
+ *seg_id = i;
+ return va;
+}
+
+static int icnss_qdss_trace_save_hdlr(struct icnss_priv *priv,
+ void *data)
+{
+ struct icnss_qmi_event_qdss_trace_save_data *event_data = data;
+ struct icnss_fw_mem *qdss_mem = priv->qdss_mem;
+ int ret = 0;
+ int i;
+ void *va = NULL;
+ u64 pa;
+ u32 size;
+ int seg_id = 0;
+
+ if (!priv->qdss_mem_seg_len) {
+ icnss_pr_err("Memory for QDSS trace is not available\n");
+ return -ENOMEM;
+ }
+
+ if (event_data->mem_seg_len == 0) {
+ for (i = 0; i < priv->qdss_mem_seg_len; i++) {
+ ret = icnss_genl_send_msg(qdss_mem[i].va,
+ ICNSS_GENL_MSG_TYPE_QDSS,
+ event_data->file_name,
+ qdss_mem[i].size);
+ if (ret < 0) {
+ icnss_pr_err("Fail to save QDSS data: %d\n",
+ ret);
+ break;
+ }
+ }
+ } else {
+ for (i = 0; i < event_data->mem_seg_len; i++) {
+ pa = event_data->mem_seg[i].addr;
+ size = event_data->mem_seg[i].size;
+ va = icnss_qdss_trace_pa_to_va(priv, pa,
+ size, &seg_id);
+ if (!va) {
+ icnss_pr_err("Fail to find matching va for pa %pa\n",
+ &pa);
+ ret = -EINVAL;
+ break;
+ }
+ ret = icnss_genl_send_msg(va, ICNSS_GENL_MSG_TYPE_QDSS,
+ event_data->file_name, size);
+ if (ret < 0) {
+ icnss_pr_err("Fail to save QDSS data: %d\n",
+ ret);
+ break;
+ }
+ }
+ }
+
+ kfree(data);
+ return ret;
+}
+
static int icnss_driver_event_register_driver(struct icnss_priv *priv,
void *data)
{
@@ -955,6 +1115,13 @@ static int icnss_driver_event_idle_restart(struct icnss_priv *priv,
return ret;
}
+static int icnss_qdss_trace_free_hdlr(struct icnss_priv *priv)
+{
+ icnss_free_qdss_mem(priv);
+
+ return 0;
+}
+
static void icnss_driver_event_work(struct work_struct *work)
{
struct icnss_priv *priv =
@@ -1018,6 +1185,16 @@ static void icnss_driver_event_work(struct work_struct *work)
ret = icnss_driver_event_fw_init_done(priv,
event->data);
break;
+ case ICNSS_DRIVER_EVENT_QDSS_TRACE_REQ_MEM:
+ ret = icnss_qdss_trace_req_mem_hdlr(priv);
+ break;
+ case ICNSS_DRIVER_EVENT_QDSS_TRACE_SAVE:
+ ret = icnss_qdss_trace_save_hdlr(priv,
+ event->data);
+ break;
+ case ICNSS_DRIVER_EVENT_QDSS_TRACE_FREE:
+ ret = icnss_qdss_trace_free_hdlr(priv);
+ break;
default:
icnss_pr_err("Invalid Event type: %d", event->type);
kfree(event);
@@ -1436,6 +1613,32 @@ static int icnss_enable_recovery(struct icnss_priv *priv)
return 0;
}
+int icnss_qmi_send(struct device *dev, int type, void *cmd,
+ int cmd_len, void *cb_ctx,
+ int (*cb)(void *ctx, void *event, int event_len))
+{
+ struct icnss_priv *priv = icnss_get_plat_priv();
+ int ret;
+
+ if (!priv)
+ return -ENODEV;
+
+ if (!test_bit(ICNSS_WLFW_CONNECTED, &priv->state))
+ return -EINVAL;
+
+ priv->get_info_cb = cb;
+ priv->get_info_cb_ctx = cb_ctx;
+
+ ret = icnss_wlfw_get_info_send_sync(priv, type, cmd, cmd_len);
+ if (ret) {
+ priv->get_info_cb = NULL;
+ priv->get_info_cb_ctx = NULL;
+ }
+
+ return ret;
+}
+EXPORT_SYMBOL(icnss_qmi_send);
+
int __icnss_register_driver(struct icnss_driver_ops *ops,
struct module *owner, const char *mod_name)
{
@@ -2486,6 +2689,10 @@ static int icnss_probe(struct platform_device *pdev)
init_completion(&priv->unblock_shutdown);
+ ret = icnss_genl_init();
+ if (ret < 0)
+ icnss_pr_err("ICNSS genl init failed %d\n", ret);
+
icnss_pr_info("Platform driver probed successfully\n");
return 0;
@@ -2506,6 +2713,8 @@ static int icnss_remove(struct platform_device *pdev)
icnss_pr_info("Removing driver: state: 0x%lx\n", priv->state);
+ icnss_genl_exit();
+
device_init_wakeup(&priv->pdev->dev, false);
icnss_debugfs_destroy(priv);
@@ -2580,6 +2789,14 @@ static int icnss_pm_resume(struct device *dev)
!test_bit(ICNSS_DRIVER_PROBED, &priv->state))
goto out;
+ if (priv->device_id == WCN6750_DEVICE_ID) {
+ ret = wlfw_exit_power_save_send_msg(priv);
+ if (ret) {
+ priv->stats.pm_resume_err++;
+ return ret;
+ }
+ }
+
ret = priv->ops->pm_resume(dev);
out:
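icnss_qmi_send() added above gives WLAN clients a way to push an opaque command to firmware and receive the matching WLFW respond-get-info indication through a registered callback. A hypothetical caller (command contents and callback body are placeholders, not defined by this patch) might look like:

#include <linux/device.h>
#include <linux/printk.h>

/* Hypothetical consumer of icnss_qmi_send(); the payload is firmware-defined. */
static int demo_get_info_cb(void *ctx, void *event, int event_len)
{
        pr_info("fw info indication: %d bytes\n", event_len);
        return 0;
}

static int demo_query_fw(struct device *dev)
{
        u8 cmd[4] = { 0x01, 0x00, 0x00, 0x00 };         /* opaque command blob */

        return icnss_qmi_send(dev, 0 /* type */, cmd, sizeof(cmd),
                              NULL /* cb_ctx */, demo_get_info_cb);
}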
diff --git a/drivers/soc/qcom/icnss2/main.h b/drivers/soc/qcom/icnss2/main.h
index 26741b0..cd5d6dd 100644
--- a/drivers/soc/qcom/icnss2/main.h
+++ b/drivers/soc/qcom/icnss2/main.h
@@ -21,6 +21,7 @@
#define WCN6750_DEVICE_ID 0x6750
#define ADRASTEA_DEVICE_ID 0xabcd
+#define QMI_WLFW_MAX_NUM_MEM_SEG 32
extern uint64_t dynamic_feature_mask;
@@ -48,6 +49,9 @@ enum icnss_driver_event_type {
ICNSS_DRIVER_EVENT_IDLE_SHUTDOWN,
ICNSS_DRIVER_EVENT_IDLE_RESTART,
ICNSS_DRIVER_EVENT_FW_INIT_DONE_IND,
+ ICNSS_DRIVER_EVENT_QDSS_TRACE_REQ_MEM,
+ ICNSS_DRIVER_EVENT_QDSS_TRACE_SAVE,
+ ICNSS_DRIVER_EVENT_QDSS_TRACE_FREE,
ICNSS_DRIVER_EVENT_MAX,
};
@@ -130,6 +134,15 @@ struct icnss_clk_info {
u32 enabled;
};
+struct icnss_fw_mem {
+ size_t size;
+ void *va;
+ phys_addr_t pa;
+ u8 valid;
+ u32 type;
+ unsigned long attrs;
+};
+
struct icnss_stats {
struct {
uint32_t posted;
@@ -194,6 +207,9 @@ struct icnss_stats {
uint32_t device_info_req;
uint32_t device_info_resp;
uint32_t device_info_err;
+ u32 exit_power_save_req;
+ u32 exit_power_save_resp;
+ u32 exit_power_save_err;
};
#define WLFW_MAX_TIMESTAMP_LEN 32
@@ -322,6 +338,10 @@ struct icnss_priv {
bool is_ssr;
struct kobject *icnss_kobject;
atomic_t is_shutdown;
+ u32 qdss_mem_seg_len;
+ struct icnss_fw_mem qdss_mem[QMI_WLFW_MAX_NUM_MEM_SEG];
+ void *get_info_cb_ctx;
+ int (*get_info_cb)(void *ctx, void *event, int event_len);
};
struct icnss_reg_info {
diff --git a/drivers/soc/qcom/icnss2/qmi.c b/drivers/soc/qcom/icnss2/qmi.c
index 9dd0eed..3a96131 100644
--- a/drivers/soc/qcom/icnss2/qmi.c
+++ b/drivers/soc/qcom/icnss2/qmi.c
@@ -36,7 +36,9 @@
#define MAX_BDF_FILE_NAME 13
#define BDF_FILE_NAME_PREFIX "bdwlan"
#define ELF_BDF_FILE_NAME "bdwlan.elf"
+#define ELF_BDF_FILE_NAME_PREFIX "bdwlan.e"
#define BIN_BDF_FILE_NAME "bdwlan.bin"
+#define BIN_BDF_FILE_NAME_PREFIX "bdwlan.b"
#define REGDB_FILE_NAME "regdb.bin"
#define DUMMY_BDF_FILE_NAME "bdwlan.dmy"
@@ -338,6 +340,79 @@ int wlfw_device_info_send_msg(struct icnss_priv *priv)
return ret;
}
+int wlfw_exit_power_save_send_msg(struct icnss_priv *priv)
+{
+ int ret;
+ struct wlfw_exit_power_save_req_msg_v01 *req;
+ struct wlfw_exit_power_save_resp_msg_v01 *resp;
+ struct qmi_txn txn;
+
+ if (!priv)
+ return -ENODEV;
+
+ if (test_bit(ICNSS_FW_DOWN, &priv->state))
+ return -EINVAL;
+
+ icnss_pr_dbg("Sending exit power save, state: 0x%lx\n",
+ priv->state);
+
+ req = kzalloc(sizeof(*req), GFP_KERNEL);
+ if (!req)
+ return -ENOMEM;
+
+ resp = kzalloc(sizeof(*resp), GFP_KERNEL);
+ if (!resp) {
+ kfree(req);
+ return -ENOMEM;
+ }
+
+ priv->stats.exit_power_save_req++;
+
+ ret = qmi_txn_init(&priv->qmi, &txn,
+ wlfw_exit_power_save_resp_msg_v01_ei, resp);
+ if (ret < 0) {
+ icnss_qmi_fatal_err("Fail to init txn for exit power save%d\n",
+ ret);
+ goto out;
+ }
+
+ ret = qmi_send_request(&priv->qmi, NULL, &txn,
+ QMI_WLFW_EXIT_POWER_SAVE_REQ_V01,
+ WLFW_EXIT_POWER_SAVE_REQ_MSG_V01_MAX_MSG_LEN,
+ wlfw_exit_power_save_req_msg_v01_ei, req);
+ if (ret < 0) {
+ qmi_txn_cancel(&txn);
+ icnss_qmi_fatal_err("Fail to send exit power save req %d\n",
+ ret);
+ goto out;
+ }
+
+ ret = qmi_txn_wait(&txn, priv->ctrl_params.qmi_timeout);
+ if (ret < 0) {
+ icnss_qmi_fatal_err("Exit power save wait failed with ret %d\n",
+ ret);
+ goto out;
+ } else if (resp->resp.result != QMI_RESULT_SUCCESS_V01) {
+ icnss_qmi_fatal_err(
+ "QMI exit power save request rejected,result:%d error:%d\n",
+ resp->resp.result, resp->resp.error);
+ ret = -resp->resp.result;
+ goto out;
+ }
+
+ priv->stats.exit_power_save_resp++;
+
+ kfree(resp);
+ kfree(req);
+ return 0;
+
+out:
+ kfree(resp);
+ kfree(req);
+ priv->stats.exit_power_save_err++;
+ return ret;
+}
+
int wlfw_ind_register_send_sync_msg(struct icnss_priv *priv)
{
int ret;
@@ -381,6 +456,14 @@ int wlfw_ind_register_send_sync_msg(struct icnss_priv *priv)
req->fw_init_done_enable = 1;
req->cal_done_enable_valid = 1;
req->cal_done_enable = 1;
+ req->qdss_trace_req_mem_enable_valid = 1;
+ req->qdss_trace_req_mem_enable = 1;
+ req->qdss_trace_save_enable_valid = 1;
+ req->qdss_trace_save_enable = 1;
+ req->qdss_trace_free_enable_valid = 1;
+ req->qdss_trace_free_enable = 1;
+ req->respond_get_info_enable_valid = 1;
+ req->respond_get_info_enable = 1;
}
priv->stats.ind_register_req++;
@@ -549,24 +632,26 @@ static int icnss_get_bdf_file_name(struct icnss_priv *priv,
snprintf(filename, filename_len, ELF_BDF_FILE_NAME);
else if (priv->board_id < 0xFF)
snprintf(filename, filename_len,
- BDF_FILE_NAME_PREFIX "e%02x",
+ ELF_BDF_FILE_NAME_PREFIX "%02x",
priv->board_id);
else
snprintf(filename, filename_len,
- BDF_FILE_NAME_PREFIX "%03x",
- priv->board_id);
+ BDF_FILE_NAME_PREFIX "%02x.e%02x",
+ priv->board_id >> 8 & 0xFF,
+ priv->board_id & 0xFF);
break;
case ICNSS_BDF_BIN:
if (priv->board_id == 0xFF)
snprintf(filename, filename_len, BIN_BDF_FILE_NAME);
else if (priv->board_id < 0xFF)
snprintf(filename, filename_len,
- BDF_FILE_NAME_PREFIX "b%02x",
+ BIN_BDF_FILE_NAME_PREFIX "%02x",
priv->board_id);
else
snprintf(filename, filename_len,
- BDF_FILE_NAME_PREFIX "%03x",
- priv->board_id);
+ BDF_FILE_NAME_PREFIX "%02x.b%02x",
+ priv->board_id >> 8 & 0xFF,
+ priv->board_id & 0xFF);
break;
case ICNSS_BDF_REGDB:
snprintf(filename, filename_len, REGDB_FILE_NAME);
@@ -1313,6 +1398,82 @@ void icnss_handle_rejuvenate(struct icnss_priv *priv)
0, event_data);
}
+int wlfw_qdss_trace_mem_info_send_sync(struct icnss_priv *priv)
+{
+ struct wlfw_qdss_trace_mem_info_req_msg_v01 *req;
+ struct wlfw_qdss_trace_mem_info_resp_msg_v01 *resp;
+ struct qmi_txn txn;
+ struct icnss_fw_mem *qdss_mem = priv->qdss_mem;
+ int ret = 0;
+ int i;
+
+ icnss_pr_dbg("Sending QDSS trace mem info, state: 0x%lx\n",
+ priv->state);
+
+ req = kzalloc(sizeof(*req), GFP_KERNEL);
+ if (!req)
+ return -ENOMEM;
+
+ resp = kzalloc(sizeof(*resp), GFP_KERNEL);
+ if (!resp) {
+ kfree(req);
+ return -ENOMEM;
+ }
+
+ req->mem_seg_len = priv->qdss_mem_seg_len;
+ for (i = 0; i < req->mem_seg_len; i++) {
+ icnss_pr_dbg("Memory for FW, va: 0x%pK, pa: %pa, size: 0x%zx, type: %u\n",
+ qdss_mem[i].va, &qdss_mem[i].pa,
+ qdss_mem[i].size, qdss_mem[i].type);
+
+ req->mem_seg[i].addr = qdss_mem[i].pa;
+ req->mem_seg[i].size = qdss_mem[i].size;
+ req->mem_seg[i].type = qdss_mem[i].type;
+ }
+
+ ret = qmi_txn_init(&priv->qmi, &txn,
+ wlfw_qdss_trace_mem_info_resp_msg_v01_ei, resp);
+ if (ret < 0) {
+ icnss_pr_err("Fail to initialize txn for QDSS trace mem request: err %d\n",
+ ret);
+ goto out;
+ }
+
+ ret = qmi_send_request(&priv->qmi, NULL, &txn,
+ QMI_WLFW_QDSS_TRACE_MEM_INFO_REQ_V01,
+ WLFW_QDSS_TRACE_MEM_INFO_REQ_MSG_V01_MAX_MSG_LEN,
+ wlfw_qdss_trace_mem_info_req_msg_v01_ei, req);
+ if (ret < 0) {
+ qmi_txn_cancel(&txn);
+ icnss_pr_err("Fail to send QDSS trace mem info request: err %d\n",
+ ret);
+ goto out;
+ }
+
+ ret = qmi_txn_wait(&txn, priv->ctrl_params.qmi_timeout);
+ if (ret < 0) {
+ icnss_pr_err("Fail to wait for response of QDSS trace mem info request, err %d\n",
+ ret);
+ goto out;
+ }
+
+ if (resp->resp.result != QMI_RESULT_SUCCESS_V01) {
+ icnss_pr_err("QDSS trace mem info request failed, result: %d, err: %d\n",
+ resp->resp.result, resp->resp.error);
+ ret = -resp->resp.result;
+ goto out;
+ }
+
+ kfree(req);
+ kfree(resp);
+ return 0;
+
+out:
+ kfree(req);
+ kfree(resp);
+ return ret;
+}
+
static void fw_ready_ind_cb(struct qmi_handle *qmi, struct sockaddr_qrtr *sq,
struct qmi_txn *txn, const void *data)
{
@@ -1445,6 +1606,144 @@ static void fw_init_done_ind_cb(struct qmi_handle *qmi,
0, NULL);
}
+static void wlfw_qdss_trace_req_mem_ind_cb(struct qmi_handle *qmi,
+ struct sockaddr_qrtr *sq,
+ struct qmi_txn *txn,
+ const void *data)
+{
+ struct icnss_priv *priv =
+ container_of(qmi, struct icnss_priv, qmi);
+ const struct wlfw_qdss_trace_req_mem_ind_msg_v01 *ind_msg = data;
+ int i;
+
+ icnss_pr_dbg("Received QMI WLFW QDSS trace request mem indication\n");
+
+ if (!txn) {
+ icnss_pr_err("Spurious indication\n");
+ return;
+ }
+
+ if (priv->qdss_mem_seg_len) {
+ icnss_pr_err("Ignore double allocation for QDSS trace, current len %u\n",
+ priv->qdss_mem_seg_len);
+ return;
+ }
+
+ priv->qdss_mem_seg_len = ind_msg->mem_seg_len;
+ for (i = 0; i < priv->qdss_mem_seg_len; i++) {
+ icnss_pr_dbg("QDSS requests for memory, size: 0x%x, type: %u\n",
+ ind_msg->mem_seg[i].size,
+ ind_msg->mem_seg[i].type);
+ priv->qdss_mem[i].type = ind_msg->mem_seg[i].type;
+ priv->qdss_mem[i].size = ind_msg->mem_seg[i].size;
+ }
+
+ icnss_driver_event_post(priv, ICNSS_DRIVER_EVENT_QDSS_TRACE_REQ_MEM,
+ 0, NULL);
+}
+
+static void wlfw_qdss_trace_save_ind_cb(struct qmi_handle *qmi,
+ struct sockaddr_qrtr *sq,
+ struct qmi_txn *txn,
+ const void *data)
+{
+ struct icnss_priv *priv =
+ container_of(qmi, struct icnss_priv, qmi);
+ const struct wlfw_qdss_trace_save_ind_msg_v01 *ind_msg = data;
+ struct icnss_qmi_event_qdss_trace_save_data *event_data;
+ int i = 0;
+
+ icnss_pr_dbg("Received QMI WLFW QDSS trace save indication\n");
+
+ if (!txn) {
+ icnss_pr_err("Spurious indication\n");
+ return;
+ }
+
+ icnss_pr_dbg("QDSS_trace_save info: source %u, total_size %u, file_name_valid %u, file_name %s\n",
+ ind_msg->source, ind_msg->total_size,
+ ind_msg->file_name_valid, ind_msg->file_name);
+
+ if (ind_msg->source == 1)
+ return;
+
+ event_data = kzalloc(sizeof(*event_data), GFP_KERNEL);
+ if (!event_data)
+ return;
+
+ if (ind_msg->mem_seg_valid) {
+ if (ind_msg->mem_seg_len > QDSS_TRACE_SEG_LEN_MAX) {
+ icnss_pr_err("Invalid seg len %u\n",
+ ind_msg->mem_seg_len);
+ goto free_event_data;
+ }
+ icnss_pr_dbg("QDSS_trace_save seg len %u\n",
+ ind_msg->mem_seg_len);
+ event_data->mem_seg_len = ind_msg->mem_seg_len;
+ for (i = 0; i < ind_msg->mem_seg_len; i++) {
+ event_data->mem_seg[i].addr = ind_msg->mem_seg[i].addr;
+ event_data->mem_seg[i].size = ind_msg->mem_seg[i].size;
+ icnss_pr_dbg("seg-%d: addr 0x%llx size 0x%x\n",
+ i, ind_msg->mem_seg[i].addr,
+ ind_msg->mem_seg[i].size);
+ }
+ }
+
+ event_data->total_size = ind_msg->total_size;
+
+ if (ind_msg->file_name_valid)
+ strlcpy(event_data->file_name, ind_msg->file_name,
+ QDSS_TRACE_FILE_NAME_MAX + 1);
+ else
+ strlcpy(event_data->file_name, "qdss_trace",
+ QDSS_TRACE_FILE_NAME_MAX + 1);
+
+ icnss_driver_event_post(priv, ICNSS_DRIVER_EVENT_QDSS_TRACE_SAVE,
+ 0, event_data);
+
+ return;
+
+free_event_data:
+ kfree(event_data);
+}
+
+static void wlfw_qdss_trace_free_ind_cb(struct qmi_handle *qmi,
+ struct sockaddr_qrtr *sq,
+ struct qmi_txn *txn,
+ const void *data)
+{
+ struct icnss_priv *priv =
+ container_of(qmi, struct icnss_priv, qmi);
+
+ icnss_driver_event_post(priv, ICNSS_DRIVER_EVENT_QDSS_TRACE_FREE,
+ 0, NULL);
+}
+
+static void icnss_wlfw_respond_get_info_ind_cb(struct qmi_handle *qmi,
+ struct sockaddr_qrtr *sq,
+ struct qmi_txn *txn,
+ const void *data)
+{
+ struct icnss_priv *priv = container_of(qmi, struct icnss_priv, qmi);
+ const struct wlfw_respond_get_info_ind_msg_v01 *ind_msg = data;
+
+ icnss_pr_vdbg("Received QMI WLFW respond get info indication\n");
+
+ if (!txn) {
+ icnss_pr_err("Spurious indication\n");
+ return;
+ }
+
+ icnss_pr_vdbg("Extract message with event length: %d, type: %d, is last: %d, seq no: %d\n",
+ ind_msg->data_len, ind_msg->type,
+ ind_msg->is_last, ind_msg->seq_no);
+
+ if (priv->get_info_cb_ctx && priv->get_info_cb)
+ priv->get_info_cb(priv->get_info_cb_ctx,
+ (void *)ind_msg->data,
+ ind_msg->data_len);
+}
+
static struct qmi_msg_handler wlfw_msg_handlers[] = {
{
.type = QMI_INDICATION,
@@ -1489,6 +1788,38 @@ static struct qmi_msg_handler wlfw_msg_handlers[] = {
.decoded_size = sizeof(struct wlfw_fw_init_done_ind_msg_v01),
.fn = fw_init_done_ind_cb
},
+ {
+ .type = QMI_INDICATION,
+ .msg_id = QMI_WLFW_QDSS_TRACE_REQ_MEM_IND_V01,
+ .ei = wlfw_qdss_trace_req_mem_ind_msg_v01_ei,
+ .decoded_size =
+ sizeof(struct wlfw_qdss_trace_req_mem_ind_msg_v01),
+ .fn = wlfw_qdss_trace_req_mem_ind_cb
+ },
+ {
+ .type = QMI_INDICATION,
+ .msg_id = QMI_WLFW_QDSS_TRACE_SAVE_IND_V01,
+ .ei = wlfw_qdss_trace_save_ind_msg_v01_ei,
+ .decoded_size =
+ sizeof(struct wlfw_qdss_trace_save_ind_msg_v01),
+ .fn = wlfw_qdss_trace_save_ind_cb
+ },
+ {
+ .type = QMI_INDICATION,
+ .msg_id = QMI_WLFW_QDSS_TRACE_FREE_IND_V01,
+ .ei = wlfw_qdss_trace_free_ind_msg_v01_ei,
+ .decoded_size =
+ sizeof(struct wlfw_qdss_trace_free_ind_msg_v01),
+ .fn = wlfw_qdss_trace_free_ind_cb
+ },
+ {
+ .type = QMI_INDICATION,
+ .msg_id = QMI_WLFW_RESPOND_GET_INFO_IND_V01,
+ .ei = wlfw_respond_get_info_ind_msg_v01_ei,
+ .decoded_size =
+ sizeof(struct wlfw_respond_get_info_ind_msg_v01),
+ .fn = icnss_wlfw_respond_get_info_ind_cb
+ },
{}
};
@@ -1850,3 +2181,77 @@ int wlfw_host_cap_send_sync(struct icnss_priv *priv)
kfree(resp);
return ret;
}
+
+int icnss_wlfw_get_info_send_sync(struct icnss_priv *plat_priv, int type,
+ void *cmd, int cmd_len)
+{
+ struct wlfw_get_info_req_msg_v01 *req;
+ struct wlfw_get_info_resp_msg_v01 *resp;
+ struct qmi_txn txn;
+ int ret = 0;
+
+ icnss_pr_dbg("Sending get info message, type: %d, cmd length: %d, state: 0x%lx\n",
+ type, cmd_len, plat_priv->state);
+
+ if (cmd_len > QMI_WLFW_MAX_DATA_SIZE_V01)
+ return -EINVAL;
+
+ if (test_bit(ICNSS_FW_DOWN, &plat_priv->state))
+ return -EINVAL;
+
+ req = kzalloc(sizeof(*req), GFP_KERNEL);
+ if (!req)
+ return -ENOMEM;
+
+ resp = kzalloc(sizeof(*resp), GFP_KERNEL);
+ if (!resp) {
+ kfree(req);
+ return -ENOMEM;
+ }
+
+ req->type = type;
+ req->data_len = cmd_len;
+ memcpy(req->data, cmd, req->data_len);
+
+ ret = qmi_txn_init(&plat_priv->qmi, &txn,
+ wlfw_get_info_resp_msg_v01_ei, resp);
+ if (ret < 0) {
+ icnss_pr_err("Failed to initialize txn for get info request, err: %d\n",
+ ret);
+ goto out;
+ }
+
+ ret = qmi_send_request(&plat_priv->qmi, NULL, &txn,
+ QMI_WLFW_GET_INFO_REQ_V01,
+ WLFW_GET_INFO_REQ_MSG_V01_MAX_MSG_LEN,
+ wlfw_get_info_req_msg_v01_ei, req);
+ if (ret < 0) {
+ qmi_txn_cancel(&txn);
+ icnss_pr_err("Failed to send get info request, err: %d\n",
+ ret);
+ goto out;
+ }
+
+ ret = qmi_txn_wait(&txn, plat_priv->ctrl_params.qmi_timeout);
+ if (ret < 0) {
+ icnss_pr_err("Failed to wait for response of get info request, err: %d\n",
+ ret);
+ goto out;
+ }
+
+ if (resp->resp.result != QMI_RESULT_SUCCESS_V01) {
+ icnss_pr_err("Get info request failed, result: %d, err: %d\n",
+ resp->resp.result, resp->resp.error);
+ ret = -resp->resp.result;
+ goto out;
+ }
+
+ kfree(req);
+ kfree(resp);
+ return 0;
+
+out:
+ kfree(req);
+ kfree(resp);
+ return ret;
+}
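wlfw_exit_power_save_send_msg(), wlfw_qdss_trace_mem_info_send_sync() and icnss_wlfw_get_info_send_sync() above all follow the same synchronous QMI transaction shape. A condensed sketch of that shape, with generic request/response placeholders standing in for the generated WLFW structures:

#include <linux/soc/qcom/qmi.h>

/*
 * Generic shape of the sync QMI exchanges added in this file:
 * init a txn against the response ei, send the request, wait with a
 * timeout, then translate the QMI result code into an errno.
 */
static int demo_qmi_send_sync(struct qmi_handle *qmi, unsigned int msg_id,
                              size_t max_msg_len,
                              struct qmi_elem_info *req_ei, void *req,
                              struct qmi_elem_info *resp_ei, void *resp,
                              struct qmi_response_type_v01 *resp_result,
                              unsigned long timeout)
{
        struct qmi_txn txn;
        int ret;

        ret = qmi_txn_init(qmi, &txn, resp_ei, resp);
        if (ret < 0)
                return ret;

        ret = qmi_send_request(qmi, NULL, &txn, msg_id, max_msg_len,
                               req_ei, req);
        if (ret < 0) {
                qmi_txn_cancel(&txn);
                return ret;
        }

        ret = qmi_txn_wait(&txn, timeout);
        if (ret < 0)
                return ret;

        if (resp_result->result != QMI_RESULT_SUCCESS_V01)
                return -resp_result->result;

        return 0;
}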
diff --git a/drivers/soc/qcom/icnss2/qmi.h b/drivers/soc/qcom/icnss2/qmi.h
index 964579b..c4c42ce 100644
--- a/drivers/soc/qcom/icnss2/qmi.h
+++ b/drivers/soc/qcom/icnss2/qmi.h
@@ -6,6 +6,21 @@
#ifndef __ICNSS_QMI_H__
#define __ICNSS_QMI_H__
+#define QDSS_TRACE_SEG_LEN_MAX 32
+#define QDSS_TRACE_FILE_NAME_MAX 16
+
+struct icnss_mem_seg {
+ u64 addr;
+ u32 size;
+};
+
+struct icnss_qmi_event_qdss_trace_save_data {
+ u32 total_size;
+ u32 mem_seg_len;
+ struct icnss_mem_seg mem_seg[QDSS_TRACE_SEG_LEN_MAX];
+ char file_name[QDSS_TRACE_FILE_NAME_MAX + 1];
+};
+
#ifndef CONFIG_ICNSS2_QMI
static inline int wlfw_ind_register_send_sync_msg(struct icnss_priv *priv)
@@ -108,6 +123,22 @@ int icnss_wlfw_bdf_dnld_send_sync(struct icnss_priv *priv, u32 bdf_type)
{
return 0;
}
+
+int wlfw_qdss_trace_mem_info_send_sync(struct icnss_priv *priv)
+{
+ return 0;
+}
+
+int wlfw_exit_power_save_send_msg(struct icnss_priv *priv)
+{
+ return 0;
+}
+
+int icnss_wlfw_get_info_send_sync(struct icnss_priv *priv, int type,
+ void *cmd, int cmd_len)
+{
+ return 0;
+}
#else
int wlfw_ind_register_send_sync_msg(struct icnss_priv *priv);
int icnss_connect_to_fw_server(struct icnss_priv *priv, void *data);
@@ -142,6 +173,10 @@ int wlfw_device_info_send_msg(struct icnss_priv *priv);
int wlfw_wlan_mode_send_sync_msg(struct icnss_priv *priv,
enum wlfw_driver_mode_enum_v01 mode);
int icnss_wlfw_bdf_dnld_send_sync(struct icnss_priv *priv, u32 bdf_type);
+int wlfw_qdss_trace_mem_info_send_sync(struct icnss_priv *priv);
+int wlfw_exit_power_save_send_msg(struct icnss_priv *priv);
+int icnss_wlfw_get_info_send_sync(struct icnss_priv *priv, int type,
+ void *cmd, int cmd_len);
#endif
#endif /* __ICNSS_QMI_H__*/
diff --git a/drivers/soc/qcom/icnss_qmi.c b/drivers/soc/qcom/icnss_qmi.c
index c955911..3141600 100644
--- a/drivers/soc/qcom/icnss_qmi.c
+++ b/drivers/soc/qcom/icnss_qmi.c
@@ -1255,12 +1255,22 @@ int icnss_connect_to_fw_server(struct icnss_priv *priv, void *data)
int icnss_clear_server(struct icnss_priv *priv)
{
+ int ret;
+
if (!priv)
return -ENODEV;
icnss_pr_info("QMI Service Disconnected: 0x%lx\n", priv->state);
clear_bit(ICNSS_WLFW_CONNECTED, &priv->state);
+ icnss_unregister_fw_service(priv);
+
+ ret = icnss_register_fw_service(priv);
+ if (ret < 0) {
+ icnss_pr_err("WLFW server registration failed\n");
+ ICNSS_ASSERT(0);
+ }
+
return 0;
}
diff --git a/drivers/soc/qcom/llcc-lagoon.c b/drivers/soc/qcom/llcc-lagoon.c
index 21c0e7c..3d19ff1 100644
--- a/drivers/soc/qcom/llcc-lagoon.c
+++ b/drivers/soc/qcom/llcc-lagoon.c
@@ -51,12 +51,13 @@
}
static struct llcc_slice_config lagoon_data[] = {
- SCT_ENTRY(LLCC_CPUSS, 1, 768, 1, 0, 0xFFF, 0x0, 0, 0, 0, 0, 1, 1),
- SCT_ENTRY(LLCC_MDM, 8, 256, 2, 0, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0),
+ SCT_ENTRY(LLCC_CPUSS, 1, 768, 1, 0, 0xFFF, 0x0, 0, 0, 0, 0, 1, 1),
+ SCT_ENTRY(LLCC_MDM, 8, 512, 2, 0, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0),
SCT_ENTRY(LLCC_GPUHTW, 11, 256, 1, 0, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0),
- SCT_ENTRY(LLCC_GPU, 12, 256, 1, 0, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0),
- SCT_ENTRY(LLCC_MDMPNG, 21, 768, 1, 1, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0),
+ SCT_ENTRY(LLCC_GPU, 12, 512, 1, 0, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0),
+ SCT_ENTRY(LLCC_MDMPNG, 21, 768, 0, 1, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0),
SCT_ENTRY(LLCC_NPU, 23, 768, 1, 0, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0),
+ SCT_ENTRY(LLCC_MODEMVPE, 29, 64, 1, 1, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0),
};
static int lagoon_qcom_llcc_probe(struct platform_device *pdev)
diff --git a/drivers/soc/qcom/msm_bus/msm_bus_dbg.c b/drivers/soc/qcom/msm_bus/msm_bus_dbg.c
index 5cb058d..f88c85a 100644
--- a/drivers/soc/qcom/msm_bus/msm_bus_dbg.c
+++ b/drivers/soc/qcom/msm_bus/msm_bus_dbg.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
- * Copyright (c) 2010-2012, 2014-2019, The Linux Foundation. All rights
+ * Copyright (c) 2010-2012, 2014-2020, The Linux Foundation. All rights
*/
#define pr_fmt(fmt) "AXI: %s(): " fmt, __func__
@@ -21,7 +21,6 @@
#include "msm_bus_core.h"
#include "msm_bus_adhoc.h"
-#define CREATE_TRACE_POINTS
#include <trace/events/trace_msm_bus.h>
#define MAX_BUFF_SIZE 4096
diff --git a/drivers/soc/qcom/msm_bus/msm_bus_dbg_rpmh.c b/drivers/soc/qcom/msm_bus/msm_bus_dbg_rpmh.c
index efe9d23..196e050 100644
--- a/drivers/soc/qcom/msm_bus/msm_bus_dbg_rpmh.c
+++ b/drivers/soc/qcom/msm_bus/msm_bus_dbg_rpmh.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
- * Copyright (c) 2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2019-2020, The Linux Foundation. All rights reserved.
*/
#define pr_fmt(fmt) "AXI: %s(): " fmt, __func__
@@ -21,7 +21,6 @@
#include "msm_bus_core.h"
#include "msm_bus_rpmh.h"
-#define CREATE_TRACE_POINTS
#include <trace/events/trace_msm_bus.h>
#define MAX_BUFF_SIZE 4096
diff --git a/drivers/soc/qcom/msm_bus/msm_bus_rules.c b/drivers/soc/qcom/msm_bus/msm_bus_rules.c
index e435ea7..124f8e9 100644
--- a/drivers/soc/qcom/msm_bus/msm_bus_rules.c
+++ b/drivers/soc/qcom/msm_bus/msm_bus_rules.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
- * Copyright (c) 2014-2018, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2014-2018, 2020, The Linux Foundation. All rights reserved.
*/
#include <linux/list_sort.h>
@@ -9,6 +9,7 @@
#include <linux/slab.h>
#include <linux/types.h>
#include <linux/msm-bus.h>
+#define CREATE_TRACE_POINTS
#include <trace/events/trace_msm_bus.h>
struct node_vote_info {
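The three msm_bus hunks above move CREATE_TRACE_POINTS out of the two debug files and into msm_bus_rules.c so the tracepoints declared in trace_msm_bus.h are instantiated exactly once. This is the standard kernel tracepoint idiom:

/* Exactly one .c file per trace header defines CREATE_TRACE_POINTS before
 * including it; that include emits the tracepoint definitions. */
#define CREATE_TRACE_POINTS
#include <trace/events/trace_msm_bus.h>

/* Every other .c file includes the same header without the define and only
 * gets the inline trace_...() callers; defining it in two translation units
 * would produce duplicate symbols at link time. */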
diff --git a/drivers/soc/qcom/msm_minidump.c b/drivers/soc/qcom/msm_minidump.c
index 56643227..7bd4951 100644
--- a/drivers/soc/qcom/msm_minidump.c
+++ b/drivers/soc/qcom/msm_minidump.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
- * Copyright (c) 2017-2018, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2017-2018,2020 The Linux Foundation. All rights reserved.
*/
#define pr_fmt(fmt) "Minidump: " fmt
@@ -355,6 +355,7 @@ static int msm_minidump_add_header(void)
struct elf_phdr *phdr;
unsigned int strtbl_off, elfh_size, phdr_off;
char *banner;
+ size_t linux_banner_len = strlen(linux_banner);
/* Header buffer contains:
* elf header, MAX_NUM_ENTRIES+4 of section and program elf headers,
@@ -425,7 +426,7 @@ static int msm_minidump_add_header(void)
/* 4th section is linux banner */
banner = (char *)ehdr + strtbl_off + MAX_STRTBL_SIZE;
- strlcpy(banner, linux_banner, strlen(linux_banner) + 1);
+ strlcpy(banner, linux_banner, linux_banner_len + 1);
shdr->sh_type = SHT_PROGBITS;
shdr->sh_offset = (elf_addr_t)(strtbl_off + MAX_STRTBL_SIZE);
diff --git a/drivers/soc/qcom/qdss_bridge.c b/drivers/soc/qcom/qdss_bridge.c
index d62e566..92af4dcb6 100644
--- a/drivers/soc/qcom/qdss_bridge.c
+++ b/drivers/soc/qcom/qdss_bridge.c
@@ -108,7 +108,6 @@ static int qdss_create_buf_tbl(struct qdss_bridge_drvdata *drvdata)
buf = kzalloc(drvdata->mtu, GFP_KERNEL);
usb_req = kzalloc(sizeof(*usb_req), GFP_KERNEL);
- init_completion(&usb_req->write_done);
entry->buf = buf;
entry->usb_req = usb_req;
@@ -450,17 +449,22 @@ static void usb_notifier(void *priv, unsigned int event,
{
struct qdss_bridge_drvdata *drvdata = priv;
- if (!drvdata)
+ if (!drvdata || drvdata->mode != MHI_TRANSFER_TYPE_USB
+ || drvdata->opened == DISABLE) {
+ pr_err_ratelimited("%s can't be called in invalid status.\n",
+ __func__);
return;
+ }
switch (event) {
case USB_QDSS_CONNECT:
- usb_qdss_alloc_req(ch, drvdata->nr_trbs, 0);
+ usb_qdss_alloc_req(ch, drvdata->nr_trbs);
mhi_queue_read(drvdata);
break;
case USB_QDSS_DISCONNECT:
- /* Leave MHI/USB open.Only close on MHI disconnect */
+ if (drvdata->opened == ENABLE)
+ usb_qdss_free_req(drvdata->usb_ch);
break;
case USB_QDSS_DATA_WRITE_DONE:
diff --git a/drivers/soc/qcom/rpm_master_stat.c b/drivers/soc/qcom/rpm_master_stat.c
index 6ae297a..eda94c6 100644
--- a/drivers/soc/qcom/rpm_master_stat.c
+++ b/drivers/soc/qcom/rpm_master_stat.c
@@ -1,7 +1,7 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
- * Copyright (c) 2012-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2012-2020, The Linux Foundation. All rights reserved.
*/
#include <linux/debugfs.h>
@@ -402,15 +402,17 @@ static struct msm_rpm_master_stats_platform_data
*/
for (i = 0; i < pdata->num_masters; i++) {
const char *master_name;
+ size_t master_name_len;
of_property_read_string_index(node, "qcom,masters",
i, &master_name);
+ master_name_len = strlen(master_name);
pdata->masters[i] = devm_kzalloc(dev, sizeof(char) *
- strlen(master_name) + 1, GFP_KERNEL);
+ master_name_len + 1, GFP_KERNEL);
if (!pdata->masters[i])
goto err;
strlcpy(pdata->masters[i], master_name,
- strlen(master_name) + 1);
+ master_name_len + 1);
}
return pdata;
err:
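The rpm_master_stat hunk (like the msm_minidump one above it) simply caches strlen() once and reuses the length for both the allocation size and the bounded copy. As a stand-alone pattern (plain kzalloc instead of devm, names illustrative):

#include <linux/slab.h>
#include <linux/string.h>

/* Duplicate a NUL-terminated string, computing its length exactly once. */
static char *demo_dup_name(const char *name)
{
        size_t len = strlen(name);
        char *copy = kzalloc(len + 1, GFP_KERNEL);

        if (copy)
                strlcpy(copy, name, len + 1);
        return copy;
}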
diff --git a/drivers/soc/qcom/smcinvoke.c b/drivers/soc/qcom/smcinvoke.c
index 0a38c45..c28870c 100644
--- a/drivers/soc/qcom/smcinvoke.c
+++ b/drivers/soc/qcom/smcinvoke.c
@@ -3,6 +3,8 @@
* Copyright (c) 2016-2020, The Linux Foundation. All rights reserved.
*/
+#define pr_fmt(fmt) "smcinvoke: %s: " fmt, __func__
+
#include <linux/module.h>
#include <linux/mod_devicetable.h>
#include <linux/device.h>
@@ -400,8 +402,10 @@ static int release_mem_obj_locked(int32_t tzhandle)
struct smcinvoke_mem_obj *mem_obj = find_mem_obj_locked(
TZHANDLE_GET_OBJID(tzhandle), is_mem_regn_obj);
- if (!mem_obj)
+ if (!mem_obj) {
+ pr_err("memory object not found\n");
return OBJECT_ERROR_BADOBJ;
+ }
if (is_mem_regn_obj)
kref_put(&mem_obj->mem_regn_ref_cnt, del_mem_regn_obj_locked);
@@ -432,8 +436,10 @@ static int get_pending_cbobj_locked(uint16_t srvr_id, int16_t obj_id)
struct smcinvoke_cbobj *obj = NULL;
struct smcinvoke_server_info *server = get_cb_server_locked(srvr_id);
- if (!server)
+ if (!server) {
+ pr_err("%s, server id : %u not found\n", __func__, srvr_id);
return OBJECT_ERROR_BADOBJ;
+ }
head = &server->pending_cbobjs;
list_for_each_entry(cbobj, head, list)
@@ -471,8 +477,10 @@ static int put_pending_cbobj_locked(uint16_t srvr_id, int16_t obj_id)
struct list_head *head = NULL;
struct smcinvoke_cbobj *cbobj = NULL;
- if (!srvr_info)
+ if (!srvr_info) {
+ pr_err("%s, server id : %u not found\n", __func__, srvr_id);
return ret;
+ }
head = &srvr_info->pending_cbobjs;
list_for_each_entry(cbobj, head, list)
@@ -784,8 +792,10 @@ static int32_t smcinvoke_release_mem_obj_locked(void *buf, size_t buf_len)
{
struct smcinvoke_tzcb_req *msg = buf;
- if (msg->hdr.counts != OBJECT_COUNTS_PACK(0, 0, 0, 0))
+ if (msg->hdr.counts != OBJECT_COUNTS_PACK(0, 0, 0, 0)) {
+ pr_err("Invalid object count in %s\n", __func__);
return OBJECT_ERROR_INVALID;
+ }
return release_tzhandle_locked(msg->hdr.tzhandle);
}
@@ -805,9 +815,10 @@ static int32_t smcinvoke_map_mem_region(void *buf, size_t buf_len)
struct sg_table *sgt = NULL;
if (msg->hdr.counts != OBJECT_COUNTS_PACK(0, 1, 1, 1) ||
- (buf_len - msg->args[0].b.offset < msg->args[0].b.size))
+ (buf_len - msg->args[0].b.offset < msg->args[0].b.size)) {
+ pr_err("Invalid counts received for mapping mem obj\n");
return OBJECT_ERROR_INVALID;
-
+ }
/* args[0] = BO, args[1] = OI, args[2] = OO */
ob = buf + msg->args[0].b.offset;
oo = &msg->args[2].handle;
@@ -817,6 +828,7 @@ static int32_t smcinvoke_map_mem_region(void *buf, size_t buf_len)
SMCINVOKE_MEM_RGN_OBJ);
if (!mem_obj) {
mutex_unlock(&g_smcinvoke_lock);
+ pr_err("Memory object not found\n");
return OBJECT_ERROR_BADOBJ;
}
@@ -826,6 +838,7 @@ static int32_t smcinvoke_map_mem_region(void *buf, size_t buf_len)
&smcinvoke_pdev->dev);
if (IS_ERR(buf_attach)) {
ret = OBJECT_ERROR_KMEM;
+ pr_err("dma buf attach failed, ret: %d\n", ret);
goto out;
}
mem_obj->buf_attach = buf_attach;
@@ -833,6 +846,7 @@ static int32_t smcinvoke_map_mem_region(void *buf, size_t buf_len)
sgt = dma_buf_map_attachment(buf_attach, DMA_BIDIRECTIONAL);
if (IS_ERR(sgt)) {
ret = OBJECT_ERROR_KMEM;
+ pr_err("mapping dma buffers failed, ret: %d\n", ret);
goto out;
}
mem_obj->sgt = sgt;
@@ -840,12 +854,14 @@ static int32_t smcinvoke_map_mem_region(void *buf, size_t buf_len)
/* contiguous only => nents=1 */
if (sgt->nents != 1) {
ret = OBJECT_ERROR_INVALID;
+ pr_err("sg enries are not contigous, ret: %d\n", ret);
goto out;
}
mem_obj->p_addr = sg_dma_address(sgt->sgl);
mem_obj->p_addr_len = sgt->sgl->length;
if (!mem_obj->p_addr) {
ret = OBJECT_ERROR_INVALID;
+ pr_err("invalid physical address, ret: %d\n", ret);
goto out;
}
mem_obj->mem_map_obj_id = next_mem_map_obj_id_locked();
@@ -875,6 +891,7 @@ static void process_kernel_obj(void *buf, size_t buf_len)
cb_req->result = OBJECT_OK;
break;
default:
+ pr_err(" invalid operation for tz kernel object\n");
cb_req->result = OBJECT_ERROR_INVALID;
break;
}
@@ -902,8 +919,10 @@ static void process_tzcb_req(void *buf, size_t buf_len, struct file **arr_filp)
struct smcinvoke_tzcb_req *cb_req = NULL, *tmp_cb_req = NULL;
struct smcinvoke_server_info *srvr_info = NULL;
- if (buf_len < sizeof(struct smcinvoke_tzcb_req))
+ if (buf_len < sizeof(struct smcinvoke_tzcb_req)) {
+ pr_err("smaller buffer length : %u\n", buf_len);
return;
+ }
cb_req = buf;
@@ -913,6 +932,7 @@ static void process_tzcb_req(void *buf, size_t buf_len, struct file **arr_filp)
} else if (TZHANDLE_IS_MEM_OBJ(cb_req->hdr.tzhandle)) {
return process_mem_obj(buf, buf_len);
} else if (!TZHANDLE_IS_CB_OBJ(cb_req->hdr.tzhandle)) {
+ pr_err("Request object is not a callback object\n");
cb_req->result = OBJECT_ERROR_INVALID;
return;
}
@@ -926,12 +946,16 @@ static void process_tzcb_req(void *buf, size_t buf_len, struct file **arr_filp)
if (!tmp_cb_req) {
/* we need to return error to caller so fill up result */
cb_req->result = OBJECT_ERROR_KMEM;
+ pr_err("failed to create copy of request, set result: %d\n",
+ cb_req->result);
return;
}
cb_txn = kzalloc(sizeof(*cb_txn), GFP_KERNEL);
if (!cb_txn) {
cb_req->result = OBJECT_ERROR_KMEM;
+ pr_err("failed to allocate memory for request, result: %d\n",
+ cb_req->result);
kfree(tmp_cb_req);
return;
}
@@ -950,6 +974,7 @@ static void process_tzcb_req(void *buf, size_t buf_len, struct file **arr_filp)
TZHANDLE_GET_SERVER(cb_req->hdr.tzhandle));
if (!srvr_info || srvr_info->state == SMCINVOKE_SERVER_STATE_DEFUNCT) {
/* ret equals Object_ERROR_DEFUNCT, at this point go to out */
+ pr_err("sever is either invalid or defunct\n");
mutex_unlock(&g_smcinvoke_lock);
goto out;
}
@@ -961,12 +986,11 @@ static void process_tzcb_req(void *buf, size_t buf_len, struct file **arr_filp)
* we need not worry that server_info will be deleted because as long
* as this CBObj is served by this server, srvr_info will be valid.
*/
- if (wq_has_sleeper(&srvr_info->req_wait_q)) {
- wake_up_interruptible_all(&srvr_info->req_wait_q);
- ret = wait_event_interruptible(srvr_info->rsp_wait_q,
- (cb_txn->state == SMCINVOKE_REQ_PROCESSED) ||
- (srvr_info->state == SMCINVOKE_SERVER_STATE_DEFUNCT));
- }
+ wake_up_interruptible_all(&srvr_info->req_wait_q);
+ ret = wait_event_interruptible(srvr_info->rsp_wait_q,
+ (cb_txn->state == SMCINVOKE_REQ_PROCESSED) ||
+ (srvr_info->state == SMCINVOKE_SERVER_STATE_DEFUNCT));
+
out:
/*
* we could be here because of either: a. Req is PROCESSED
@@ -983,6 +1007,7 @@ static void process_tzcb_req(void *buf, size_t buf_len, struct file **arr_filp)
} else if (!srvr_info ||
srvr_info->state == SMCINVOKE_SERVER_STATE_DEFUNCT) {
cb_req->result = OBJECT_ERROR_DEFUNCT;
+ pr_err("server invalid, res: %d\n", cb_req->result);
} else {
pr_debug("%s wait_event interrupted ret = %d\n", __func__, ret);
cb_req->result = OBJECT_ERROR_ABORT;
@@ -1460,14 +1485,16 @@ static long process_server_req(struct file *filp, unsigned int cmd,
struct smcinvoke_server server_req = {0};
struct smcinvoke_server_info *server_info = NULL;
- if (_IOC_SIZE(cmd) != sizeof(server_req))
+ if (_IOC_SIZE(cmd) != sizeof(server_req)) {
+ pr_err("invalid command size received for server request\n");
return -EINVAL;
-
+ }
ret = copy_from_user(&server_req, (void __user *)(uintptr_t)arg,
sizeof(server_req));
- if (ret)
+ if (ret) {
+ pr_err("copying server request from user failed\n");
return -EFAULT;
-
+ }
server_info = kzalloc(sizeof(*server_info), GFP_KERNEL);
if (!server_info)
return -ENOMEM;
@@ -1507,29 +1534,43 @@ static long process_accept_req(struct file *filp, unsigned int cmd,
struct smcinvoke_cb_txn *cb_txn = NULL;
struct smcinvoke_server_info *server_info = NULL;
- if (_IOC_SIZE(cmd) != sizeof(struct smcinvoke_accept))
+ if (_IOC_SIZE(cmd) != sizeof(struct smcinvoke_accept)) {
+ pr_err("command size invalid for accept request\n");
return -EINVAL;
+ }
if (copy_from_user(&user_args, (void __user *)arg,
- sizeof(struct smcinvoke_accept)))
+ sizeof(struct smcinvoke_accept))) {
+ pr_err("copying accept request from user failed\n");
return -EFAULT;
+ }
- if (user_args.argsize != sizeof(union smcinvoke_arg))
+ if (user_args.argsize != sizeof(union smcinvoke_arg)) {
+ pr_err("arguments size is invalid for accept thread\n");
return -EINVAL;
+ }
/* ACCEPT is available only on server obj */
- if (server_obj->context_type != SMCINVOKE_OBJ_TYPE_SERVER)
+ if (server_obj->context_type != SMCINVOKE_OBJ_TYPE_SERVER) {
+ pr_err("invalid object type received for accept req\n");
return -EPERM;
+ }
mutex_lock(&g_smcinvoke_lock);
server_info = get_cb_server_locked(server_obj->server_id);
- mutex_unlock(&g_smcinvoke_lock);
- if (!server_info)
+
+ if (!server_info) {
+ pr_err("No matching server with server id : %u found\n",
+ server_obj->server_id);
+ mutex_unlock(&g_smcinvoke_lock);
return -EINVAL;
+ }
if (server_info->state == SMCINVOKE_SERVER_STATE_DEFUNCT)
server_info->state = 0;
+ mutex_unlock(&g_smcinvoke_lock);
+
/* First check if it has response otherwise wait for req */
if (user_args.has_resp) {
mutex_lock(&g_smcinvoke_lock);
@@ -1602,6 +1643,7 @@ static long process_accept_req(struct file *filp, unsigned int cmd,
ret = marshal_in_tzcb_req(cb_txn, &user_args,
server_obj->server_id);
if (ret) {
+ pr_err("failed to marshal in the callback request\n");
cb_txn->cb_req->result = OBJECT_ERROR_UNAVAIL;
cb_txn->state = SMCINVOKE_REQ_PROCESSED;
kref_put(&cb_txn->ref_cnt, delete_cb_txn);
@@ -1620,6 +1662,10 @@ static long process_accept_req(struct file *filp, unsigned int cmd,
out:
if (server_info)
kref_put(&server_info->ref_cnt, destroy_cb_server);
+
+ if (ret && ret != -ERESTARTSYS)
+ pr_err("accept thread returning with ret: %d\n", ret);
+
return ret;
}
@@ -1645,18 +1691,26 @@ static long process_invoke_req(struct file *filp, unsigned int cmd,
int32_t tzhandles_to_release[OBJECT_COUNTS_MAX_OO] = {0};
bool tz_acked = false;
- if (_IOC_SIZE(cmd) != sizeof(req))
+ if (_IOC_SIZE(cmd) != sizeof(req)) {
+ pr_err("command size for invoke req is invalid\n");
return -EINVAL;
+ }
- if (tzobj->context_type != SMCINVOKE_OBJ_TYPE_TZ_OBJ)
+ if (tzobj->context_type != SMCINVOKE_OBJ_TYPE_TZ_OBJ) {
+ pr_err("object type for invoke req is invalid\n");
return -EPERM;
+ }
ret = copy_from_user(&req, (void __user *)arg, sizeof(req));
- if (ret)
+ if (ret) {
+ pr_err("copying invoke req failed\n");
return -EFAULT;
+ }
- if (req.argsize != sizeof(union smcinvoke_arg))
+ if (req.argsize != sizeof(union smcinvoke_arg)) {
+ pr_err("arguments size for invoke req is invalid\n");
return -EINVAL;
+ }
nr_args = OBJECT_COUNTS_NUM_buffers(req.counts) +
OBJECT_COUNTS_NUM_objects(req.counts);
@@ -1679,6 +1733,7 @@ static long process_invoke_req(struct file *filp, unsigned int cmd,
ret = qtee_shmbridge_allocate_shm(inmsg_size, &in_shm);
if (ret) {
ret = -ENOMEM;
+ pr_err("shmbridge alloc failed for in msg in invoke req\n");
goto out;
}
in_msg = in_shm.vaddr;
@@ -1689,14 +1744,17 @@ static long process_invoke_req(struct file *filp, unsigned int cmd,
ret = qtee_shmbridge_allocate_shm(outmsg_size, &out_shm);
if (ret) {
ret = -ENOMEM;
+ pr_err("shmbridge alloc failed for out msg in invoke req\n");
goto out;
}
out_msg = out_shm.vaddr;
ret = marshal_in_invoke_req(&req, args_buf, tzobj->tzhandle, in_msg,
inmsg_size, filp_to_release, tzhandles_to_release);
- if (ret)
+ if (ret) {
+ pr_err("failed to marshal in invoke req, ret :%d\n", ret);
goto out;
+ }
ret = prepare_send_scm_msg(in_msg, in_shm.paddr, inmsg_size,
out_msg, out_shm.paddr, outmsg_size,
@@ -1706,8 +1764,10 @@ static long process_invoke_req(struct file *filp, unsigned int cmd,
* If scm_call is success, TZ owns responsibility to release
* refs for local objs.
*/
- if (!tz_acked)
+ if (!tz_acked) {
+ pr_debug("scm call successful\n");
goto out;
+ }
memset(tzhandles_to_release, 0, sizeof(tzhandles_to_release));
/*
@@ -1738,6 +1798,10 @@ static long process_invoke_req(struct file *filp, unsigned int cmd,
qtee_shmbridge_free_shm(&in_shm);
qtee_shmbridge_free_shm(&out_shm);
kfree(args_buf);
+
+ if (ret)
+ pr_err("invoke thread returning with ret = %d\n", ret);
+
return ret;
}
@@ -1818,12 +1882,14 @@ static int smcinvoke_release(struct inode *nodp, struct file *filp)
ret = qtee_shmbridge_allocate_shm(SMCINVOKE_TZ_MIN_BUF_SIZE, &in_shm);
if (ret) {
ret = -ENOMEM;
+ pr_err("shmbridge alloc failed for in msg in release\n");
goto out;
}
ret = qtee_shmbridge_allocate_shm(SMCINVOKE_TZ_MIN_BUF_SIZE, &out_shm);
if (ret) {
ret = -ENOMEM;
+ pr_err("shmbridge alloc failed for out msg in release\n");
goto out;
}
@@ -1883,7 +1949,6 @@ static int smcinvoke_probe(struct platform_device *pdev)
goto exit_destroy_device;
}
smcinvoke_pdev = pdev;
- cb_reqs_inflight = 0;
return 0;
@@ -1910,12 +1975,15 @@ static int smcinvoke_remove(struct platform_device *pdev)
static int __maybe_unused smcinvoke_suspend(struct platform_device *pdev,
pm_message_t state)
{
+ int ret = 0;
+
+ mutex_lock(&g_smcinvoke_lock);
if (cb_reqs_inflight) {
pr_err("Failed to suspend smcinvoke driver\n");
- return -EIO;
+ ret = -EIO;
}
-
- return 0;
+ mutex_unlock(&g_smcinvoke_lock);
+ return ret;
}
static int __maybe_unused smcinvoke_resume(struct platform_device *pdev)
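The suspend fix above takes g_smcinvoke_lock around the cb_reqs_inflight test so the counter cannot change between the check and the decision to refuse suspend. The essence of the change, with the lock and counter passed in (names illustrative):

#include <linux/errno.h>
#include <linux/mutex.h>

/* Only inspect the shared in-flight counter while holding its lock. */
static int demo_suspend_check(struct mutex *lock, const int *inflight)
{
        int ret = 0;

        mutex_lock(lock);
        if (*inflight)
                ret = -EIO;     /* callback requests still pending in TZ */
        mutex_unlock(lock);

        return ret;
}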
diff --git a/drivers/soc/qcom/smem.c b/drivers/soc/qcom/smem.c
index 7888648..b5638df 100644
--- a/drivers/soc/qcom/smem.c
+++ b/drivers/soc/qcom/smem.c
@@ -1,7 +1,7 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (c) 2015, Sony Mobile Communications AB.
- * Copyright (c) 2012-2013, 2018-2019 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2012-2013, 2018-2020 The Linux Foundation. All rights reserved.
*/
#include <linux/hwspinlock.h>
@@ -192,6 +192,19 @@ struct smem_partition_header {
__le32 offset_free_cached;
__le32 reserved[3];
};
+/**
+ * struct smem_partition_desc - descriptor for partition
+ * @virt_base: starting virtual address of partition
+ * @phys_base: starting physical address of partition
+ * @cacheline: alignment for "cached" entries
+ * @size: size of partition
+ */
+struct smem_partition_desc {
+ void __iomem *virt_base;
+ u32 phys_base;
+ u32 cacheline;
+ u32 size;
+};
static const u8 SMEM_PART_MAGIC[] = { 0x24, 0x50, 0x52, 0x54 };
@@ -248,9 +261,9 @@ struct smem_region {
* struct qcom_smem - device data for the smem device
* @dev: device pointer
* @hwlock: reference to a hwspinlock
- * @global_partition_entry: pointer to global partition entry when in use
- * @ptable_entries: list of pointers to partitions table entry of current
- * processor/host
+ * @ptable_base: virtual base of partition table
+ * @global_partition_desc: descriptor for global partition when in use
+ * @partition_desc: list of partition descriptor of current processor/host
* @item_count: max accepted item number
* @num_regions: number of @regions
* @regions: list of the memory regions defining the shared memory
@@ -260,9 +273,10 @@ struct qcom_smem {
struct hwspinlock *hwlock;
- struct smem_ptable_entry *global_partition_entry;
- struct smem_ptable_entry *ptable_entries[SMEM_HOST_COUNT];
u32 item_count;
+ struct smem_ptable *ptable_base;
+ struct smem_partition_desc global_partition_desc;
+ struct smem_partition_desc partition_desc[SMEM_HOST_COUNT];
unsigned num_regions;
struct smem_region regions[0];
@@ -274,12 +288,6 @@ static struct qcom_smem *__smem;
/* Timeout (ms) for the trylock of remote spinlocks */
#define HWSPINLOCK_TIMEOUT 1000
-static struct smem_partition_header *
-ptable_entry_to_phdr(struct smem_ptable_entry *entry)
-{
- return __smem->regions[0].virt_base + le32_to_cpu(entry->offset);
-}
-
static struct smem_private_entry *
phdr_to_last_uncached_entry(struct smem_partition_header *phdr)
{
@@ -346,7 +354,7 @@ static void *cached_entry_to_item(struct smem_private_entry *e)
}
static int qcom_smem_alloc_private(struct qcom_smem *smem,
- struct smem_ptable_entry *entry,
+ struct smem_partition_desc *p_desc,
unsigned item,
size_t size)
{
@@ -356,8 +364,8 @@ static int qcom_smem_alloc_private(struct qcom_smem *smem,
void *cached;
void *p_end;
- phdr = ptable_entry_to_phdr(entry);
- p_end = (void *)phdr + le32_to_cpu(entry->size);
+ phdr = p_desc->virt_base;
+ p_end = (void *)phdr + p_desc->size;
hdr = phdr_to_first_uncached_entry(phdr);
end = phdr_to_last_uncached_entry(phdr);
@@ -450,7 +458,7 @@ static int qcom_smem_alloc_global(struct qcom_smem *smem,
*/
int qcom_smem_alloc(unsigned host, unsigned item, size_t size)
{
- struct smem_ptable_entry *entry;
+ struct smem_partition_desc *p_desc;
unsigned long flags;
int ret;
@@ -472,12 +480,12 @@ int qcom_smem_alloc(unsigned host, unsigned item, size_t size)
if (ret)
return ret;
- if (host < SMEM_HOST_COUNT && __smem->ptable_entries[host]) {
- entry = __smem->ptable_entries[host];
- ret = qcom_smem_alloc_private(__smem, entry, item, size);
- } else if (__smem->global_partition_entry) {
- entry = __smem->global_partition_entry;
- ret = qcom_smem_alloc_private(__smem, entry, item, size);
+ if (host < SMEM_HOST_COUNT && __smem->partition_desc[host].virt_base) {
+ p_desc = &__smem->partition_desc[host];
+ ret = qcom_smem_alloc_private(__smem, p_desc, item, size);
+ } else if (__smem->global_partition_desc.virt_base) {
+ p_desc = &__smem->global_partition_desc;
+ ret = qcom_smem_alloc_private(__smem, p_desc, item, size);
} else {
ret = qcom_smem_alloc_global(__smem, item, size);
}
@@ -528,22 +536,20 @@ static void *qcom_smem_get_global(struct qcom_smem *smem,
}
static void *qcom_smem_get_private(struct qcom_smem *smem,
- struct smem_ptable_entry *entry,
+ struct smem_partition_desc *p_desc,
unsigned item,
size_t *size)
{
struct smem_private_entry *e, *end;
struct smem_partition_header *phdr;
void *item_ptr, *p_end;
- u32 partition_size;
size_t cacheline;
u32 padding_data;
u32 e_size;
- phdr = ptable_entry_to_phdr(entry);
- partition_size = le32_to_cpu(entry->size);
- p_end = (void *)phdr + partition_size;
- cacheline = le32_to_cpu(entry->cacheline);
+ phdr = p_desc->virt_base;
+ p_end = (void *)phdr + p_desc->size;
+ cacheline = p_desc->cacheline;
e = phdr_to_first_uncached_entry(phdr);
end = phdr_to_last_uncached_entry(phdr);
@@ -560,7 +566,7 @@ static void *qcom_smem_get_private(struct qcom_smem *smem,
e_size = le32_to_cpu(e->size);
padding_data = le16_to_cpu(e->padding_data);
- if (e_size < partition_size
+ if (e_size < p_desc->size
&& padding_data < e_size)
*size = e_size - padding_data;
else
@@ -596,7 +602,7 @@ static void *qcom_smem_get_private(struct qcom_smem *smem,
e_size = le32_to_cpu(e->size);
padding_data = le16_to_cpu(e->padding_data);
- if (e_size < partition_size
+ if (e_size < p_desc->size
&& padding_data < e_size)
*size = e_size - padding_data;
else
@@ -635,7 +641,7 @@ static void *qcom_smem_get_private(struct qcom_smem *smem,
*/
void *qcom_smem_get(unsigned host, unsigned item, size_t *size)
{
- struct smem_ptable_entry *entry;
+ struct smem_partition_desc *p_desc;
unsigned long flags;
int ret;
void *ptr = ERR_PTR(-EPROBE_DEFER);
@@ -652,12 +658,12 @@ void *qcom_smem_get(unsigned host, unsigned item, size_t *size)
if (ret)
return ERR_PTR(ret);
- if (host < SMEM_HOST_COUNT && __smem->ptable_entries[host]) {
- entry = __smem->ptable_entries[host];
- ptr = qcom_smem_get_private(__smem, entry, item, size);
- } else if (__smem->global_partition_entry) {
- entry = __smem->global_partition_entry;
- ptr = qcom_smem_get_private(__smem, entry, item, size);
+ if (host < SMEM_HOST_COUNT && __smem->partition_desc[host].virt_base) {
+ p_desc = &__smem->partition_desc[host];
+ ptr = qcom_smem_get_private(__smem, p_desc, item, size);
+ } else if (__smem->global_partition_desc.virt_base) {
+ p_desc = &__smem->global_partition_desc;
+ ptr = qcom_smem_get_private(__smem, p_desc, item, size);
} else {
ptr = qcom_smem_get_global(__smem, item, size);
}
@@ -679,30 +685,30 @@ EXPORT_SYMBOL(qcom_smem_get);
int qcom_smem_get_free_space(unsigned host)
{
struct smem_partition_header *phdr;
- struct smem_ptable_entry *entry;
+ struct smem_partition_desc *p_desc;
struct smem_header *header;
unsigned ret;
if (!__smem)
return -EPROBE_DEFER;
- if (host < SMEM_HOST_COUNT && __smem->ptable_entries[host]) {
- entry = __smem->ptable_entries[host];
- phdr = ptable_entry_to_phdr(entry);
+ if (host < SMEM_HOST_COUNT && __smem->partition_desc[host].virt_base) {
+ p_desc = &__smem->partition_desc[host];
+ phdr = p_desc->virt_base;
ret = le32_to_cpu(phdr->offset_free_cached) -
le32_to_cpu(phdr->offset_free_uncached);
- if (ret > le32_to_cpu(entry->size))
+ if (ret > p_desc->size)
return -EINVAL;
- } else if (__smem->global_partition_entry) {
- entry = __smem->global_partition_entry;
- phdr = ptable_entry_to_phdr(entry);
+ } else if (__smem->global_partition_desc.virt_base) {
+ p_desc = &__smem->global_partition_desc;
+ phdr = p_desc->virt_base;
ret = le32_to_cpu(phdr->offset_free_cached) -
le32_to_cpu(phdr->offset_free_uncached);
- if (ret > le32_to_cpu(entry->size))
+ if (ret > p_desc->size)
return -EINVAL;
} else {
header = __smem->regions[0].virt_base;
@@ -716,6 +722,15 @@ int qcom_smem_get_free_space(unsigned host)
}
EXPORT_SYMBOL(qcom_smem_get_free_space);
+static int addr_in_range(void *virt_base, unsigned int size, void *addr)
+{
+ if (virt_base && addr >= virt_base &&
+ addr < virt_base + size)
+ return 1;
+
+ return 0;
+}
+
/**
* qcom_smem_virt_to_phys() - return the physical address associated
* with an smem item pointer (previously returned by qcom_smem_get()
@@ -725,17 +740,36 @@ EXPORT_SYMBOL(qcom_smem_get_free_space);
*/
phys_addr_t qcom_smem_virt_to_phys(void *p)
{
- unsigned i;
+ struct smem_partition_desc *p_desc;
+ struct smem_region *area;
+ u64 offset;
+ u32 i;
+
+ for (i = 0; i < SMEM_HOST_COUNT; i++) {
+ p_desc = &__smem->partition_desc[i];
+
+ if (addr_in_range(p_desc->virt_base, p_desc->size, p)) {
+ offset = p - p_desc->virt_base;
+
+ return (phys_addr_t)p_desc->phys_base + offset;
+ }
+ }
+
+ p_desc = &__smem->global_partition_desc;
+
+ if (addr_in_range(p_desc->virt_base, p_desc->size, p)) {
+ offset = p - p_desc->virt_base;
+
+ return (phys_addr_t)p_desc->phys_base + offset;
+ }
for (i = 0; i < __smem->num_regions; i++) {
- struct smem_region *region = &__smem->regions[i];
+ area = &__smem->regions[i];
- if (p < region->virt_base)
- continue;
- if (p < region->virt_base + region->size) {
- u64 offset = p - region->virt_base;
+ if (addr_in_range(area->virt_base, area->size, p)) {
+ offset = p - area->virt_base;
- return (phys_addr_t)region->aux_base + offset;
+ return (phys_addr_t)area->aux_base + offset;
}
}
@@ -759,7 +793,7 @@ static struct smem_ptable *qcom_smem_get_ptable(struct qcom_smem *smem)
struct smem_ptable *ptable;
u32 version;
- ptable = smem->regions[0].virt_base + smem->regions[0].size - SZ_4K;
+ ptable = smem->ptable_base;
if (memcmp(ptable->magic, SMEM_PTABLE_MAGIC, sizeof(ptable->magic)))
return ERR_PTR(-ENOENT);
@@ -793,11 +827,12 @@ static int qcom_smem_set_global_partition(struct qcom_smem *smem)
struct smem_partition_header *header;
struct smem_ptable_entry *entry;
struct smem_ptable *ptable;
+ u32 phys_addr;
u32 host0, host1, size;
bool found = false;
int i;
- if (smem->global_partition_entry) {
+ if (smem->global_partition_desc.virt_base) {
dev_err(smem->dev, "Already found the global partition\n");
return -EINVAL;
}
@@ -827,7 +862,12 @@ static int qcom_smem_set_global_partition(struct qcom_smem *smem)
return -EINVAL;
}
- header = smem->regions[0].virt_base + le32_to_cpu(entry->offset);
+ phys_addr = smem->regions[0].aux_base + le32_to_cpu(entry->offset);
+ header = devm_ioremap_wc(smem->dev,
+ phys_addr, le32_to_cpu(entry->size));
+ if (!header)
+ return -ENOMEM;
+
host0 = le16_to_cpu(header->host0);
host1 = le16_to_cpu(header->host1);
@@ -853,7 +893,10 @@ static int qcom_smem_set_global_partition(struct qcom_smem *smem)
return -EINVAL;
}
- smem->global_partition_entry = entry;
+ smem->global_partition_desc.virt_base = (void __iomem *)header;
+ smem->global_partition_desc.phys_base = phys_addr;
+ smem->global_partition_desc.size = le32_to_cpu(entry->size);
+ smem->global_partition_desc.cacheline = le32_to_cpu(entry->cacheline);
return 0;
}
@@ -864,6 +907,7 @@ static int qcom_smem_enumerate_partitions(struct qcom_smem *smem,
struct smem_partition_header *header;
struct smem_ptable_entry *entry;
struct smem_ptable *ptable;
+ u32 phys_addr;
unsigned int remote_host;
u32 host0, host1;
int i;
@@ -898,14 +942,20 @@ static int qcom_smem_enumerate_partitions(struct qcom_smem *smem,
return -EINVAL;
}
- if (smem->ptable_entries[remote_host]) {
+ if (smem->partition_desc[remote_host].virt_base) {
dev_err(smem->dev,
"Already found a partition for host %d\n",
remote_host);
return -EINVAL;
}
- header = smem->regions[0].virt_base + le32_to_cpu(entry->offset);
+ phys_addr = smem->regions[0].aux_base +
+ le32_to_cpu(entry->offset);
+ header = devm_ioremap_wc(smem->dev,
+ phys_addr, le32_to_cpu(entry->size));
+ if (!header)
+ return -ENOMEM;
+
host0 = le16_to_cpu(header->host0);
host1 = le16_to_cpu(header->host1);
@@ -940,7 +990,13 @@ static int qcom_smem_enumerate_partitions(struct qcom_smem *smem,
return -EINVAL;
}
- smem->ptable_entries[remote_host] = entry;
+ smem->partition_desc[remote_host].virt_base =
+ (void __iomem *)header;
+ smem->partition_desc[remote_host].phys_base = phys_addr;
+ smem->partition_desc[remote_host].size =
+ le32_to_cpu(entry->size);
+ smem->partition_desc[remote_host].cacheline =
+ le32_to_cpu(entry->cacheline);
}
return 0;
@@ -973,6 +1029,61 @@ static int qcom_smem_map_memory(struct qcom_smem *smem, struct device *dev,
return 0;
}
+static int qcom_smem_map_toc(struct qcom_smem *smem, struct device *dev,
+ const char *name, int i)
+{
+ struct device_node *np;
+ struct resource r;
+ int ret;
+
+ np = of_parse_phandle(dev->of_node, name, 0);
+ if (!np) {
+ dev_err(dev, "No %s specified\n", name);
+ return -EINVAL;
+ }
+
+ ret = of_address_to_resource(np, 0, &r);
+ of_node_put(np);
+ if (ret)
+ return ret;
+
+ smem->regions[i].aux_base = (u32)r.start;
+ smem->regions[i].size = resource_size(&r);
+ /* map starting 4K for smem header */
+ smem->regions[i].virt_base = devm_ioremap_wc(dev, r.start, SZ_4K);
+ /* map last 4k for toc */
+ smem->ptable_base = devm_ioremap_wc(dev,
+ r.start + resource_size(&r) - SZ_4K, SZ_4K);
+
+ if (!smem->regions[i].virt_base || !smem->ptable_base)
+ return -ENOMEM;
+
+ return 0;
+}
+
+static int qcom_smem_map_legacy(struct qcom_smem *smem)
+{
+ struct smem_header *header;
+ u32 phys_addr;
+ u32 p_size;
+
+ phys_addr = smem->regions[0].aux_base;
+ header = smem->regions[0].virt_base;
+ p_size = header->available;
+
+ /* unmap previously mapped starting 4k for smem header */
+ devm_iounmap(smem->dev, smem->regions[0].virt_base);
+
+ smem->regions[0].size = p_size;
+ smem->regions[0].virt_base = devm_ioremap_wc(smem->dev,
+ phys_addr, p_size);
+
+ if (!smem->regions[0].virt_base)
+ return -ENOMEM;
+
+ return 0;
+}
+
static int qcom_smem_probe(struct platform_device *pdev)
{
struct smem_header *header;
@@ -995,7 +1106,7 @@ static int qcom_smem_probe(struct platform_device *pdev)
smem->dev = &pdev->dev;
smem->num_regions = num_regions;
- ret = qcom_smem_map_memory(smem, &pdev->dev, "memory-region", 0);
+ ret = qcom_smem_map_toc(smem, &pdev->dev, "memory-region", 0);
if (ret)
return ret;
@@ -1019,6 +1130,7 @@ static int qcom_smem_probe(struct platform_device *pdev)
smem->item_count = qcom_smem_get_item_count(smem);
break;
case SMEM_GLOBAL_HEAP_VERSION:
+ qcom_smem_map_legacy(smem);
smem->item_count = SMEM_ITEM_COUNT;
break;
default:
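
The smem changes above replace raw ptable entries with per-partition descriptors carrying virt_base, phys_base, cacheline and size, so qcom_smem_virt_to_phys() becomes a bounds check plus an offset add. The sketch below illustrates that translation in isolation; the structure and helper are simplified stand-ins for the driver's definitions, not the real ones.

/* Find the descriptor whose virtual window contains p, then offset phys_base. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

struct part_desc {
	void *virt_base;
	uint32_t phys_base;
	uint32_t size;
};

static int addr_in_range(void *virt_base, unsigned int size, void *addr)
{
	return virt_base && (char *)addr >= (char *)virt_base &&
	       (char *)addr < (char *)virt_base + size;
}

static uint64_t virt_to_phys(struct part_desc *descs, size_t n, void *p)
{
	for (size_t i = 0; i < n; i++) {
		if (addr_in_range(descs[i].virt_base, descs[i].size, p))
			return descs[i].phys_base +
			       (uint64_t)((char *)p - (char *)descs[i].virt_base);
	}
	return 0;	/* not found in any partition window */
}

int main(void)
{
	static char window[0x1000];
	struct part_desc descs[] = {
		{ .virt_base = window, .phys_base = 0x86000000, .size = sizeof(window) },
	};

	printf("phys = 0x%llx\n",
	       (unsigned long long)virt_to_phys(descs, 1, window + 0x40));
	return 0;
}
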
diff --git a/drivers/soc/qcom/socinfo.c b/drivers/soc/qcom/socinfo.c
index 8734243..9d24d3c 100644
--- a/drivers/soc/qcom/socinfo.c
+++ b/drivers/soc/qcom/socinfo.c
@@ -295,6 +295,9 @@ static struct msm_soc_info cpu_of_id[] = {
[305] = {MSM_CPU_8996, "MSM8996pro"},
[312] = {MSM_CPU_8996, "APQ8096pro"},
+ /* SDM660 ID */
+ [317] = {MSM_CPU_SDM660, "SDM660"},
+
/* sm8150 ID */
[339] = {MSM_CPU_SM8150, "SM8150"},
@@ -1188,6 +1191,10 @@ static void * __init setup_dummy_socinfo(void)
dummy_socinfo.id = 310;
strlcpy(dummy_socinfo.build_id, "msm8996-auto - ",
sizeof(dummy_socinfo.build_id));
+ } else if (early_machine_is_sdm660()) {
+ dummy_socinfo.id = 317;
+ strlcpy(dummy_socinfo.build_id, "sdm660 - ",
+ sizeof(dummy_socinfo.build_id));
} else if (early_machine_is_sm8150()) {
dummy_socinfo.id = 339;
strlcpy(dummy_socinfo.build_id, "sm8150 - ",
diff --git a/drivers/spi/spi-geni-qcom.c b/drivers/spi/spi-geni-qcom.c
index 373e5f0..00bfed4 100644
--- a/drivers/spi/spi-geni-qcom.c
+++ b/drivers/spi/spi-geni-qcom.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
- * Copyright (c) 2017-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2017-2020, The Linux Foundation. All rights reserved.
*/
@@ -148,7 +148,8 @@ struct spi_geni_master {
int num_rx_eot;
int num_xfers;
void *ipc;
- bool shared_se;
+ bool shared_se; /* GSI Mode */
+ bool shared_ee; /* Dual EE use case */
bool dis_autosuspend;
bool cmd_done;
};
@@ -717,6 +718,32 @@ static int spi_geni_prepare_message(struct spi_master *spi,
{
int ret = 0;
struct spi_geni_master *mas = spi_master_get_devdata(spi);
+ int count;
+
+ if (mas->shared_ee) {
+ if (mas->setup) {
+ ret = pm_runtime_get_sync(mas->dev);
+ if (ret < 0) {
+ dev_err(mas->dev,
+ "%s:pm_runtime_get_sync failed %d\n",
+ __func__, ret);
+ pm_runtime_put_noidle(mas->dev);
+ goto exit_prepare_message;
+ }
+ ret = 0;
+
+ if (mas->dis_autosuspend) {
+ count =
+ atomic_read(&mas->dev->power.usage_count);
+ if (count <= 0)
+ GENI_SE_ERR(mas->ipc, false, NULL,
+ "resume usage count mismatch:%d",
+ count);
+ }
+ } else {
+ mas->setup = true;
+ }
+ }
mas->cur_xfer_mode = select_xfer_mode(spi, spi_msg);
@@ -734,6 +761,7 @@ static int spi_geni_prepare_message(struct spi_master *spi,
ret = setup_fifo_params(spi_msg->spi, spi);
}
+exit_prepare_message:
return ret;
}
@@ -741,11 +769,27 @@ static int spi_geni_unprepare_message(struct spi_master *spi_mas,
struct spi_message *spi_msg)
{
struct spi_geni_master *mas = spi_master_get_devdata(spi_mas);
+ int count = 0;
mas->cur_speed_hz = 0;
mas->cur_word_len = 0;
if (mas->cur_xfer_mode == GSI_DMA)
spi_geni_unmap_buf(mas, spi_msg);
+
+ if (mas->shared_ee) {
+ if (mas->dis_autosuspend) {
+ pm_runtime_put_sync(mas->dev);
+ count = atomic_read(&mas->dev->power.usage_count);
+ if (count < 0)
+ GENI_SE_ERR(mas->ipc, false, NULL,
+ "suspend usage count mismatch:%d",
+ count);
+ } else {
+ pm_runtime_mark_last_busy(mas->dev);
+ pm_runtime_put_autosuspend(mas->dev);
+ }
+ }
+
return 0;
}
@@ -758,7 +802,7 @@ static int spi_geni_prepare_transfer_hardware(struct spi_master *spi)
/* Adjust the IB based on the max speed of the slave.*/
rsc->ib = max_speed * DEFAULT_BUS_WIDTH;
- if (mas->shared_se) {
+ if (mas->shared_se && !mas->shared_ee) {
struct se_geni_rsc *rsc;
int ret = 0;
@@ -770,20 +814,23 @@ static int spi_geni_prepare_transfer_hardware(struct spi_master *spi)
"%s: Error %d pinctrl_select_state\n", __func__, ret);
}
- ret = pm_runtime_get_sync(mas->dev);
- if (ret < 0) {
- dev_err(mas->dev, "%s:Error enabling SE resources %d\n",
+ if (!mas->setup || !mas->shared_ee) {
+ ret = pm_runtime_get_sync(mas->dev);
+ if (ret < 0) {
+ dev_err(mas->dev,
+ "%s:pm_runtime_get_sync failed %d\n",
__func__, ret);
- pm_runtime_put_noidle(mas->dev);
- goto exit_prepare_transfer_hardware;
- } else {
+ pm_runtime_put_noidle(mas->dev);
+ goto exit_prepare_transfer_hardware;
+ }
ret = 0;
- }
- if (mas->dis_autosuspend) {
- count = atomic_read(&mas->dev->power.usage_count);
- if (count <= 0)
- GENI_SE_ERR(mas->ipc, false, NULL,
+
+ if (mas->dis_autosuspend) {
+ count = atomic_read(&mas->dev->power.usage_count);
+ if (count <= 0)
+ GENI_SE_ERR(mas->ipc, false, NULL,
"resume usage count mismatch:%d", count);
+ }
}
if (unlikely(!mas->setup)) {
int proto = get_se_proto(mas->base);
@@ -857,7 +904,8 @@ static int spi_geni_prepare_transfer_hardware(struct spi_master *spi)
dev_info(mas->dev, "tx_fifo %d rx_fifo %d tx_width %d\n",
mas->tx_fifo_depth, mas->rx_fifo_depth,
mas->tx_fifo_width);
- mas->setup = true;
+ if (!mas->shared_ee)
+ mas->setup = true;
hw_ver = geni_se_qupv3_hw_version(mas->wrapper_dev, &major,
&minor, &step);
if (hw_ver)
@@ -886,6 +934,9 @@ static int spi_geni_unprepare_transfer_hardware(struct spi_master *spi)
struct spi_geni_master *mas = spi_master_get_devdata(spi);
int count = 0;
+ if (mas->shared_ee)
+ return 0;
+
if (mas->shared_se) {
struct se_geni_rsc *rsc;
int ret = 0;
@@ -908,6 +959,7 @@ static int spi_geni_unprepare_transfer_hardware(struct spi_master *spi)
pm_runtime_mark_last_busy(mas->dev);
pm_runtime_put_autosuspend(mas->dev);
}
+
return 0;
}
@@ -1459,6 +1511,15 @@ static int spi_geni_probe(struct platform_device *pdev)
geni_mas->dis_autosuspend =
of_property_read_bool(pdev->dev.of_node,
"qcom,disable-autosuspend");
+ /*
+ * This property is set when the SPI SE is used from dual
+ * Execution Environments, unlike the shared_se flag, which
+ * is set when the SE is in GSI mode.
+ */
+ geni_mas->shared_ee =
+ of_property_read_bool(pdev->dev.of_node,
+ "qcom,shared_ee");
+
geni_mas->phys_addr = res->start;
geni_mas->size = resource_size(res);
geni_mas->base = devm_ioremap(&pdev->dev, res->start,
@@ -1536,14 +1597,19 @@ static int spi_geni_runtime_suspend(struct device *dev)
struct spi_master *spi = get_spi_master(dev);
struct spi_geni_master *geni_mas = spi_master_get_devdata(spi);
+ if (geni_mas->shared_ee)
+ goto exit_rt_suspend;
+
if (geni_mas->shared_se) {
ret = se_geni_clks_off(&geni_mas->spi_rsc);
if (ret)
GENI_SE_ERR(geni_mas->ipc, false, NULL,
"%s: Error %d turning off clocks\n", __func__, ret);
- } else {
- ret = se_geni_resources_off(&geni_mas->spi_rsc);
+ return ret;
}
+
+exit_rt_suspend:
+ ret = se_geni_resources_off(&geni_mas->spi_rsc);
return ret;
}
@@ -1553,14 +1619,19 @@ static int spi_geni_runtime_resume(struct device *dev)
struct spi_master *spi = get_spi_master(dev);
struct spi_geni_master *geni_mas = spi_master_get_devdata(spi);
+ if (geni_mas->shared_ee)
+ goto exit_rt_resume;
+
if (geni_mas->shared_se) {
ret = se_geni_clks_on(&geni_mas->spi_rsc);
if (ret)
GENI_SE_ERR(geni_mas->ipc, false, NULL,
"%s: Error %d turning on clocks\n", __func__, ret);
- } else {
- ret = se_geni_resources_on(&geni_mas->spi_rsc);
+ return ret;
}
+
+exit_rt_resume:
+ ret = se_geni_resources_on(&geni_mas->spi_rsc);
return ret;
}
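
For the shared-EE case, the spi-geni changes above vote for resources in prepare_message() (after the first setup) and release them in unprepare_message(), instead of in prepare/unprepare_transfer_hardware(). The toy model below shows the runtime-PM balance this relies on; the counter stands in for dev->power.usage_count and is not the real runtime-PM implementation.

/* Every prepare "get" must be paired with an unprepare "put". */
#include <stdio.h>

static int usage_count;

static void pm_get(void) { usage_count++; }
static void pm_put(void)
{
	if (--usage_count == 0)
		printf("resources off (usage_count back to 0)\n");
}

static void prepare_message(void)   { pm_get(); printf("get -> %d\n", usage_count); }
static void unprepare_message(void) { pm_put(); printf("put -> %d\n", usage_count); }

int main(void)
{
	prepare_message();	/* message 1 */
	unprepare_message();
	prepare_message();	/* message 2 */
	unprepare_message();
	return 0;
}
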
diff --git a/drivers/staging/android/ion/ion_cma_heap.c b/drivers/staging/android/ion/ion_cma_heap.c
index e236c71..2591a45 100644
--- a/drivers/staging/android/ion/ion_cma_heap.c
+++ b/drivers/staging/android/ion/ion_cma_heap.c
@@ -3,7 +3,7 @@
* Copyright (C) Linaro 2012
* Author: <benjamin.gaignard@linaro.org> for ST-Ericsson.
*
- * Copyright (c) 2016-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2016-2020, The Linux Foundation. All rights reserved.
*/
#include <linux/device.h>
@@ -25,6 +25,11 @@ struct ion_cma_heap {
#define to_cma_heap(x) container_of(x, struct ion_cma_heap, heap)
+static bool ion_heap_is_cma_heap_type(enum ion_heap_type type)
+{
+ return type == ION_HEAP_TYPE_DMA;
+}
+
/* ION CMA heap operations functions */
static int ion_cma_allocate(struct ion_heap *heap, struct ion_buffer *buffer,
unsigned long len,
@@ -39,6 +44,13 @@ static int ion_cma_allocate(struct ion_heap *heap, struct ion_buffer *buffer,
int ret;
struct device *dev = heap->priv;
+ if (ion_heap_is_cma_heap_type(buffer->heap->type) &&
+ is_secure_allocation(buffer->flags)) {
+ pr_err("%s: CMA heap doesn't support secure allocations\n",
+ __func__);
+ return -EINVAL;
+ }
+
if (align > CONFIG_CMA_ALIGNMENT)
align = CONFIG_CMA_ALIGNMENT;
@@ -46,7 +58,7 @@ static int ion_cma_allocate(struct ion_heap *heap, struct ion_buffer *buffer,
if (!pages)
return -ENOMEM;
- if (!(flags & ION_FLAG_SECURE)) {
+ if (hlos_accessible_buffer(buffer)) {
if (PageHighMem(pages)) {
unsigned long nr_clear_pages = nr_pages;
struct page *page = pages;
@@ -65,7 +77,7 @@ static int ion_cma_allocate(struct ion_heap *heap, struct ion_buffer *buffer,
}
if (MAKE_ION_ALLOC_DMA_READY ||
- (flags & ION_FLAG_SECURE) ||
+ (!hlos_accessible_buffer(buffer)) ||
(!ion_buffer_cached(buffer)))
ion_pages_sync_for_device(dev, pages, size,
DMA_BIDIRECTIONAL);
diff --git a/drivers/staging/android/ion/ion_secure_util.c b/drivers/staging/android/ion/ion_secure_util.c
index c0b4c4d..4cbf2ca 100644
--- a/drivers/staging/android/ion/ion_secure_util.c
+++ b/drivers/staging/android/ion/ion_secure_util.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
- * Copyright (c) 2017-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2017-2020, The Linux Foundation. All rights reserved.
*/
#include <linux/slab.h>
@@ -24,6 +24,11 @@ bool is_secure_vmid_valid(int vmid)
vmid == VMID_CP_CDSP);
}
+bool is_secure_allocation(unsigned long flags)
+{
+ return !!(flags & (ION_FLAGS_CP_MASK | ION_FLAG_SECURE));
+}
+
int get_secure_vmid(unsigned long flags)
{
if (flags & ION_FLAG_CP_TOUCH)
diff --git a/drivers/staging/android/ion/ion_secure_util.h b/drivers/staging/android/ion/ion_secure_util.h
index bd525e5..97d7555 100644
--- a/drivers/staging/android/ion/ion_secure_util.h
+++ b/drivers/staging/android/ion/ion_secure_util.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
- * Copyright (c) 2017-2018, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2017-2018,2020, The Linux Foundation. All rights reserved.
*/
#include "ion.h"
@@ -23,4 +23,6 @@ int ion_hyp_assign_from_flags(u64 base, u64 size, unsigned long flags);
bool hlos_accessible_buffer(struct ion_buffer *buffer);
+bool is_secure_allocation(unsigned long flags);
+
#endif /* _ION_SECURE_UTIL_H */
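
The ion changes above make the CMA heap reject buffers whose flags mark them as secure, using the new is_secure_allocation() predicate: any content-protection VMID bit or the explicit secure flag counts. The sketch below shows the predicate shape only; the flag values are placeholders for illustration, the real masks live in the ION headers.

/* A buffer is "secure" if any CP VMID bit or the secure flag is set. */
#include <stdbool.h>
#include <stdio.h>

#define ION_FLAG_SECURE		(1u << 31)	/* placeholder value */
#define ION_FLAGS_CP_MASK	0x0FFF0000u	/* placeholder value */

static bool is_secure_allocation(unsigned long flags)
{
	return !!(flags & (ION_FLAGS_CP_MASK | ION_FLAG_SECURE));
}

int main(void)
{
	printf("plain buffer:  %d\n", is_secure_allocation(0));
	printf("secure buffer: %d\n", is_secure_allocation(ION_FLAG_SECURE));
	return 0;
}
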
diff --git a/drivers/thermal/tsens.h b/drivers/thermal/tsens.h
index 036ee11..4398615 100644
--- a/drivers/thermal/tsens.h
+++ b/drivers/thermal/tsens.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
- * Copyright (c) 2017-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2017-2020, The Linux Foundation. All rights reserved.
*/
#ifndef __QCOM_TSENS_H__
@@ -87,11 +87,11 @@ struct tsens_device;
} \
} while (0)
#else
-#define TSENS_DBG1(x...) pr_debug(x)
-#define TSENS_DBG(x...) pr_debug(x)
-#define TSENS_INFO(x...) pr_info(x)
-#define TSENS_ERR(x...) pr_err(x)
-#define TSENS_DUMP(x...) pr_info(x)
+#define TSENS_DBG1(dev, msg, x...) pr_debug(msg, ##x)
+#define TSENS_DBG(dev, msg, x...) pr_debug(msg, ##x)
+#define TSENS_INFO(dev, msg, x...) pr_info(msg, ##x)
+#define TSENS_ERR(dev, msg, x...) pr_err(msg, ##x)
+#define TSENS_DUMP(dev, msg, x...) pr_info(msg, ##x)
#endif
#if defined(CONFIG_THERMAL_TSENS)
@@ -214,6 +214,7 @@ struct tsens_device {
struct workqueue_struct *tsens_reinit_work;
struct work_struct therm_fwk_notify;
bool tsens_reinit_wa;
+ int tsens_reinit_cnt;
struct tsens_sensor sensor[0];
};
diff --git a/drivers/thermal/tsens2xxx.c b/drivers/thermal/tsens2xxx.c
index ef31fcf..941f7f4 100644
--- a/drivers/thermal/tsens2xxx.c
+++ b/drivers/thermal/tsens2xxx.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
- * Copyright (c) 2012-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2012-2020, The Linux Foundation. All rights reserved.
*/
#include <linux/module.h>
@@ -69,6 +69,7 @@
#define TSENS_INIT_ID 0x5
#define TSENS_RECOVERY_LOOP_COUNT 5
+#define TSENS_RE_INIT_MAX_COUNT 5
static void msm_tsens_convert_temp(int last_temp, int *temp)
{
@@ -88,6 +89,7 @@ static int tsens2xxx_get_temp(struct tsens_sensor *sensor, int *temp)
unsigned int code, ret, tsens_ret;
void __iomem *sensor_addr, *trdy;
int last_temp = 0, last_temp2 = 0, last_temp3 = 0, count = 0;
+ static atomic_t in_tsens_reinit;
if (!sensor)
return -EINVAL;
@@ -100,8 +102,14 @@ static int tsens2xxx_get_temp(struct tsens_sensor *sensor, int *temp)
if (!((code & TSENS_TM_TRDY_FIRST_ROUND_COMPLETE) >>
TSENS_TM_TRDY_FIRST_ROUND_COMPLETE_SHIFT)) {
+ if (atomic_read(&in_tsens_reinit)) {
+ pr_err("%s: tsens re-init is in progress\n", __func__);
+ return -EAGAIN;
+ }
+
pr_err("%s: tsens device first round not complete0x%x\n",
__func__, code);
+
/* Wait for 2.5 ms for tsens controller to recover */
do {
udelay(500);
@@ -120,9 +128,26 @@ static int tsens2xxx_get_temp(struct tsens_sensor *sensor, int *temp)
if (tmdev->tsens_reinit_wa) {
struct scm_desc desc = { 0 };
+ if (atomic_read(&in_tsens_reinit)) {
+ pr_err("%s: tsens re-init is in progress\n",
+ __func__);
+ return -EAGAIN;
+ }
+
+ atomic_set(&in_tsens_reinit, 1);
+
if (tmdev->ops->dbg)
tmdev->ops->dbg(tmdev, 0,
TSENS_DBG_LOG_BUS_ID_DATA, NULL);
+
+ if (tmdev->tsens_reinit_cnt >=
+ TSENS_RE_INIT_MAX_COUNT) {
+ pr_err(
+ "%s: TSENS not recovered after %d re-init\n",
+ __func__, tmdev->tsens_reinit_cnt);
+ BUG();
+ }
+
/* Make an scm call to re-init TSENS */
TSENS_DBG(tmdev, "%s",
"Calling TZ to re-init TSENS\n");
@@ -141,6 +166,9 @@ static int tsens2xxx_get_temp(struct tsens_sensor *sensor, int *temp)
__func__, tsens_ret);
BUG();
}
+ tmdev->tsens_reinit_cnt++;
+ atomic_set(&in_tsens_reinit, 0);
+
/* Notify thermal fwk */
list_for_each_entry(tmdev_itr,
&tsens_device_list, list) {
@@ -158,6 +186,7 @@ static int tsens2xxx_get_temp(struct tsens_sensor *sensor, int *temp)
sensor_read:
tmdev->trdy_fail_ctr = 0;
+ tmdev->tsens_reinit_cnt = 0;
code = readl_relaxed_no_log(sensor_addr +
(sensor->hw_id << TSENS_STATUS_ADDR_OFFSET));
diff --git a/drivers/tty/serial/msm_geni_serial.c b/drivers/tty/serial/msm_geni_serial.c
index d33263a..462ab9a 100644
--- a/drivers/tty/serial/msm_geni_serial.c
+++ b/drivers/tty/serial/msm_geni_serial.c
@@ -10,6 +10,7 @@
#include <linux/console.h>
#include <linux/io.h>
#include <linux/ipc_logging.h>
+#include <linux/irq.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_device.h>
@@ -800,8 +801,8 @@ static void msm_geni_serial_poll_tx_done(struct uart_port *uport)
* Failure IPC logs are not added as this API is
* used by early console and it doesn't have log handle.
*/
- geni_write_reg(S_GENI_CMD_CANCEL, uport->membase,
- SE_GENI_S_CMD_CTRL_REG);
+ geni_write_reg(M_GENI_CMD_CANCEL, uport->membase,
+ SE_GENI_M_CMD_CTRL_REG);
done = msm_geni_serial_poll_bit(uport, SE_GENI_M_IRQ_STATUS,
M_CMD_CANCEL_EN, true);
if (!done) {
@@ -1686,7 +1687,7 @@ static int msm_geni_serial_handle_dma_rx(struct uart_port *uport, bool drop_rx)
struct msm_geni_serial_port *msm_port = GET_DEV_PORT(uport);
unsigned int rx_bytes = 0;
struct tty_port *tport;
- int ret;
+ int ret = 0;
unsigned int geni_status;
geni_status = geni_read_reg_nolog(uport->membase, SE_GENI_STATUS);
@@ -1888,7 +1889,8 @@ static void msm_geni_serial_handle_isr(struct uart_port *uport)
uport->icount.brk);
}
- if (dma_rx_status & RX_EOT) {
+ if (dma_rx_status & RX_EOT ||
+ dma_rx_status & RX_DMA_DONE) {
msm_geni_serial_handle_dma_rx(uport,
drop_rx);
if (!(dma_rx_status & RX_GENI_CANCEL_IRQ)) {
@@ -2019,14 +2021,12 @@ static void msm_geni_serial_shutdown(struct uart_port *uport)
/* Stop the console before stopping the current tx */
if (uart_console(uport)) {
console_stop(uport->cons);
+ disable_irq(uport->irq);
} else {
msm_geni_serial_power_on(uport);
wait_for_transfers_inflight(uport);
}
- msm_geni_serial_stop_tx(uport);
- msm_geni_serial_stop_rx(uport);
-
if (!uart_console(uport)) {
if (msm_port->ioctl_count) {
int i;
@@ -2086,9 +2086,9 @@ static int msm_geni_serial_port_setup(struct uart_port *uport)
goto exit_portsetup;
}
- msm_port->rx_buf = dma_alloc_coherent(msm_port->wrapper_dev,
- DMA_RX_BUF_SIZE, &dma_address, GFP_KERNEL);
-
+ msm_port->rx_buf =
+ geni_se_iommu_alloc_buf(msm_port->wrapper_dev,
+ &dma_address, DMA_RX_BUF_SIZE);
if (!msm_port->rx_buf) {
devm_kfree(uport->dev, msm_port->rx_fifo);
msm_port->rx_fifo = NULL;
@@ -2137,8 +2137,8 @@ static int msm_geni_serial_port_setup(struct uart_port *uport)
return 0;
free_dma:
if (msm_port->rx_dma) {
- dma_free_coherent(msm_port->wrapper_dev, DMA_RX_BUF_SIZE,
- msm_port->rx_buf, msm_port->rx_dma);
+ geni_se_iommu_free_buf(msm_port->wrapper_dev,
+ &msm_port->rx_dma, msm_port->rx_buf, DMA_RX_BUF_SIZE);
msm_port->rx_dma = (dma_addr_t)NULL;
}
exit_portsetup:
@@ -2180,6 +2180,16 @@ static int msm_geni_serial_startup(struct uart_port *uport)
*/
mb();
+ /* The console use case requires the IRQ to be enabled after the early
+ * console switch from probe, so that RX data is handled. Hence enable
+ * the IRQ from startup and disable it from shutdown for the console
+ * case. For the BT HSUART use case, the IRQ is enabled from
+ * runtime_resume() and disabled in runtime_suspend() to avoid spurious
+ * interrupts after suspend.
+ */
+ if (uart_console(uport))
+ enable_irq(uport->irq);
+
if (msm_port->wakeup_irq > 0) {
ret = request_irq(msm_port->wakeup_irq, msm_geni_wakeup_isr,
IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
@@ -3154,6 +3164,7 @@ static int msm_geni_serial_probe(struct platform_device *pdev)
dev_port->name = devm_kasprintf(uport->dev, GFP_KERNEL,
"msm_serial_geni%d", uport->line);
+ irq_set_status_flags(uport->irq, IRQ_NOAUTOEN);
ret = devm_request_irq(uport->dev, uport->irq, msm_geni_serial_isr,
IRQF_TRIGGER_HIGH, dev_port->name, uport);
if (ret) {
@@ -3184,11 +3195,6 @@ static int msm_geni_serial_probe(struct platform_device *pdev)
dev_info(&pdev->dev, "Serial port%d added.FifoSize %d is_console%d\n",
line, uport->fifosize, is_console);
- /*
- * We are using this spinlock before the serial layer initialises it.
- * Hence, we are initializing it.
- */
- spin_lock_init(&uport->lock);
device_create_file(uport->dev, &dev_attr_loopback);
device_create_file(uport->dev, &dev_attr_xfer_mode);
@@ -3199,13 +3205,10 @@ static int msm_geni_serial_probe(struct platform_device *pdev)
if (ret)
goto exit_geni_serial_probe;
- IPC_LOG_MSG(dev_port->ipc_log_misc, "%s: port:%s irq:%d\n", __func__,
- uport->name, uport->irq);
- return uart_add_one_port(drv, uport);
+ ret = uart_add_one_port(drv, uport);
exit_geni_serial_probe:
- IPC_LOG_MSG(dev_port->ipc_log_misc, "%s: fail port:%s ret:%d\n",
- __func__, uport->name, ret);
+ IPC_LOG_MSG(dev_port->ipc_log_misc, "%s: ret:%d\n", __func__, ret);
return ret;
}
@@ -3218,8 +3221,8 @@ static int msm_geni_serial_remove(struct platform_device *pdev)
wakeup_source_trash(&port->geni_wake);
uart_remove_one_port(drv, &port->uport);
if (port->rx_dma) {
- dma_free_coherent(port->wrapper_dev, DMA_RX_BUF_SIZE,
- port->rx_buf, port->rx_dma);
+ geni_se_iommu_free_buf(port->wrapper_dev, &port->rx_dma,
+ port->rx_buf, DMA_RX_BUF_SIZE);
port->rx_dma = (dma_addr_t)NULL;
}
return 0;
@@ -3237,9 +3240,9 @@ static int msm_geni_serial_runtime_suspend(struct device *dev)
wait_for_transfers_inflight(&port->uport);
/*
- * Disable Interrupt
* Manual RFR On.
* Stop Rx.
+ * Disable Interrupt
* Resources off
*/
stop_rx_sequencer(&port->uport);
@@ -3248,6 +3251,7 @@ static int msm_geni_serial_runtime_suspend(struct device *dev)
if ((geni_status & M_GENI_CMD_ACTIVE))
stop_tx_sequencer(&port->uport);
+ disable_irq(port->uport.irq);
ret = se_geni_resources_off(&port->serial_rsc);
if (ret) {
dev_err(dev, "%s: Error ret %d\n", __func__, ret);
@@ -3293,10 +3297,9 @@ static int msm_geni_serial_runtime_resume(struct device *dev)
start_rx_sequencer(&port->uport);
/* Ensure that the Rx is running before enabling interrupts */
mb();
- /*
- * Do not enable irq before interrupt registration which happens
- * at port open time.
- */
+ /* Enable interrupt */
+ enable_irq(port->uport.irq);
+
IPC_LOG_MSG(port->ipc_log_pwr, "%s:\n", __func__);
exit_runtime_resume:
return ret;
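
The serial changes above mark the IRQ with IRQ_NOAUTOEN at probe and then pair enable_irq() in startup/runtime_resume with disable_irq() in shutdown/runtime_suspend. The toy model below shows why those calls must stay balanced: the genirq core keeps a disable depth and only delivers interrupts at depth zero. This is a simplified illustration, not the kernel's irq code.

#include <stdio.h>

static int irq_depth = 1;	/* IRQ_NOAUTOEN: starts disabled after request_irq() */

static void irq_enable(void)  { if (irq_depth > 0) irq_depth--; }
static void irq_disable(void) { irq_depth++; }
static void irq_fire(const char *when)
{
	printf("%s: interrupt %s\n", when,
	       irq_depth == 0 ? "delivered" : "held off");
}

int main(void)
{
	irq_fire("after probe");	/* held off: NOAUTOEN */
	irq_enable();			/* startup() / runtime_resume() */
	irq_fire("while active");	/* delivered */
	irq_disable();			/* shutdown() / runtime_suspend() */
	irq_fire("while suspended");	/* held off again */
	return 0;
}
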
diff --git a/drivers/usb/dwc3/dwc3-msm.c b/drivers/usb/dwc3/dwc3-msm.c
index 3f20bf7..fff99dd 100644
--- a/drivers/usb/dwc3/dwc3-msm.c
+++ b/drivers/usb/dwc3/dwc3-msm.c
@@ -311,7 +311,9 @@ struct dwc3_msm {
bool in_device_mode;
enum usb_device_speed max_rh_port_speed;
unsigned int tx_fifo_size;
+ bool check_eud_state;
bool vbus_active;
+ bool eud_active;
bool suspend;
bool use_pdc_interrupts;
enum dwc3_id_state id_state;
@@ -691,6 +693,7 @@ static int __dwc3_msm_ep_queue(struct dwc3_ep *dep, struct dwc3_request *req)
memset(trb, 0, sizeof(*trb));
req->trb = trb;
+ req->num_trbs++;
trb->bph = DBM_TRB_BIT | DBM_TRB_DMA | DBM_TRB_EP_NUM(dep->number);
trb->size = DWC3_TRB_SIZE_LENGTH(req->request.length);
trb->ctrl = DWC3_TRBCTL_NORMAL | DWC3_TRB_CTRL_HWO |
@@ -1001,11 +1004,20 @@ static void gsi_store_ringbase_dbl_info(struct usb_ep *ep,
lower_32_bits(dwc3_trb_dma_offset(dep, &dep->trb_pool[0])),
upper_32_bits(dwc3_trb_dma_offset(dep, &dep->trb_pool[0])));
+ if (request->mapped_db_reg_phs_addr_lsb &&
+ dwc->sysdev != request->dev) {
+ dma_unmap_resource(request->dev,
+ request->mapped_db_reg_phs_addr_lsb,
+ PAGE_SIZE, DMA_BIDIRECTIONAL, 0);
+ request->mapped_db_reg_phs_addr_lsb = 0;
+ }
+
if (!request->mapped_db_reg_phs_addr_lsb) {
request->mapped_db_reg_phs_addr_lsb =
dma_map_resource(dwc->sysdev,
(phys_addr_t)request->db_reg_phs_addr_lsb,
PAGE_SIZE, DMA_BIDIRECTIONAL, 0);
+ request->dev = dwc->sysdev;
if (dma_mapping_error(dwc->sysdev,
request->mapped_db_reg_phs_addr_lsb))
dev_err(mdwc->dev, "mapping error for db_reg_phs_addr_lsb\n");
@@ -2162,6 +2174,9 @@ static void dwc3_msm_notify_event(struct dwc3 *dwc, unsigned int event,
break;
case DWC3_CONTROLLER_NOTIFY_CLEAR_DB:
dev_dbg(mdwc->dev, "DWC3_CONTROLLER_NOTIFY_CLEAR_DB\n");
+ if (!mdwc->gsi_ev_buff)
+ break;
+
dwc3_msm_write_reg_field(mdwc->base,
GSI_GENERAL_CFG_REG(mdwc->gsi_reg),
BLOCK_GSI_WR_GO_MASK, true);
@@ -2846,8 +2861,13 @@ static void dwc3_ext_event_notify(struct dwc3_msm *mdwc)
}
if (mdwc->vbus_active && !mdwc->in_restart) {
- dev_dbg(mdwc->dev, "XCVR: BSV set\n");
- set_bit(B_SESS_VLD, &mdwc->inputs);
+ if (mdwc->hs_phy->flags & EUD_SPOOF_DISCONNECT) {
+ dev_dbg(mdwc->dev, "XCVR:EUD: BSV clear\n");
+ clear_bit(B_SESS_VLD, &mdwc->inputs);
+ } else {
+ dev_dbg(mdwc->dev, "XCVR: BSV set\n");
+ set_bit(B_SESS_VLD, &mdwc->inputs);
+ }
} else {
dev_dbg(mdwc->dev, "XCVR: BSV clear\n");
clear_bit(B_SESS_VLD, &mdwc->inputs);
@@ -2861,6 +2881,39 @@ static void dwc3_ext_event_notify(struct dwc3_msm *mdwc)
clear_bit(B_SUSPEND, &mdwc->inputs);
}
+ if (mdwc->check_eud_state) {
+ mdwc->hs_phy->flags &=
+ ~(EUD_SPOOF_CONNECT | EUD_SPOOF_DISCONNECT);
+ dev_dbg(mdwc->dev, "eud: state:%d active:%d hs_phy_flags:0x%x\n",
+ mdwc->check_eud_state, mdwc->eud_active,
+ mdwc->hs_phy->flags);
+ if (mdwc->eud_active) {
+ mdwc->hs_phy->flags |= EUD_SPOOF_CONNECT;
+ dev_dbg(mdwc->dev, "EUD: XCVR: BSV set\n");
+ set_bit(B_SESS_VLD, &mdwc->inputs);
+ } else {
+ mdwc->hs_phy->flags |= EUD_SPOOF_DISCONNECT;
+ dev_dbg(mdwc->dev, "EUD: XCVR: BSV clear\n");
+ clear_bit(B_SESS_VLD, &mdwc->inputs);
+ }
+
+ mdwc->check_eud_state = false;
+ }
+
+ dev_dbg(mdwc->dev, "eud: state:%d active:%d hs_phy_flags:0x%x\n",
+ mdwc->check_eud_state, mdwc->eud_active, mdwc->hs_phy->flags);
+
+ /* handle case of USB cable disconnect after USB spoof disconnect */
+ if (!mdwc->vbus_active &&
+ (mdwc->hs_phy->flags & EUD_SPOOF_DISCONNECT)) {
+ mdwc->hs_phy->flags &= ~EUD_SPOOF_DISCONNECT;
+ mdwc->hs_phy->flags |= PHY_SUS_OVERRIDE;
+ usb_phy_set_suspend(mdwc->hs_phy, 1);
+ mdwc->hs_phy->flags &= ~PHY_SUS_OVERRIDE;
+ return;
+ }
+
queue_delayed_work(mdwc->sm_usb_wq, &mdwc->sm_work, 0);
}
@@ -3226,6 +3279,8 @@ static int dwc3_msm_vbus_notifier(struct notifier_block *nb,
struct extcon_dev *edev = ptr;
struct extcon_nb *enb = container_of(nb, struct extcon_nb, vbus_nb);
struct dwc3_msm *mdwc = enb->mdwc;
+ char *eud_str;
+ const char *edev_name;
if (!edev || !mdwc)
return NOTIFY_DONE;
@@ -3233,15 +3288,22 @@ static int dwc3_msm_vbus_notifier(struct notifier_block *nb,
dwc = platform_get_drvdata(mdwc->dwc3);
dbg_event(0xFF, "extcon idx", enb->idx);
-
- if (mdwc->vbus_active == event)
- return NOTIFY_DONE;
-
- mdwc->ext_idx = enb->idx;
-
dev_dbg(mdwc->dev, "vbus:%ld event received\n", event);
+ edev_name = extcon_get_edev_name(edev);
+ dbg_log_string("edev:%s\n", edev_name);
- mdwc->vbus_active = event;
+ /* detect USB spoof disconnect/connect notification with EUD device */
+ eud_str = strnstr(edev_name, "eud", strlen(edev_name));
+ if (eud_str) {
+ if (mdwc->eud_active == event)
+ return NOTIFY_DONE;
+ mdwc->eud_active = event;
+ mdwc->check_eud_state = true;
+ } else {
+ if (mdwc->vbus_active == event)
+ return NOTIFY_DONE;
+ mdwc->vbus_active = event;
+ }
if (get_psy_type(mdwc) == POWER_SUPPLY_TYPE_USB_CDP &&
mdwc->vbus_active) {
@@ -3974,10 +4036,10 @@ static int dwc3_msm_remove(struct platform_device *pdev)
if (mdwc->hs_phy)
mdwc->hs_phy->flags &= ~PHY_HOST_MODE;
+ dbg_event(0xFF, "Remov put", 0);
platform_device_put(mdwc->dwc3);
of_platform_depopulate(&pdev->dev);
- dbg_event(0xFF, "Remov put", 0);
pm_runtime_disable(mdwc->dev);
pm_runtime_barrier(mdwc->dev);
pm_runtime_put_sync(mdwc->dev);
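
In dwc3_msm_vbus_notifier() above, the extcon device name decides whether an event updates the EUD spoof state (eud_active/check_eud_state) or the real VBUS state. The sketch below shows that classification on its own; strnstr() is a kernel helper, so plain strstr() stands in for it, and the function is an illustration rather than the driver code.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static bool vbus_active, eud_active, check_eud_state;

static void vbus_notifier(const char *edev_name, bool event)
{
	if (strstr(edev_name, "eud")) {
		if (eud_active == event)
			return;			/* no change, nothing to do */
		eud_active = event;
		check_eud_state = true;		/* handle spoof (dis)connect */
	} else {
		if (vbus_active == event)
			return;
		vbus_active = event;
	}
	printf("%s: vbus=%d eud=%d check_eud=%d\n",
	       edev_name, vbus_active, eud_active, check_eud_state);
}

int main(void)
{
	vbus_notifier("usb", true);	/* ordinary VBUS extcon */
	vbus_notifier("eud", true);	/* EUD spoof connect */
	return 0;
}
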
diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
index 42fc8bf..51625d6 100644
--- a/drivers/usb/dwc3/gadget.c
+++ b/drivers/usb/dwc3/gadget.c
@@ -470,7 +470,7 @@ int dwc3_send_gadget_ep_cmd(struct dwc3_ep *dep, unsigned cmd,
ret = -ETIMEDOUT;
dev_err(dwc->dev, "%s command timeout for %s\n",
dwc3_gadget_ep_cmd_string(cmd), dep->name);
- if (!(cmd & DWC3_DEPCMD_ENDTRANSFER)) {
+ if (DWC3_DEPCMD_CMD(cmd) != DWC3_DEPCMD_ENDTRANSFER) {
dwc->ep_cmd_timeout_cnt++;
dwc3_notify_event(dwc,
DWC3_CONTROLLER_RESTART_USB_SESSION, 0);
@@ -1450,6 +1450,11 @@ static int __dwc3_gadget_kick_transfer(struct dwc3_ep *dep)
if (!dwc3_calc_trbs_left(dep))
return 0;
+ if (dep->flags & DWC3_EP_END_TRANSFER_PENDING) {
+ dbg_event(dep->number, "ENDXFER Pending", dep->flags);
+ return -EBUSY;
+ }
+
starting = !(dep->flags & DWC3_EP_TRANSFER_STARTED);
dwc3_prepare_trbs(dep);
@@ -1639,6 +1644,7 @@ static int dwc3_gadget_ep_queue(struct usb_ep *ep, struct usb_request *request,
static void dwc3_gadget_ep_skip_trbs(struct dwc3_ep *dep, struct dwc3_request *req)
{
int i;
+ struct dwc3_trb *trb = req->trb;
/*
* If request was already started, this means we had to
@@ -1651,11 +1657,11 @@ static void dwc3_gadget_ep_skip_trbs(struct dwc3_ep *dep, struct dwc3_request *r
* pointer.
*/
for (i = 0; i < req->num_trbs; i++) {
- struct dwc3_trb *trb;
-
- trb = req->trb + i;
trb->ctrl &= ~DWC3_TRB_CTRL_HWO;
dwc3_ep_inc_deq(dep);
+ trb++;
+ if (trb->ctrl & DWC3_TRBCTL_LINK_TRB)
+ trb = dep->trb_pool;
}
req->num_trbs = 0;
@@ -2414,8 +2420,6 @@ static int dwc3_gadget_vbus_session(struct usb_gadget *_gadget, int is_active)
* signaled by the gadget driver.
*/
ret = dwc3_gadget_run_stop(dwc, 1, false);
- } else {
- ret = dwc3_gadget_run_stop(dwc, 0, false);
}
}
@@ -2424,6 +2428,7 @@ static int dwc3_gadget_vbus_session(struct usb_gadget *_gadget, int is_active)
* Make sure to let gadget driver know in that case.
*/
if (!dwc->vbus_active) {
+ ret = dwc3_gadget_run_stop(dwc, 0, false);
dev_dbg(dwc->dev, "calling disconnect from %s\n", __func__);
dwc3_gadget_disconnect_interrupt(dwc);
}
diff --git a/drivers/usb/gadget/function/f_cdev.c b/drivers/usb/gadget/function/f_cdev.c
index d9554b5..616558e 100644
--- a/drivers/usb/gadget/function/f_cdev.c
+++ b/drivers/usb/gadget/function/f_cdev.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
- * Copyright (c) 2011, 2013-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2011, 2013-2020, The Linux Foundation. All rights reserved.
* Linux Foundation chooses to take subject only to the GPLv2 license terms,
* and distributes only under these terms.
*
@@ -49,7 +49,7 @@
#define BRIDGE_RX_QUEUE_SIZE 8
#define BRIDGE_RX_BUF_SIZE 2048
#define BRIDGE_TX_QUEUE_SIZE 8
-#define BRIDGE_TX_BUF_SIZE 2048
+#define BRIDGE_TX_BUF_SIZE (50 * 1024)
#define GS_LOG2_NOTIFY_INTERVAL 5 /* 1 << 5 == 32 msec */
#define GS_NOTIFY_MAXPACKET 10 /* notification + 2 bytes */
diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
index 9a8b9a0..4024d3d 100644
--- a/drivers/usb/gadget/function/f_fs.c
+++ b/drivers/usb/gadget/function/f_fs.c
@@ -1905,6 +1905,10 @@ static void ffs_data_reset(struct ffs_data *ffs)
ffs->state = FFS_READ_DESCRIPTORS;
ffs->setup_state = FFS_NO_SETUP;
ffs->flags = 0;
+
+ ffs->ms_os_descs_ext_prop_count = 0;
+ ffs->ms_os_descs_ext_prop_name_len = 0;
+ ffs->ms_os_descs_ext_prop_data_len = 0;
}
diff --git a/drivers/usb/gadget/function/f_gsi.c b/drivers/usb/gadget/function/f_gsi.c
index 870a0ce..7b5fb0f 100644
--- a/drivers/usb/gadget/function/f_gsi.c
+++ b/drivers/usb/gadget/function/f_gsi.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
- * Copyright (c) 2015-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2015-2020, The Linux Foundation. All rights reserved.
*/
#include <linux/module.h>
@@ -859,8 +859,10 @@ static int gsi_ep_enable(struct f_gsi *gsi)
ret = usb_gsi_ep_op(gsi->d_port.out_ep,
&gsi->d_port.out_request, GSI_EP_OP_CONFIG);
if (ret) {
- usb_gsi_ep_op(gsi->d_port.in_ep,
- &gsi->d_port.in_request, GSI_EP_OP_DISABLE);
+ if (gsi->d_port.in_ep)
+ usb_gsi_ep_op(gsi->d_port.in_ep,
+ &gsi->d_port.in_request,
+ GSI_EP_OP_DISABLE);
return ret;
}
}
diff --git a/drivers/usb/gadget/function/f_mtp.c b/drivers/usb/gadget/function/f_mtp.c
index 00dc4cb..19a6511 100644
--- a/drivers/usb/gadget/function/f_mtp.c
+++ b/drivers/usb/gadget/function/f_mtp.c
@@ -1592,7 +1592,7 @@ static int debug_mtp_read_stats(struct seq_file *s, void *unused)
}
seq_printf(s, "vfs_write(time in usec) min:%d\t max:%d\t avg:%d\n",
- min, max, sum / iteration);
+ min, max, (iteration ? (sum / iteration) : 0));
min = max = sum = iteration = 0;
seq_puts(s, "\n=======================\n");
seq_puts(s, "MTP Read Stats:\n");
@@ -1614,7 +1614,7 @@ static int debug_mtp_read_stats(struct seq_file *s, void *unused)
}
seq_printf(s, "vfs_read(time in usec) min:%d\t max:%d\t avg:%d\n",
- min, max, sum / iteration);
+ min, max, (iteration ? (sum / iteration) : 0));
spin_unlock_irqrestore(&dev->lock, flags);
return 0;
}
diff --git a/drivers/usb/gadget/function/f_qdss.c b/drivers/usb/gadget/function/f_qdss.c
index 5bdcf38..e5c179b 100644
--- a/drivers/usb/gadget/function/f_qdss.c
+++ b/drivers/usb/gadget/function/f_qdss.c
@@ -219,13 +219,14 @@ static void qdss_write_complete(struct usb_ep *ep,
struct usb_request *req)
{
struct f_qdss *qdss = ep->driver_data;
- struct qdss_request *d_req = req->context;
+ struct qdss_req *qreq = req->context;
+ struct qdss_request *d_req = qreq->qdss_req;
struct usb_ep *in;
struct list_head *list_pool;
enum qdss_state state;
unsigned long flags;
- pr_debug("%s\n", __func__);
+ qdss_log("%s\n", __func__);
if (qdss->debug_inface_enabled) {
in = qdss->port.ctrl_in;
@@ -239,9 +240,9 @@ static void qdss_write_complete(struct usb_ep *ep,
spin_lock_irqsave(&qdss->lock, flags);
if (!qdss->debug_inface_enabled)
- list_del(&req->list);
- list_add_tail(&req->list, list_pool);
- complete(&d_req->write_done);
+ list_del(&qreq->list);
+ list_add_tail(&qreq->list, list_pool);
+ complete(&qreq->write_done);
if (req->length != 0) {
d_req->actual = req->actual;
d_req->status = req->status;
@@ -252,34 +253,11 @@ static void qdss_write_complete(struct usb_ep *ep,
qdss->ch.notify(qdss->ch.priv, state, d_req, NULL);
}
-static void qdss_ctrl_read_complete(struct usb_ep *ep,
- struct usb_request *req)
-{
- struct f_qdss *qdss = ep->driver_data;
- struct qdss_request *d_req = req->context;
- unsigned long flags;
-
- pr_debug("%s\n", __func__);
-
- d_req->actual = req->actual;
- d_req->status = req->status;
-
- spin_lock_irqsave(&qdss->lock, flags);
- list_add_tail(&req->list, &qdss->ctrl_read_pool);
- spin_unlock_irqrestore(&qdss->lock, flags);
-
- if (qdss->ch.notify)
- qdss->ch.notify(qdss->ch.priv, USB_QDSS_CTRL_READ_DONE, d_req,
- NULL);
-}
-
void usb_qdss_free_req(struct usb_qdss_ch *ch)
{
struct f_qdss *qdss;
- struct usb_request *req;
struct list_head *act, *tmp;
-
- pr_debug("%s\n", __func__);
+ struct qdss_req *qreq;
qdss = ch->priv_usb;
if (!qdss) {
@@ -287,46 +265,44 @@ void usb_qdss_free_req(struct usb_qdss_ch *ch)
return;
}
+ qdss_log("%s: channel name = %s\n", __func__, qdss->ch.name);
+
list_for_each_safe(act, tmp, &qdss->data_write_pool) {
- req = list_entry(act, struct usb_request, list);
- list_del(&req->list);
- usb_ep_free_request(qdss->port.data, req);
+ qreq = list_entry(act, struct qdss_req, list);
+ list_del(&qreq->list);
+ usb_ep_free_request(qdss->port.data, qreq->usb_req);
+ kfree(qreq);
+
}
list_for_each_safe(act, tmp, &qdss->ctrl_write_pool) {
- req = list_entry(act, struct usb_request, list);
- list_del(&req->list);
- usb_ep_free_request(qdss->port.ctrl_in, req);
- }
+ qreq = list_entry(act, struct qdss_req, list);
+ list_del(&qreq->list);
+ usb_ep_free_request(qdss->port.ctrl_in, qreq->usb_req);
+ kfree(qreq);
- list_for_each_safe(act, tmp, &qdss->ctrl_read_pool) {
- req = list_entry(act, struct usb_request, list);
- list_del(&req->list);
- usb_ep_free_request(qdss->port.ctrl_out, req);
}
}
EXPORT_SYMBOL(usb_qdss_free_req);
-int usb_qdss_alloc_req(struct usb_qdss_ch *ch, int no_write_buf,
- int no_read_buf)
+int usb_qdss_alloc_req(struct usb_qdss_ch *ch, int no_write_buf)
{
struct f_qdss *qdss = ch->priv_usb;
struct usb_request *req;
struct usb_ep *in;
struct list_head *list_pool;
int i;
+ struct qdss_req *qreq;
- pr_debug("%s\n", __func__);
+ qdss_log("%s\n", __func__);
if (!qdss) {
pr_err("%s: %s closed\n", __func__, ch->name);
return -ENODEV;
}
- if ((qdss->debug_inface_enabled &&
- (no_write_buf <= 0 || no_read_buf <= 0)) ||
- (!qdss->debug_inface_enabled &&
- (no_write_buf <= 0 || no_read_buf))) {
+ if ((qdss->debug_inface_enabled && no_write_buf <= 0) ||
+ (!qdss->debug_inface_enabled && no_write_buf <= 0)) {
pr_err("%s: missing params\n", __func__);
return -ENODEV;
}
@@ -340,23 +316,17 @@ int usb_qdss_alloc_req(struct usb_qdss_ch *ch, int no_write_buf,
}
for (i = 0; i < no_write_buf; i++) {
+ qreq = kzalloc(sizeof(struct qdss_req), GFP_KERNEL);
req = usb_ep_alloc_request(in, GFP_ATOMIC);
if (!req) {
pr_err("%s: ctrl_in allocation err\n", __func__);
goto fail;
}
+ qreq->usb_req = req;
+ req->context = qreq;
req->complete = qdss_write_complete;
- list_add_tail(&req->list, list_pool);
- }
-
- for (i = 0; i < no_read_buf; i++) {
- req = usb_ep_alloc_request(qdss->port.ctrl_out, GFP_ATOMIC);
- if (!req) {
- pr_err("%s: ctrl_out allocation err\n", __func__);
- goto fail;
- }
- req->complete = qdss_ctrl_read_complete;
- list_add_tail(&req->list, &qdss->ctrl_read_pool);
+ list_add_tail(&qreq->list, list_pool);
+ init_completion(&qreq->write_done);
}
return 0;
@@ -371,7 +341,7 @@ static void clear_eps(struct usb_function *f)
{
struct f_qdss *qdss = func_to_qdss(f);
- pr_debug("%s\n", __func__);
+ qdss_log("%s: channel name = %s\n", __func__, qdss->ch.name);
if (qdss->port.ctrl_in)
qdss->port.ctrl_in->driver_data = NULL;
@@ -383,7 +353,9 @@ static void clear_eps(struct usb_function *f)
static void clear_desc(struct usb_gadget *gadget, struct usb_function *f)
{
- pr_debug("%s\n", __func__);
+ struct f_qdss *qdss = func_to_qdss(f);
+
+ qdss_log("%s: channel name = %s\n", __func__, qdss->ch.name);
usb_free_all_descriptors(f);
}
@@ -395,7 +367,7 @@ static int qdss_bind(struct usb_configuration *c, struct usb_function *f)
struct usb_ep *ep;
int iface, id, ret;
- pr_debug("%s\n", __func__);
+ qdss_log("%s: channel name = %s\n", __func__, qdss->ch.name);
/* Allocate data I/F */
iface = usb_interface_id(c, f);
@@ -529,7 +501,7 @@ static void qdss_unbind(struct usb_configuration *c, struct usb_function *f)
struct f_qdss *qdss = func_to_qdss(f);
struct usb_gadget *gadget = c->cdev->gadget;
- pr_debug("%s\n", __func__);
+ qdss_log("%s: channel name = %s\n", __func__, qdss->ch.name);
flush_workqueue(qdss->wq);
@@ -553,7 +525,7 @@ static void qdss_eps_disable(struct usb_function *f)
{
struct f_qdss *qdss = func_to_qdss(f);
- pr_debug("%s\n", __func__);
+ qdss_log("%s: channel name = %s\n", __func__, qdss->ch.name);
if (qdss->ctrl_in_enabled) {
usb_ep_disable(qdss->port.ctrl_in);
@@ -578,7 +550,7 @@ static void usb_qdss_disconnect_work(struct work_struct *work)
int status;
qdss = container_of(work, struct f_qdss, disconnect_w);
- pr_debug("%s\n", __func__);
+ qdss_log("%s: channel name = %s\n", __func__, qdss->ch.name);
/* Notify qdss to cancel all active transfers */
@@ -611,7 +583,7 @@ static void qdss_disable(struct usb_function *f)
struct f_qdss *qdss = func_to_qdss(f);
unsigned long flags;
- pr_debug("%s\n", __func__);
+ qdss_log("%s: channel name = %s\n", __func__, qdss->ch.name);
spin_lock_irqsave(&qdss->lock, flags);
if (!qdss->usb_connected) {
spin_unlock_irqrestore(&qdss->lock, flags);
@@ -636,12 +608,12 @@ static void usb_qdss_connect_work(struct work_struct *work)
/* If cable is already removed, discard connect_work */
if (qdss->usb_connected == 0) {
- pr_debug("%s: discard connect_work\n", __func__);
+ qdss_log("%s: discard connect_work\n", __func__);
cancel_work_sync(&qdss->disconnect_w);
return;
}
- pr_debug("%s\n", __func__);
+ qdss_log("%s: channel name = %s\n", __func__, qdss->ch.name);
if (!strcmp(qdss->ch.name, USB_QDSS_CH_MDM))
goto notify;
@@ -678,7 +650,7 @@ static int qdss_set_alt(struct usb_function *f, unsigned int intf,
struct usb_qdss_ch *ch = &qdss->ch;
int ret = 0;
- pr_debug("%s qdss pointer = %pK\n", __func__, qdss);
+ qdss_log("%s qdss pointer = %pK\n", __func__, qdss);
qdss->gadget = gadget;
if (alt != 0)
@@ -744,12 +716,12 @@ static int qdss_set_alt(struct usb_function *f, unsigned int intf,
if (qdss->ctrl_out_enabled && qdss->ctrl_in_enabled &&
qdss->data_enabled) {
qdss->usb_connected = 1;
- pr_debug("%s usb_connected INTF enabled\n", __func__);
+ qdss_log("%s usb_connected INTF enabled\n", __func__);
}
} else {
if (qdss->data_enabled) {
qdss->usb_connected = 1;
- pr_debug("%s usb_connected INTF disabled\n", __func__);
+ qdss_log("%s usb_connected INTF disabled\n", __func__);
}
}
@@ -808,7 +780,6 @@ static struct f_qdss *alloc_usb_qdss(char *channel_name)
spin_unlock_irqrestore(&qdss_lock, flags);
spin_lock_init(&qdss->lock);
- INIT_LIST_HEAD(&qdss->ctrl_read_pool);
INIT_LIST_HEAD(&qdss->ctrl_write_pool);
INIT_LIST_HEAD(&qdss->data_write_pool);
INIT_LIST_HEAD(&qdss->queued_data_pool);
@@ -818,58 +789,14 @@ static struct f_qdss *alloc_usb_qdss(char *channel_name)
return qdss;
}
-int usb_qdss_ctrl_read(struct usb_qdss_ch *ch, struct qdss_request *d_req)
-{
- struct f_qdss *qdss = ch->priv_usb;
- unsigned long flags;
- struct usb_request *req = NULL;
-
- pr_debug("%s\n", __func__);
-
- if (!qdss)
- return -ENODEV;
-
- spin_lock_irqsave(&qdss->lock, flags);
-
- if (qdss->usb_connected == 0) {
- spin_unlock_irqrestore(&qdss->lock, flags);
- return -EIO;
- }
-
- if (list_empty(&qdss->ctrl_read_pool)) {
- spin_unlock_irqrestore(&qdss->lock, flags);
- pr_err("error: %s list is empty\n", __func__);
- return -EAGAIN;
- }
-
- req = list_first_entry(&qdss->ctrl_read_pool, struct usb_request, list);
- list_del(&req->list);
- spin_unlock_irqrestore(&qdss->lock, flags);
-
- req->buf = d_req->buf;
- req->length = d_req->length;
- req->context = d_req;
-
- if (usb_ep_queue(qdss->port.ctrl_out, req, GFP_ATOMIC)) {
- /* If error add the link to linked list again*/
- spin_lock_irqsave(&qdss->lock, flags);
- list_add_tail(&req->list, &qdss->ctrl_read_pool);
- spin_unlock_irqrestore(&qdss->lock, flags);
- pr_err("qdss usb_ep_queue failed\n");
- return -EIO;
- }
-
- return 0;
-}
-EXPORT_SYMBOL(usb_qdss_ctrl_read);
-
int usb_qdss_ctrl_write(struct usb_qdss_ch *ch, struct qdss_request *d_req)
{
struct f_qdss *qdss = ch->priv_usb;
unsigned long flags;
struct usb_request *req = NULL;
+ struct qdss_req *qreq;
- pr_debug("%s\n", __func__);
+ qdss_log("%s\n", __func__);
if (!qdss)
return -ENODEV;
@@ -887,17 +814,18 @@ int usb_qdss_ctrl_write(struct usb_qdss_ch *ch, struct qdss_request *d_req)
return -EAGAIN;
}
- req = list_first_entry(&qdss->ctrl_write_pool, struct usb_request,
+ qreq = list_first_entry(&qdss->ctrl_write_pool, struct qdss_req,
list);
- list_del(&req->list);
+ list_del(&qreq->list);
spin_unlock_irqrestore(&qdss->lock, flags);
+ qreq->qdss_req = d_req;
+ req = qreq->usb_req;
req->buf = d_req->buf;
req->length = d_req->length;
- req->context = d_req;
if (usb_ep_queue(qdss->port.ctrl_in, req, GFP_ATOMIC)) {
spin_lock_irqsave(&qdss->lock, flags);
- list_add_tail(&req->list, &qdss->ctrl_write_pool);
+ list_add_tail(&qreq->list, &qdss->ctrl_write_pool);
spin_unlock_irqrestore(&qdss->lock, flags);
pr_err("%s usb_ep_queue failed\n", __func__);
return -EIO;
@@ -912,8 +840,9 @@ int usb_qdss_write(struct usb_qdss_ch *ch, struct qdss_request *d_req)
struct f_qdss *qdss = ch->priv_usb;
unsigned long flags;
struct usb_request *req = NULL;
+ struct qdss_req *qreq;
- pr_debug("usb_qdss_data_write\n");
+ qdss_log("usb_qdss_data_write\n");
if (!qdss)
return -ENODEV;
@@ -931,23 +860,24 @@ int usb_qdss_write(struct usb_qdss_ch *ch, struct qdss_request *d_req)
return -EAGAIN;
}
- req = list_first_entry(&qdss->data_write_pool, struct usb_request,
+ qreq = list_first_entry(&qdss->data_write_pool, struct qdss_req,
list);
- list_move_tail(&req->list, &qdss->queued_data_pool);
+ list_move_tail(&qreq->list, &qdss->queued_data_pool);
spin_unlock_irqrestore(&qdss->lock, flags);
+ qreq->qdss_req = d_req;
+ req = qreq->usb_req;
req->buf = d_req->buf;
req->length = d_req->length;
- req->context = d_req;
req->sg = d_req->sg;
req->num_sgs = d_req->num_sgs;
req->num_mapped_sgs = d_req->num_mapped_sgs;
- reinit_completion(&d_req->write_done);
+ reinit_completion(&qreq->write_done);
if (usb_ep_queue(qdss->port.data, req, GFP_ATOMIC)) {
spin_lock_irqsave(&qdss->lock, flags);
/* Remove from queued pool and add back to data pool */
- list_move_tail(&req->list, &qdss->data_write_pool);
- complete(&d_req->write_done);
+ list_move_tail(&qreq->list, &qdss->data_write_pool);
+ complete(&qreq->write_done);
spin_unlock_irqrestore(&qdss->lock, flags);
pr_err("qdss usb_ep_queue failed\n");
return -EIO;
@@ -966,7 +896,7 @@ struct usb_qdss_ch *usb_qdss_open(const char *name, void *priv,
unsigned long flags;
int found = 0;
- pr_debug("%s\n", __func__);
+ qdss_log("%s\n", __func__);
if (!notify) {
pr_err("%s: notification func is missing\n", __func__);
@@ -984,11 +914,11 @@ struct usb_qdss_ch *usb_qdss_open(const char *name, void *priv,
if (!found) {
spin_unlock_irqrestore(&qdss_lock, flags);
- pr_debug("%s failed as %s not found\n", __func__, name);
+ qdss_log("%s failed as %s not found\n", __func__, name);
return NULL;
}
- pr_debug("%s: qdss ctx found\n", __func__);
+ qdss_log("%s: qdss ctx found\n", __func__);
qdss = container_of(ch, struct f_qdss, ch);
ch->priv_usb = qdss;
ch->priv = priv;
@@ -1011,27 +941,26 @@ void usb_qdss_close(struct usb_qdss_ch *ch)
struct usb_gadget *gadget;
unsigned long flags;
int status;
- struct usb_request *req;
- struct qdss_request *d_req;
+ struct qdss_req *qreq;
- pr_debug("%s\n", __func__);
+ qdss_log("%s\n", __func__);
spin_lock_irqsave(&qdss_lock, flags);
if (!qdss)
goto close;
qdss->qdss_close = true;
while (!list_empty(&qdss->queued_data_pool)) {
- req = list_first_entry(&qdss->queued_data_pool,
- struct usb_request, list);
- d_req = req->context;
+ qreq = list_first_entry(&qdss->queued_data_pool,
+ struct qdss_req, list);
spin_unlock_irqrestore(&qdss_lock, flags);
- usb_ep_dequeue(qdss->port.data, req);
- wait_for_completion(&d_req->write_done);
+ usb_ep_dequeue(qdss->port.data, qreq->usb_req);
+ wait_for_completion(&qreq->write_done);
spin_lock_irqsave(&qdss_lock, flags);
}
usb_qdss_free_req(ch);
close:
ch->priv_usb = NULL;
+ ch->notify = NULL;
if (!qdss || !qdss->usb_connected ||
!strcmp(qdss->ch.name, USB_QDSS_CH_MDM)) {
ch->app_conn = 0;
@@ -1065,7 +994,7 @@ static void qdss_cleanup(void)
struct usb_qdss_ch *_ch;
unsigned long flags;
- pr_debug("%s\n", __func__);
+ qdss_log("%s\n", __func__);
list_for_each_safe(act, tmp, &usb_qdss_ch_list) {
_ch = list_entry(act, struct usb_qdss_ch, list);
@@ -1177,7 +1106,7 @@ static int usb_qdss_set_inst_name(struct usb_function_instance *f,
}
opts->channel_name = ptr;
- pr_debug("qdss: channel_name:%s\n", opts->channel_name);
+ qdss_log("qdss: channel_name:%s\n", opts->channel_name);
usb_qdss = alloc_usb_qdss(opts->channel_name);
if (IS_ERR(usb_qdss)) {
@@ -1230,6 +1159,10 @@ static int __init usb_qdss_init(void)
{
int ret;
+ _qdss_ipc_log = ipc_log_context_create(NUM_PAGES, "usb_qdss", 0);
+ if (IS_ERR_OR_NULL(_qdss_ipc_log))
+ _qdss_ipc_log = NULL;
+
INIT_LIST_HEAD(&usb_qdss_ch_list);
ret = usb_function_register(&qdssusb_func);
if (ret) {
@@ -1241,6 +1174,7 @@ static int __init usb_qdss_init(void)
static void __exit usb_qdss_exit(void)
{
+ ipc_log_context_destroy(_qdss_ipc_log);
usb_function_unregister(&qdssusb_func);
qdss_cleanup();
}
diff --git a/drivers/usb/gadget/function/f_qdss.h b/drivers/usb/gadget/function/f_qdss.h
index cd7c554..50b2f2d 100644
--- a/drivers/usb/gadget/function/f_qdss.h
+++ b/drivers/usb/gadget/function/f_qdss.h
@@ -7,6 +7,7 @@
#define _F_QDSS_H
#include <linux/kernel.h>
+#include <linux/ipc_logging.h>
#include <linux/usb/ch9.h>
#include <linux/usb/gadget.h>
#include <linux/usb/composite.h>
@@ -49,7 +50,6 @@ struct f_qdss {
bool debug_inface_enabled;
struct usb_request *endless_req;
struct usb_qdss_ch ch;
- struct list_head ctrl_read_pool;
struct list_head ctrl_write_pool;
/* for mdm channel SW path */
@@ -66,6 +66,20 @@ struct f_qdss {
bool qdss_close;
};
+static void *_qdss_ipc_log;
+
+#define NUM_PAGES 10 /* # of pages for ipc logging */
+
+#ifdef CONFIG_DYNAMIC_DEBUG
+#define qdss_log(fmt, ...) do { \
+ ipc_log_string(_qdss_ipc_log, "%s: " fmt, __func__, ##__VA_ARGS__); \
+ dynamic_pr_debug("%s: " fmt, __func__, ##__VA_ARGS__); \
+} while (0)
+#else
+#define qdss_log(fmt, ...) \
+ ipc_log_string(_qdss_ipc_log, "%s: " fmt, __func__, ##__VA_ARGS__)
+#endif
+
struct usb_qdss_opts {
struct usb_function_instance func_inst;
struct f_qdss *usb_qdss;
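The qdss_log() macro added above mirrors every message into the "usb_qdss" ipc_logging buffer created in usb_qdss_init() and, when CONFIG_DYNAMIC_DEBUG is enabled, into dynamic debug as well. A minimal, hypothetical call-site sketch (the event text is made up; since the macro already prepends __func__, callers pass only the event itself):

/* Sketch only, not part of this patch. */
static void example_qdss_event(int mtu)
{
	qdss_log("data channel connected, mtu %d\n", mtu);
}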
diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index eff69c2..05688610 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -3305,8 +3305,8 @@ int xhci_queue_bulk_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
/* New sg entry */
--num_sgs;
sent_len -= block_len;
- if (num_sgs != 0) {
- sg = sg_next(sg);
+ sg = sg_next(sg);
+ if (num_sgs != 0 && sg) {
block_len = sg_dma_len(sg);
addr = (u64) sg_dma_address(sg);
addr += sent_len;
diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
index c65237b..3892106 100644
--- a/drivers/usb/host/xhci.c
+++ b/drivers/usb/host/xhci.c
@@ -1034,6 +1034,12 @@ int xhci_suspend(struct xhci_hcd *xhci, bool do_wakeup)
if (xhci_handshake(&xhci->op_regs->status,
STS_HALT, STS_HALT, delay)) {
xhci_warn(xhci, "WARN: xHC CMD_RUN timeout\n");
+ /* Set HCD_FLAG_HW_ACCESSIBLE so that any pending interrupts
+ * are serviced.
+ */
+ set_bit(HCD_FLAG_HW_ACCESSIBLE, &hcd->flags);
+ set_bit(HCD_FLAG_HW_ACCESSIBLE, &xhci->shared_hcd->flags);
+ xhci_hc_died(xhci);
spin_unlock_irq(&xhci->lock);
return -ETIMEDOUT;
}
diff --git a/drivers/usb/phy/phy-msm-qusb-v2.c b/drivers/usb/phy/phy-msm-qusb-v2.c
index c5d4d82..0171a2a 100644
--- a/drivers/usb/phy/phy-msm-qusb-v2.c
+++ b/drivers/usb/phy/phy-msm-qusb-v2.c
@@ -22,6 +22,7 @@
/* QUSB2PHY_PWR_CTRL1 register related bits */
#define PWR_CTRL1_POWR_DOWN BIT(0)
+#define CLAMP_N_EN BIT(1)
/* QUSB2PHY_PLL_COMMON_STATUS_ONE register related bits */
#define CORE_READY_STATUS BIT(0)
@@ -69,6 +70,10 @@
/* STAT5 register bits */
#define VSTATUS_PLL_LOCK_STATUS_MASK BIT(0)
+/* DEBUG_CTRL4 register bits */
+#define FORCED_UTMI_DPPULLDOWN BIT(2)
+#define FORCED_UTMI_DMPULLDOWN BIT(3)
+
enum qusb_phy_reg {
PORT_TUNE1,
PLL_COMMON_STATUS_ONE,
@@ -79,6 +84,8 @@ enum qusb_phy_reg {
BIAS_CTRL_2,
DEBUG_CTRL1,
DEBUG_CTRL2,
+ DEBUG_CTRL3,
+ DEBUG_CTRL4,
STAT5,
USB2_PHY_REG_MAX,
};
@@ -89,6 +96,7 @@ struct qusb_phy {
void __iomem *base;
void __iomem *efuse_reg;
void __iomem *refgen_north_bg_reg;
+ void __iomem *eud_enable_reg;
struct clk *ref_clk_src;
struct clk *ref_clk;
@@ -396,6 +404,25 @@ static void qusb_phy_write_seq(void __iomem *base, u32 *seq, int cnt,
}
}
+static void msm_usb_write_readback(void __iomem *base, u32 offset,
+ const u32 mask, u32 val)
+{
+ u32 write_val, tmp = readl_relaxed(base + offset);
+
+ tmp &= ~mask; /* retain other bits */
+ write_val = tmp | val;
+
+ writel_relaxed(write_val, base + offset);
+
+ /* Read back to see if val was written */
+ tmp = readl_relaxed(base + offset);
+ tmp &= mask; /* clear other bits */
+
+ if (tmp != val)
+ pr_err("%s: write: %x to QSCRATCH: %x FAILED\n",
+ __func__, val, offset);
+}
+
static void qusb_phy_reset(struct qusb_phy *qphy)
{
int ret;
@@ -492,6 +519,11 @@ static int qusb_phy_init(struct usb_phy *phy)
dev_dbg(phy->dev, "%s\n", __func__);
+ if (qphy->eud_enable_reg && readl_relaxed(qphy->eud_enable_reg)) {
+ dev_err(qphy->phy.dev, "eud is enabled\n");
+ return 0;
+ }
+
qusb_phy_reset(qphy);
if (qphy->qusb_phy_host_init_seq && qphy->phy.flags & PHY_HOST_MODE) {
@@ -618,11 +650,15 @@ static int qusb_phy_set_suspend(struct usb_phy *phy, int suspend)
u32 linestate = 0, intr_mask = 0;
if (qphy->suspended == suspend) {
+ if (qphy->phy.flags & PHY_SUS_OVERRIDE)
+ goto suspend;
+
dev_dbg(phy->dev, "%s: USB PHY is already suspended\n",
- __func__);
+ __func__);
return 0;
}
+suspend:
if (suspend) {
/* Bus suspend case */
if (qphy->cable_connected) {
@@ -667,11 +703,16 @@ static int qusb_phy_set_suspend(struct usb_phy *phy, int suspend)
qusb_phy_enable_clocks(qphy, false);
} else { /* Cable disconnect case */
/* Disable all interrupts */
- writel_relaxed(0x00,
- qphy->base + qphy->phy_reg[INTR_CTRL]);
- qusb_phy_reset(qphy);
- qusb_phy_enable_clocks(qphy, false);
- qusb_phy_disable_power(qphy);
+ dev_dbg(phy->dev, "%s: phy->flags:0x%x\n",
+ __func__, qphy->phy.flags);
+ if (!(qphy->phy.flags & EUD_SPOOF_DISCONNECT)) {
+ dev_dbg(phy->dev, "turning off clocks/ldo\n");
+ writel_relaxed(0x00,
+ qphy->base + qphy->phy_reg[INTR_CTRL]);
+ qusb_phy_reset(qphy);
+ qusb_phy_enable_clocks(qphy, false);
+ qusb_phy_disable_power(qphy);
+ }
}
qphy->suspended = true;
} else {
@@ -736,6 +777,61 @@ static int qusb_phy_notify_disconnect(struct usb_phy *phy,
return 0;
}
+static int msm_qusb_phy_drive_dp_pulse(struct usb_phy *phy,
+ unsigned int interval_ms)
+{
+ struct qusb_phy *qphy = container_of(phy, struct qusb_phy, phy);
+ int ret;
+
+ ret = qusb_phy_enable_power(qphy);
+ if (ret < 0) {
+ dev_dbg(qphy->phy.dev,
+ "dpdm regulator enable failed:%d\n", ret);
+ return ret;
+ }
+ qusb_phy_enable_clocks(qphy, true);
+ msm_usb_write_readback(qphy->base, qphy->phy_reg[PWR_CTRL1],
+ PWR_CTRL1_POWR_DOWN, 0x00);
+ msm_usb_write_readback(qphy->base, qphy->phy_reg[DEBUG_CTRL4],
+ FORCED_UTMI_DPPULLDOWN, 0x00);
+ msm_usb_write_readback(qphy->base, qphy->phy_reg[DEBUG_CTRL4],
+ FORCED_UTMI_DMPULLDOWN,
+ FORCED_UTMI_DMPULLDOWN);
+ msm_usb_write_readback(qphy->base, qphy->phy_reg[DEBUG_CTRL3],
+ 0xd1, 0xd1);
+ msm_usb_write_readback(qphy->base, qphy->phy_reg[PWR_CTRL1],
+ CLAMP_N_EN, CLAMP_N_EN);
+ msm_usb_write_readback(qphy->base, qphy->phy_reg[INTR_CTRL],
+ DPSE_INTR_HIGH_SEL, 0x00);
+ msm_usb_write_readback(qphy->base, qphy->phy_reg[INTR_CTRL],
+ DPSE_INTR_EN, DPSE_INTR_EN);
+
+ msleep(interval_ms);
+
+ msm_usb_write_readback(qphy->base, qphy->phy_reg[INTR_CTRL],
+ DPSE_INTR_HIGH_SEL |
+ DPSE_INTR_EN, 0x00);
+ msm_usb_write_readback(qphy->base, qphy->phy_reg[DEBUG_CTRL3],
+ 0xd1, 0x00);
+ msm_usb_write_readback(qphy->base, qphy->phy_reg[DEBUG_CTRL4],
+ FORCED_UTMI_DPPULLDOWN |
+ FORCED_UTMI_DMPULLDOWN, 0x00);
+ msm_usb_write_readback(qphy->base, qphy->phy_reg[PWR_CTRL1],
+ PWR_CTRL1_POWR_DOWN |
+ CLAMP_N_EN, 0x00);
+
+ msleep(20);
+
+ qusb_phy_enable_clocks(qphy, false);
+ ret = qusb_phy_disable_power(qphy);
+ if (ret < 0) {
+ dev_dbg(qphy->phy.dev,
+ "dpdm regulator disable failed:%d\n", ret);
+ }
+
+ return 0;
+}
+
static int qusb_phy_dpdm_regulator_enable(struct regulator_dev *rdev)
{
int ret = 0;
@@ -744,6 +840,11 @@ static int qusb_phy_dpdm_regulator_enable(struct regulator_dev *rdev)
dev_dbg(qphy->phy.dev, "%s dpdm_enable:%d\n",
__func__, qphy->dpdm_enable);
+ if (qphy->eud_enable_reg && readl_relaxed(qphy->eud_enable_reg)) {
+ dev_err(qphy->phy.dev, "eud is enabled\n");
+ return 0;
+ }
+
if (!qphy->dpdm_enable) {
ret = qusb_phy_enable_power(qphy);
if (ret < 0) {
@@ -919,6 +1020,16 @@ static int qusb_phy_probe(struct platform_device *pdev)
qphy->refgen_north_bg_reg = devm_ioremap(dev, res->start,
resource_size(res));
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
+ "eud_enable_reg");
+ if (res) {
+ qphy->eud_enable_reg = devm_ioremap_resource(dev, res);
+ if (IS_ERR(qphy->eud_enable_reg)) {
+ dev_err(dev, "err getting eud_enable_reg address\n");
+ return PTR_ERR(qphy->eud_enable_reg);
+ }
+ }
+
/* ref_clk_src is needed irrespective of SE_CLK or DIFF_CLK usage */
qphy->ref_clk_src = devm_clk_get(dev, "ref_clk_src");
if (IS_ERR(qphy->ref_clk_src)) {
@@ -1133,6 +1244,7 @@ static int qusb_phy_probe(struct platform_device *pdev)
qphy->phy.type = USB_PHY_TYPE_USB2;
qphy->phy.notify_connect = qusb_phy_notify_connect;
qphy->phy.notify_disconnect = qusb_phy_notify_disconnect;
+ qphy->phy.drive_dp_pulse = msm_qusb_phy_drive_dp_pulse;
ret = usb_add_phy_dev(&qphy->phy);
if (ret)
@@ -1145,6 +1257,14 @@ static int qusb_phy_probe(struct platform_device *pdev)
qphy->suspended = true;
qusb_phy_create_debugfs(qphy);
+ /*
+ * EUD may be enabled in the boot loader; to keep the EUD session alive
+ * across kernel boot until the USB PHY driver is initialized based on
+ * cable status, keep the LDOs on here.
+ */
+ if (qphy->eud_enable_reg && readl_relaxed(qphy->eud_enable_reg))
+ qusb_phy_enable_power(qphy);
+
return ret;
}
diff --git a/drivers/usb/phy/phy-msm-qusb.c b/drivers/usb/phy/phy-msm-qusb.c
index 7f3135c..5216bc7 100644
--- a/drivers/usb/phy/phy-msm-qusb.c
+++ b/drivers/usb/phy/phy-msm-qusb.c
@@ -1194,10 +1194,18 @@ static void qusb_phy_chg_det_enable_seq(struct qusb_phy *qphy, int state)
#define CHG_PRIMARY_DET_TIME_MSEC 100
#define CHG_SECONDARY_DET_TIME_MSEC 100
-static int qusb_phy_enable_phy(struct qusb_phy *qphy)
+static int qusb_phy_prepare_chg_det(struct qusb_phy *qphy)
{
int ret;
+ /*
+ * Set dpdm_enable to indicate charger detection
+ * is in progress. This also prevents the core
+ * driver from doing the set_suspend and init
+ * calls of the PHY, which interfere with charger
+ * detection during bootup.
+ */
+ qphy->dpdm_enable = true;
ret = qusb_phy_enable_power(qphy, true);
if (ret)
return ret;
@@ -1209,7 +1217,7 @@ static int qusb_phy_enable_phy(struct qusb_phy *qphy)
return 0;
}
-static void qusb_phy_disable_phy(struct qusb_phy *qphy)
+static void qusb_phy_unprepare_chg_det(struct qusb_phy *qphy)
{
int ret;
@@ -1227,6 +1235,10 @@ static void qusb_phy_disable_phy(struct qusb_phy *qphy)
if (qphy->tcsr_clamp_dig_n)
writel_relaxed(0x0, qphy->tcsr_clamp_dig_n);
qusb_phy_enable_power(qphy, false);
+
+ qphy->dpdm_enable = false;
+ regulator_notifier_call_chain(qphy->dpdm_rdev,
+ REGULATOR_EVENT_DISABLE, NULL);
}
static void qusb_phy_port_state_work(struct work_struct *w)
@@ -1248,7 +1260,7 @@ static void qusb_phy_port_state_work(struct work_struct *w)
if (qphy->vbus_active) {
/* Enable DCD sequence */
- ret = qusb_phy_enable_phy(qphy);
+ ret = qusb_phy_prepare_chg_det(qphy);
if (ret)
return;
@@ -1260,7 +1272,7 @@ static void qusb_phy_port_state_work(struct work_struct *w)
}
return;
case PORT_DISCONNECTED:
- qusb_phy_disable_phy(qphy);
+ qusb_phy_unprepare_chg_det(qphy);
qphy->port_state = PORT_UNKNOWN;
break;
case PORT_DCD_IN_PROGRESS:
@@ -1281,7 +1293,7 @@ static void qusb_phy_port_state_work(struct work_struct *w)
} else if (qphy->dcd_timeout >= CHG_DCD_TIMEOUT_MSEC) {
qusb_phy_notify_charger(qphy,
POWER_SUPPLY_TYPE_USB_DCP);
- qusb_phy_disable_phy(qphy);
+ qusb_phy_unprepare_chg_det(qphy);
qphy->port_state = PORT_CHG_DET_DONE;
}
break;
@@ -1298,7 +1310,7 @@ static void qusb_phy_port_state_work(struct work_struct *w)
delay = CHG_SECONDARY_DET_TIME_MSEC;
} else {
- qusb_phy_disable_phy(qphy);
+ qusb_phy_unprepare_chg_det(qphy);
qusb_phy_notify_charger(qphy, POWER_SUPPLY_TYPE_USB);
qusb_phy_notify_extcon(qphy, EXTCON_USB, 1);
qphy->port_state = PORT_CHG_DET_DONE;
@@ -1320,7 +1332,7 @@ static void qusb_phy_port_state_work(struct work_struct *w)
qusb_phy_notify_extcon(qphy, EXTCON_USB, 1);
}
- qusb_phy_disable_phy(qphy);
+ qusb_phy_unprepare_chg_det(qphy);
qphy->port_state = PORT_CHG_DET_DONE;
/*
* Fall through to check if cable got disconnected
diff --git a/fs/buffer.c b/fs/buffer.c
index e08639c..5583977 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -46,6 +46,7 @@
#include <linux/pagevec.h>
#include <linux/sched/mm.h>
#include <trace/events/block.h>
+#include <linux/fscrypt.h>
static int fsync_buffers_list(spinlock_t *lock, struct list_head *list);
static int submit_bh_wbc(int op, int op_flags, struct buffer_head *bh,
@@ -3104,6 +3105,8 @@ static int submit_bh_wbc(int op, int op_flags, struct buffer_head *bh,
*/
bio = bio_alloc(GFP_NOIO, 1);
+ fscrypt_set_bio_crypt_ctx_bh(bio, bh, GFP_NOIO);
+
if (wbc) {
wbc_init_bio(wbc, bio);
wbc_account_io(wbc, bh->b_page, bh->b_size);
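The one-line hook added above attaches the buffer_head's encryption context to the bio before submit_bh_wbc() issues it. Filesystems that build bios from an inode and logical block number directly use the page-based variant instead; a hedged sketch of that pattern, mirroring the fscrypt_set_bio_crypt_ctx() call the fs/crypto/bio.c hunk below also makes (the surrounding function and its lack of error handling are illustrative only):

/* Sketch only: attach the crypt context when building a bio by hand. */
static void example_submit_encrypted_read(struct inode *inode, struct page *page,
					  u64 lblk, sector_t pblk)
{
	struct bio *bio = bio_alloc(GFP_NOFS, 1);

	fscrypt_set_bio_crypt_ctx(bio, inode, lblk, GFP_NOFS);
	bio_set_dev(bio, inode->i_sb->s_bdev);
	bio->bi_iter.bi_sector = pblk << (inode->i_blkbits - 9);
	bio_set_op_attrs(bio, REQ_OP_READ, 0);
	bio_add_page(bio, page, PAGE_SIZE, 0);
	submit_bio(bio);
}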
diff --git a/fs/crypto/Kconfig b/fs/crypto/Kconfig
index 4f7235e..0701bb9 100644
--- a/fs/crypto/Kconfig
+++ b/fs/crypto/Kconfig
@@ -6,6 +6,8 @@
select CRYPTO_ECB
select CRYPTO_XTS
select CRYPTO_CTS
+ select CRYPTO_SHA512
+ select CRYPTO_HMAC
select KEYS
help
Enable encryption of files and directories. This
@@ -13,3 +15,9 @@
efficient since it avoids caching the encrypted and
decrypted pages in the page cache. Currently Ext4,
F2FS and UBIFS make use of this feature.
+
+config FS_ENCRYPTION_INLINE_CRYPT
+ bool "Enable fscrypt to use inline crypto"
+ depends on FS_ENCRYPTION && BLK_INLINE_ENCRYPTION
+ help
+ Enable fscrypt to use inline encryption hardware if available.
diff --git a/fs/crypto/Makefile b/fs/crypto/Makefile
index b0ca0e6..1a6b077 100644
--- a/fs/crypto/Makefile
+++ b/fs/crypto/Makefile
@@ -1,8 +1,13 @@
obj-$(CONFIG_FS_ENCRYPTION) += fscrypto.o
-ccflags-y += -Ifs/ext4
-ccflags-y += -Ifs/f2fs
+fscrypto-y := crypto.o \
+ fname.o \
+ hkdf.o \
+ hooks.o \
+ keyring.o \
+ keysetup.o \
+ keysetup_v1.o \
+ policy.o
-fscrypto-y := crypto.o fname.o hooks.o keyinfo.o policy.o
fscrypto-$(CONFIG_BLOCK) += bio.o
-fscrypto-$(CONFIG_PFK) += fscrypt_ice.o
+fscrypto-$(CONFIG_FS_ENCRYPTION_INLINE_CRYPT) += inline_crypt.o
diff --git a/fs/crypto/bio.c b/fs/crypto/bio.c
index b871f7d..6927578 100644
--- a/fs/crypto/bio.c
+++ b/fs/crypto/bio.c
@@ -26,81 +26,59 @@
#include <linux/namei.h>
#include "fscrypt_private.h"
-static void __fscrypt_decrypt_bio(struct bio *bio, bool done)
+void fscrypt_decrypt_bio(struct bio *bio)
{
struct bio_vec *bv;
int i;
bio_for_each_segment_all(bv, bio, i) {
struct page *page = bv->bv_page;
- if (fscrypt_using_hardware_encryption(page->mapping->host)) {
- SetPageUptodate(page);
- } else {
- int ret = fscrypt_decrypt_pagecache_blocks(page,
- bv->bv_len,
- bv->bv_offset);
- if (ret)
- SetPageError(page);
- else if (done)
- SetPageUptodate(page);
- }
- if (done)
- unlock_page(page);
+ int ret = fscrypt_decrypt_pagecache_blocks(page,
+ bv->bv_len,
+ bv->bv_offset);
+ if (ret)
+ SetPageError(page);
}
}
-
-void fscrypt_decrypt_bio(struct bio *bio)
-{
- __fscrypt_decrypt_bio(bio, false);
-}
EXPORT_SYMBOL(fscrypt_decrypt_bio);
-static void completion_pages(struct work_struct *work)
-{
- struct fscrypt_ctx *ctx = container_of(work, struct fscrypt_ctx, work);
- struct bio *bio = ctx->bio;
-
- __fscrypt_decrypt_bio(bio, true);
- fscrypt_release_ctx(ctx);
- bio_put(bio);
-}
-
-void fscrypt_enqueue_decrypt_bio(struct fscrypt_ctx *ctx, struct bio *bio)
-{
- INIT_WORK(&ctx->work, completion_pages);
- ctx->bio = bio;
- fscrypt_enqueue_decrypt_work(&ctx->work);
-}
-EXPORT_SYMBOL(fscrypt_enqueue_decrypt_bio);
-
int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk,
sector_t pblk, unsigned int len)
{
const unsigned int blockbits = inode->i_blkbits;
const unsigned int blocksize = 1 << blockbits;
+ const bool inlinecrypt = fscrypt_inode_uses_inline_crypto(inode);
struct page *ciphertext_page;
struct bio *bio;
int ret, err = 0;
- ciphertext_page = fscrypt_alloc_bounce_page(GFP_NOWAIT);
- if (!ciphertext_page)
- return -ENOMEM;
+ if (inlinecrypt) {
+ ciphertext_page = ZERO_PAGE(0);
+ } else {
+ ciphertext_page = fscrypt_alloc_bounce_page(GFP_NOWAIT);
+ if (!ciphertext_page)
+ return -ENOMEM;
+ }
while (len--) {
- err = fscrypt_crypt_block(inode, FS_ENCRYPT, lblk,
- ZERO_PAGE(0), ciphertext_page,
- blocksize, 0, GFP_NOFS);
- if (err)
- goto errout;
+ if (!inlinecrypt) {
+ err = fscrypt_crypt_block(inode, FS_ENCRYPT, lblk,
+ ZERO_PAGE(0), ciphertext_page,
+ blocksize, 0, GFP_NOFS);
+ if (err)
+ goto errout;
+ }
bio = bio_alloc(GFP_NOWAIT, 1);
if (!bio) {
err = -ENOMEM;
goto errout;
}
+ fscrypt_set_bio_crypt_ctx(bio, inode, lblk, GFP_NOIO);
+
bio_set_dev(bio, inode->i_sb->s_bdev);
bio->bi_iter.bi_sector = pblk << (blockbits - 9);
- bio_set_op_attrs(bio, REQ_OP_WRITE, REQ_NOENCRYPT);
+ bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
ret = bio_add_page(bio, ciphertext_page, blocksize, 0);
if (WARN_ON(ret != blocksize)) {
/* should never happen! */
@@ -119,7 +97,8 @@ int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk,
}
err = 0;
errout:
- fscrypt_free_bounce_page(ciphertext_page);
+ if (!inlinecrypt)
+ fscrypt_free_bounce_page(ciphertext_page);
return err;
}
EXPORT_SYMBOL(fscrypt_zeroout_range);
diff --git a/fs/crypto/crypto.c b/fs/crypto/crypto.c
index dcf630d..05ba4ff 100644
--- a/fs/crypto/crypto.c
+++ b/fs/crypto/crypto.c
@@ -26,29 +26,20 @@
#include <linux/ratelimit.h>
#include <linux/dcache.h>
#include <linux/namei.h>
-#include <crypto/aes.h>
#include <crypto/skcipher.h>
#include "fscrypt_private.h"
static unsigned int num_prealloc_crypto_pages = 32;
-static unsigned int num_prealloc_crypto_ctxs = 128;
module_param(num_prealloc_crypto_pages, uint, 0444);
MODULE_PARM_DESC(num_prealloc_crypto_pages,
"Number of crypto pages to preallocate");
-module_param(num_prealloc_crypto_ctxs, uint, 0444);
-MODULE_PARM_DESC(num_prealloc_crypto_ctxs,
- "Number of crypto contexts to preallocate");
static mempool_t *fscrypt_bounce_page_pool = NULL;
-static LIST_HEAD(fscrypt_free_ctxs);
-static DEFINE_SPINLOCK(fscrypt_ctx_lock);
-
static struct workqueue_struct *fscrypt_read_workqueue;
static DEFINE_MUTEX(fscrypt_init_mutex);
-static struct kmem_cache *fscrypt_ctx_cachep;
struct kmem_cache *fscrypt_info_cachep;
void fscrypt_enqueue_decrypt_work(struct work_struct *work)
@@ -57,62 +48,6 @@ void fscrypt_enqueue_decrypt_work(struct work_struct *work)
}
EXPORT_SYMBOL(fscrypt_enqueue_decrypt_work);
-/**
- * fscrypt_release_ctx() - Release a decryption context
- * @ctx: The decryption context to release.
- *
- * If the decryption context was allocated from the pre-allocated pool, return
- * it to that pool. Else, free it.
- */
-void fscrypt_release_ctx(struct fscrypt_ctx *ctx)
-{
- unsigned long flags;
-
- if (ctx->flags & FS_CTX_REQUIRES_FREE_ENCRYPT_FL) {
- kmem_cache_free(fscrypt_ctx_cachep, ctx);
- } else {
- spin_lock_irqsave(&fscrypt_ctx_lock, flags);
- list_add(&ctx->free_list, &fscrypt_free_ctxs);
- spin_unlock_irqrestore(&fscrypt_ctx_lock, flags);
- }
-}
-EXPORT_SYMBOL(fscrypt_release_ctx);
-
-/**
- * fscrypt_get_ctx() - Get a decryption context
- * @gfp_flags: The gfp flag for memory allocation
- *
- * Allocate and initialize a decryption context.
- *
- * Return: A new decryption context on success; an ERR_PTR() otherwise.
- */
-struct fscrypt_ctx *fscrypt_get_ctx(gfp_t gfp_flags)
-{
- struct fscrypt_ctx *ctx;
- unsigned long flags;
-
- /*
- * First try getting a ctx from the free list so that we don't have to
- * call into the slab allocator.
- */
- spin_lock_irqsave(&fscrypt_ctx_lock, flags);
- ctx = list_first_entry_or_null(&fscrypt_free_ctxs,
- struct fscrypt_ctx, free_list);
- if (ctx)
- list_del(&ctx->free_list);
- spin_unlock_irqrestore(&fscrypt_ctx_lock, flags);
- if (!ctx) {
- ctx = kmem_cache_zalloc(fscrypt_ctx_cachep, gfp_flags);
- if (!ctx)
- return ERR_PTR(-ENOMEM);
- ctx->flags |= FS_CTX_REQUIRES_FREE_ENCRYPT_FL;
- } else {
- ctx->flags &= ~FS_CTX_REQUIRES_FREE_ENCRYPT_FL;
- }
- return ctx;
-}
-EXPORT_SYMBOL(fscrypt_get_ctx);
-
struct page *fscrypt_alloc_bounce_page(gfp_t gfp_flags)
{
return mempool_alloc(fscrypt_bounce_page_pool, gfp_flags);
@@ -137,14 +72,24 @@ EXPORT_SYMBOL(fscrypt_free_bounce_page);
void fscrypt_generate_iv(union fscrypt_iv *iv, u64 lblk_num,
const struct fscrypt_info *ci)
{
+ u8 flags = fscrypt_policy_flags(&ci->ci_policy);
+
+ bool inlinecrypt = false;
+
+#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT
+ inlinecrypt = ci->ci_inlinecrypt;
+#endif
memset(iv, 0, ci->ci_mode->ivsize);
- iv->lblk_num = cpu_to_le64(lblk_num);
- if (ci->ci_flags & FS_POLICY_FLAG_DIRECT_KEY)
+ if (flags & FSCRYPT_POLICY_FLAG_IV_INO_LBLK_64 ||
+ ((fscrypt_policy_contents_mode(&ci->ci_policy) ==
+ FSCRYPT_MODE_PRIVATE) && inlinecrypt)) {
+ WARN_ON_ONCE((u32)lblk_num != lblk_num);
+ lblk_num |= (u64)ci->ci_inode->i_ino << 32;
+ } else if (flags & FSCRYPT_POLICY_FLAG_DIRECT_KEY) {
memcpy(iv->nonce, ci->ci_nonce, FS_KEY_DERIVATION_NONCE_SIZE);
-
- if (ci->ci_essiv_tfm != NULL)
- crypto_cipher_encrypt_one(ci->ci_essiv_tfm, iv->raw, iv->raw);
+ }
+ iv->lblk_num = cpu_to_le64(lblk_num);
}
/* Encrypt or decrypt a single filesystem block of file contents */
@@ -158,7 +103,7 @@ int fscrypt_crypt_block(const struct inode *inode, fscrypt_direction_t rw,
DECLARE_CRYPTO_WAIT(wait);
struct scatterlist dst, src;
struct fscrypt_info *ci = inode->i_crypt_info;
- struct crypto_skcipher *tfm = ci->ci_ctfm;
+ struct crypto_skcipher *tfm = ci->ci_key.tfm;
int res = 0;
if (WARN_ON_ONCE(len <= 0))
@@ -187,10 +132,8 @@ int fscrypt_crypt_block(const struct inode *inode, fscrypt_direction_t rw,
res = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
skcipher_request_free(req);
if (res) {
- fscrypt_err(inode->i_sb,
- "%scryption failed for inode %lu, block %llu: %d",
- (rw == FS_DECRYPT ? "de" : "en"),
- inode->i_ino, lblk_num, res);
+ fscrypt_err(inode, "%scryption failed for block %llu: %d",
+ (rw == FS_DECRYPT ? "De" : "En"), lblk_num, res);
return res;
}
return 0;
@@ -397,17 +340,6 @@ const struct dentry_operations fscrypt_d_ops = {
.d_revalidate = fscrypt_d_revalidate,
};
-static void fscrypt_destroy(void)
-{
- struct fscrypt_ctx *pos, *n;
-
- list_for_each_entry_safe(pos, n, &fscrypt_free_ctxs, free_list)
- kmem_cache_free(fscrypt_ctx_cachep, pos);
- INIT_LIST_HEAD(&fscrypt_free_ctxs);
- mempool_destroy(fscrypt_bounce_page_pool);
- fscrypt_bounce_page_pool = NULL;
-}
-
/**
* fscrypt_initialize() - allocate major buffers for fs encryption.
* @cop_flags: fscrypt operations flags
@@ -415,11 +347,11 @@ static void fscrypt_destroy(void)
* We only call this when we start accessing encrypted files, since it
* results in memory getting allocated that wouldn't otherwise be used.
*
- * Return: Zero on success, non-zero otherwise.
+ * Return: 0 on success; -errno on failure
*/
int fscrypt_initialize(unsigned int cop_flags)
{
- int i, res = -ENOMEM;
+ int err = 0;
/* No need to allocate a bounce page pool if this FS won't use it. */
if (cop_flags & FS_CFLG_OWN_PAGES)
@@ -427,32 +359,21 @@ int fscrypt_initialize(unsigned int cop_flags)
mutex_lock(&fscrypt_init_mutex);
if (fscrypt_bounce_page_pool)
- goto already_initialized;
+ goto out_unlock;
- for (i = 0; i < num_prealloc_crypto_ctxs; i++) {
- struct fscrypt_ctx *ctx;
-
- ctx = kmem_cache_zalloc(fscrypt_ctx_cachep, GFP_NOFS);
- if (!ctx)
- goto fail;
- list_add(&ctx->free_list, &fscrypt_free_ctxs);
- }
-
+ err = -ENOMEM;
fscrypt_bounce_page_pool =
mempool_create_page_pool(num_prealloc_crypto_pages, 0);
if (!fscrypt_bounce_page_pool)
- goto fail;
+ goto out_unlock;
-already_initialized:
+ err = 0;
+out_unlock:
mutex_unlock(&fscrypt_init_mutex);
- return 0;
-fail:
- fscrypt_destroy();
- mutex_unlock(&fscrypt_init_mutex);
- return res;
+ return err;
}
-void fscrypt_msg(struct super_block *sb, const char *level,
+void fscrypt_msg(const struct inode *inode, const char *level,
const char *fmt, ...)
{
static DEFINE_RATELIMIT_STATE(rs, DEFAULT_RATELIMIT_INTERVAL,
@@ -466,8 +387,9 @@ void fscrypt_msg(struct super_block *sb, const char *level,
va_start(args, fmt);
vaf.fmt = fmt;
vaf.va = &args;
- if (sb)
- printk("%sfscrypt (%s): %pV\n", level, sb->s_id, &vaf);
+ if (inode)
+ printk("%sfscrypt (%s, inode %lu): %pV\n",
+ level, inode->i_sb->s_id, inode->i_ino, &vaf);
else
printk("%sfscrypt: %pV\n", level, &vaf);
va_end(args);
@@ -478,6 +400,8 @@ void fscrypt_msg(struct super_block *sb, const char *level,
*/
static int __init fscrypt_init(void)
{
+ int err = -ENOMEM;
+
/*
* Use an unbound workqueue to allow bios to be decrypted in parallel
* even when they happen to complete on the same CPU. This sacrifices
@@ -492,39 +416,21 @@ static int __init fscrypt_init(void)
if (!fscrypt_read_workqueue)
goto fail;
- fscrypt_ctx_cachep = KMEM_CACHE(fscrypt_ctx, SLAB_RECLAIM_ACCOUNT);
- if (!fscrypt_ctx_cachep)
- goto fail_free_queue;
-
fscrypt_info_cachep = KMEM_CACHE(fscrypt_info, SLAB_RECLAIM_ACCOUNT);
if (!fscrypt_info_cachep)
- goto fail_free_ctx;
+ goto fail_free_queue;
+
+ err = fscrypt_init_keyring();
+ if (err)
+ goto fail_free_info;
return 0;
-fail_free_ctx:
- kmem_cache_destroy(fscrypt_ctx_cachep);
+fail_free_info:
+ kmem_cache_destroy(fscrypt_info_cachep);
fail_free_queue:
destroy_workqueue(fscrypt_read_workqueue);
fail:
- return -ENOMEM;
+ return err;
}
-module_init(fscrypt_init)
-
-/**
- * fscrypt_exit() - Shutdown the fs encryption system
- */
-static void __exit fscrypt_exit(void)
-{
- fscrypt_destroy();
-
- if (fscrypt_read_workqueue)
- destroy_workqueue(fscrypt_read_workqueue);
- kmem_cache_destroy(fscrypt_ctx_cachep);
- kmem_cache_destroy(fscrypt_info_cachep);
-
- fscrypt_essiv_cleanup();
-}
-module_exit(fscrypt_exit);
-
-MODULE_LICENSE("GPL");
+late_initcall(fscrypt_init)
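The reworked fscrypt_generate_iv() above packs the inode number into the upper 32 bits of the logical block number for IV_INO_LBLK_64 policies, and likewise for FSCRYPT_MODE_PRIVATE inodes using inline crypto, so a single per-mode key still yields a unique 64-bit IV/DUN per data unit. A worked sketch of that layout (assuming, as the WARN_ON_ONCE above enforces for the block number, that both values fit in 32 bits):

/* Sketch only: the 64-bit DUN layout computed by fscrypt_generate_iv(). */
static u64 example_iv_ino_lblk_64_dun(u32 ino, u32 lblk)
{
	/* bits 63..32: inode number; bits 31..0: logical block number */
	return ((u64)ino << 32) | lblk;
}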
diff --git a/fs/crypto/fname.c b/fs/crypto/fname.c
index 00d150f..3aafdda 100644
--- a/fs/crypto/fname.c
+++ b/fs/crypto/fname.c
@@ -40,7 +40,7 @@ int fname_encrypt(struct inode *inode, const struct qstr *iname,
struct skcipher_request *req = NULL;
DECLARE_CRYPTO_WAIT(wait);
struct fscrypt_info *ci = inode->i_crypt_info;
- struct crypto_skcipher *tfm = ci->ci_ctfm;
+ struct crypto_skcipher *tfm = ci->ci_key.tfm;
union fscrypt_iv iv;
struct scatterlist sg;
int res;
@@ -71,9 +71,7 @@ int fname_encrypt(struct inode *inode, const struct qstr *iname,
res = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
skcipher_request_free(req);
if (res < 0) {
- fscrypt_err(inode->i_sb,
- "Filename encryption failed for inode %lu: %d",
- inode->i_ino, res);
+ fscrypt_err(inode, "Filename encryption failed: %d", res);
return res;
}
@@ -95,7 +93,7 @@ static int fname_decrypt(struct inode *inode,
DECLARE_CRYPTO_WAIT(wait);
struct scatterlist src_sg, dst_sg;
struct fscrypt_info *ci = inode->i_crypt_info;
- struct crypto_skcipher *tfm = ci->ci_ctfm;
+ struct crypto_skcipher *tfm = ci->ci_key.tfm;
union fscrypt_iv iv;
int res;
@@ -117,9 +115,7 @@ static int fname_decrypt(struct inode *inode,
res = crypto_wait_req(crypto_skcipher_decrypt(req), &wait);
skcipher_request_free(req);
if (res < 0) {
- fscrypt_err(inode->i_sb,
- "Filename decryption failed for inode %lu: %d",
- inode->i_ino, res);
+ fscrypt_err(inode, "Filename decryption failed: %d", res);
return res;
}
@@ -127,44 +123,45 @@ static int fname_decrypt(struct inode *inode,
return 0;
}
-static const char *lookup_table =
+static const char lookup_table[65] =
"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+,";
#define BASE64_CHARS(nbytes) DIV_ROUND_UP((nbytes) * 4, 3)
/**
- * digest_encode() -
+ * base64_encode() -
*
- * Encodes the input digest using characters from the set [a-zA-Z0-9_+].
+ * Encodes the input string using characters from the set [A-Za-z0-9+,].
* The encoded string is roughly 4/3 times the size of the input string.
+ *
+ * Return: length of the encoded string
*/
-static int digest_encode(const char *src, int len, char *dst)
+static int base64_encode(const u8 *src, int len, char *dst)
{
- int i = 0, bits = 0, ac = 0;
+ int i, bits = 0, ac = 0;
char *cp = dst;
- while (i < len) {
- ac += (((unsigned char) src[i]) << bits);
+ for (i = 0; i < len; i++) {
+ ac += src[i] << bits;
bits += 8;
do {
*cp++ = lookup_table[ac & 0x3f];
ac >>= 6;
bits -= 6;
} while (bits >= 6);
- i++;
}
if (bits)
*cp++ = lookup_table[ac & 0x3f];
return cp - dst;
}
-static int digest_decode(const char *src, int len, char *dst)
+static int base64_decode(const char *src, int len, u8 *dst)
{
- int i = 0, bits = 0, ac = 0;
+ int i, bits = 0, ac = 0;
const char *p;
- char *cp = dst;
+ u8 *cp = dst;
- while (i < len) {
+ for (i = 0; i < len; i++) {
p = strchr(lookup_table, src[i]);
if (p == NULL || src[i] == 0)
return -2;
@@ -175,7 +172,6 @@ static int digest_decode(const char *src, int len, char *dst)
ac >>= 8;
bits -= 8;
}
- i++;
}
if (ac)
return -1;
@@ -185,8 +181,9 @@ static int digest_decode(const char *src, int len, char *dst)
bool fscrypt_fname_encrypted_size(const struct inode *inode, u32 orig_len,
u32 max_len, u32 *encrypted_len_ret)
{
- int padding = 4 << (inode->i_crypt_info->ci_flags &
- FS_POLICY_FLAGS_PAD_MASK);
+ const struct fscrypt_info *ci = inode->i_crypt_info;
+ int padding = 4 << (fscrypt_policy_flags(&ci->ci_policy) &
+ FSCRYPT_POLICY_FLAGS_PAD_MASK);
u32 encrypted_len;
if (orig_len > max_len)
@@ -272,7 +269,7 @@ int fscrypt_fname_disk_to_usr(struct inode *inode,
return fname_decrypt(inode, iname, oname);
if (iname->len <= FSCRYPT_FNAME_MAX_UNDIGESTED_SIZE) {
- oname->len = digest_encode(iname->name, iname->len,
+ oname->len = base64_encode(iname->name, iname->len,
oname->name);
return 0;
}
@@ -287,7 +284,7 @@ int fscrypt_fname_disk_to_usr(struct inode *inode,
FSCRYPT_FNAME_DIGEST(iname->name, iname->len),
FSCRYPT_FNAME_DIGEST_SIZE);
oname->name[0] = '_';
- oname->len = 1 + digest_encode((const char *)&digested_name,
+ oname->len = 1 + base64_encode((const u8 *)&digested_name,
sizeof(digested_name), oname->name + 1);
return 0;
}
@@ -380,8 +377,8 @@ int fscrypt_setup_filename(struct inode *dir, const struct qstr *iname,
if (fname->crypto_buf.name == NULL)
return -ENOMEM;
- ret = digest_decode(iname->name + digested, iname->len - digested,
- fname->crypto_buf.name);
+ ret = base64_decode(iname->name + digested, iname->len - digested,
+ fname->crypto_buf.name);
if (ret < 0) {
ret = -ENOENT;
goto errout;
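The renamed base64_encode()/base64_decode() helpers above use the filename-safe alphabet [A-Za-z0-9+,] and turn every 3 input bytes into 4 output characters, which is exactly what BASE64_CHARS() computes. A hedged round-trip sketch (both helpers are static to fname.c, so this only illustrates the calling convention):

/* Sketch only: BASE64_CHARS(16) == 22, so 16 raw bytes encode to 22 chars. */
static bool example_base64_roundtrip(const u8 raw[16])
{
	char encoded[BASE64_CHARS(16)];
	u8 decoded[16];
	int enc_len, dec_len;

	enc_len = base64_encode(raw, 16, encoded);
	dec_len = base64_decode(encoded, enc_len, decoded);

	/* On success, dec_len == 16 and decoded matches raw. */
	return dec_len == 16 && memcmp(decoded, raw, 16) == 0;
}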
diff --git a/fs/crypto/fscrypt_ice.c b/fs/crypto/fscrypt_ice.c
deleted file mode 100644
index 6c88233..0000000
--- a/fs/crypto/fscrypt_ice.c
+++ /dev/null
@@ -1,153 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * Copyright (c) 2018-2019, The Linux Foundation. All rights reserved.
- */
-
-#include "fscrypt_ice.h"
-
-int fscrypt_using_hardware_encryption(const struct inode *inode)
-{
- struct fscrypt_info *ci = inode->i_crypt_info;
-
- return S_ISREG(inode->i_mode) && ci &&
- ci->ci_data_mode == FS_ENCRYPTION_MODE_PRIVATE;
-}
-EXPORT_SYMBOL(fscrypt_using_hardware_encryption);
-
-/*
- * Retrieves encryption key from the inode
- */
-char *fscrypt_get_ice_encryption_key(const struct inode *inode)
-{
- struct fscrypt_info *ci = NULL;
-
- if (!inode)
- return NULL;
-
- ci = inode->i_crypt_info;
- if (!ci)
- return NULL;
-
- return &(ci->ci_raw_key[0]);
-}
-
-/*
- * Retrieves encryption salt from the inode
- */
-char *fscrypt_get_ice_encryption_salt(const struct inode *inode)
-{
- struct fscrypt_info *ci = NULL;
-
- if (!inode)
- return NULL;
-
- ci = inode->i_crypt_info;
- if (!ci)
- return NULL;
-
- return &(ci->ci_raw_key[fscrypt_get_ice_encryption_key_size(inode)]);
-}
-
-/*
- * returns true if the cipher mode in inode is AES XTS
- */
-int fscrypt_is_aes_xts_cipher(const struct inode *inode)
-{
- struct fscrypt_info *ci = inode->i_crypt_info;
-
- if (!ci)
- return 0;
-
- return (ci->ci_data_mode == FS_ENCRYPTION_MODE_PRIVATE);
-}
-
-/*
- * returns true if encryption info in both inodes is equal
- */
-bool fscrypt_is_ice_encryption_info_equal(const struct inode *inode1,
- const struct inode *inode2)
-{
- char *key1 = NULL;
- char *key2 = NULL;
- char *salt1 = NULL;
- char *salt2 = NULL;
-
- if (!inode1 || !inode2)
- return false;
-
- if (inode1 == inode2)
- return true;
-
- /*
- * both do not belong to ice, so we don't care, they are equal
- * for us
- */
- if (!fscrypt_should_be_processed_by_ice(inode1) &&
- !fscrypt_should_be_processed_by_ice(inode2))
- return true;
-
- /* one belongs to ice, the other does not -> not equal */
- if (fscrypt_should_be_processed_by_ice(inode1) ^
- fscrypt_should_be_processed_by_ice(inode2))
- return false;
-
- key1 = fscrypt_get_ice_encryption_key(inode1);
- key2 = fscrypt_get_ice_encryption_key(inode2);
- salt1 = fscrypt_get_ice_encryption_salt(inode1);
- salt2 = fscrypt_get_ice_encryption_salt(inode2);
-
- /* key and salt should not be null by this point */
- if (!key1 || !key2 || !salt1 || !salt2 ||
- (fscrypt_get_ice_encryption_key_size(inode1) !=
- fscrypt_get_ice_encryption_key_size(inode2)) ||
- (fscrypt_get_ice_encryption_salt_size(inode1) !=
- fscrypt_get_ice_encryption_salt_size(inode2)))
- return false;
-
- if ((memcmp(key1, key2,
- fscrypt_get_ice_encryption_key_size(inode1)) == 0) &&
- (memcmp(salt1, salt2,
- fscrypt_get_ice_encryption_salt_size(inode1)) == 0))
- return true;
-
- return false;
-}
-
-void fscrypt_set_ice_dun(const struct inode *inode, struct bio *bio, u64 dun)
-{
- if (fscrypt_should_be_processed_by_ice(inode))
- bio->bi_iter.bi_dun = dun;
-}
-EXPORT_SYMBOL(fscrypt_set_ice_dun);
-
-void fscrypt_set_ice_skip(struct bio *bio, int bi_crypt_skip)
-{
-#ifdef CONFIG_DM_DEFAULT_KEY
- bio->bi_crypt_skip = bi_crypt_skip;
-#endif
-}
-EXPORT_SYMBOL(fscrypt_set_ice_skip);
-
-/*
- * This function will be used for filesystem when deciding to merge bios.
- * Basic assumption is, if inline_encryption is set, single bio has to
- * guarantee consecutive LBAs as well as ino|pg->index.
- */
-bool fscrypt_mergeable_bio(struct bio *bio, u64 dun, bool bio_encrypted,
- int bi_crypt_skip)
-{
- if (!bio)
- return true;
-
-#ifdef CONFIG_DM_DEFAULT_KEY
- if (bi_crypt_skip != bio->bi_crypt_skip)
- return false;
-#endif
- /* if both of them are not encrypted, no further check is needed */
- if (!bio_dun(bio) && !bio_encrypted)
- return true;
-
- /* ICE allows only consecutive iv_key stream. */
- return bio_end_dun(bio) == dun;
-}
-EXPORT_SYMBOL(fscrypt_mergeable_bio);
diff --git a/fs/crypto/fscrypt_ice.h b/fs/crypto/fscrypt_ice.h
deleted file mode 100644
index 84de010..0000000
--- a/fs/crypto/fscrypt_ice.h
+++ /dev/null
@@ -1,99 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Copyright (c) 2018-2019, The Linux Foundation. All rights reserved.
- */
-
-#ifndef _FSCRYPT_ICE_H
-#define _FSCRYPT_ICE_H
-
-#include <linux/blkdev.h>
-#include "fscrypt_private.h"
-
-#if IS_ENABLED(CONFIG_FS_ENCRYPTION)
-static inline bool fscrypt_should_be_processed_by_ice(const struct inode *inode)
-{
- if (!inode->i_sb->s_cop)
- return false;
- if (!inode->i_sb->s_cop->is_encrypted((struct inode *)inode))
- return false;
-
- return fscrypt_using_hardware_encryption(inode);
-}
-
-static inline int fscrypt_is_ice_capable(const struct super_block *sb)
-{
- return blk_queue_inlinecrypt(bdev_get_queue(sb->s_bdev));
-}
-
-int fscrypt_is_aes_xts_cipher(const struct inode *inode);
-
-char *fscrypt_get_ice_encryption_key(const struct inode *inode);
-char *fscrypt_get_ice_encryption_salt(const struct inode *inode);
-
-bool fscrypt_is_ice_encryption_info_equal(const struct inode *inode1,
- const struct inode *inode2);
-
-static inline size_t fscrypt_get_ice_encryption_key_size(
- const struct inode *inode)
-{
- return FS_AES_256_XTS_KEY_SIZE / 2;
-}
-
-static inline size_t fscrypt_get_ice_encryption_salt_size(
- const struct inode *inode)
-{
- return FS_AES_256_XTS_KEY_SIZE / 2;
-}
-#else
-static inline bool fscrypt_should_be_processed_by_ice(const struct inode *inode)
-{
- return false;
-}
-
-static inline int fscrypt_is_ice_capable(const struct super_block *sb)
-{
- return false;
-}
-
-static inline char *fscrypt_get_ice_encryption_key(const struct inode *inode)
-{
- return NULL;
-}
-
-static inline char *fscrypt_get_ice_encryption_salt(const struct inode *inode)
-{
- return NULL;
-}
-
-static inline size_t fscrypt_get_ice_encryption_key_size(
- const struct inode *inode)
-{
- return 0;
-}
-
-static inline size_t fscrypt_get_ice_encryption_salt_size(
- const struct inode *inode)
-{
- return 0;
-}
-
-static inline int fscrypt_is_xts_cipher(const struct inode *inode)
-{
- return 0;
-}
-
-static inline bool fscrypt_is_ice_encryption_info_equal(
- const struct inode *inode1,
- const struct inode *inode2)
-{
- return false;
-}
-
-static inline int fscrypt_is_aes_xts_cipher(const struct inode *inode)
-{
- return 0;
-}
-
-#endif
-
-#endif /* _FSCRYPT_ICE_H */
diff --git a/fs/crypto/fscrypt_private.h b/fs/crypto/fscrypt_private.h
index 70e34437..af6300c 100644
--- a/fs/crypto/fscrypt_private.h
+++ b/fs/crypto/fscrypt_private.h
@@ -4,9 +4,8 @@
*
* Copyright (C) 2015, Google, Inc.
*
- * This contains encryption key functions.
- *
- * Written by Michael Halcrow, Ildar Muslukhov, and Uday Savagaonkar, 2015.
+ * Originally written by Michael Halcrow, Ildar Muslukhov, and Uday Savagaonkar.
+ * Heavily modified since then.
*/
#ifndef _FSCRYPT_PRIVATE_H
@@ -14,41 +13,136 @@
#include <linux/fscrypt.h>
#include <crypto/hash.h>
-#include <linux/pfk.h>
+#include <linux/bio-crypt-ctx.h>
-/* Encryption parameters */
-
-#define FS_AES_128_ECB_KEY_SIZE 16
-#define FS_AES_128_CBC_KEY_SIZE 16
-#define FS_AES_128_CTS_KEY_SIZE 16
-#define FS_AES_256_GCM_KEY_SIZE 32
-#define FS_AES_256_CBC_KEY_SIZE 32
-#define FS_AES_256_CTS_KEY_SIZE 32
-#define FS_AES_256_XTS_KEY_SIZE 64
+#define CONST_STRLEN(str) (sizeof(str) - 1)
#define FS_KEY_DERIVATION_NONCE_SIZE 16
-/**
- * Encryption context for inode
- *
- * Protector format:
- * 1 byte: Protector format (1 = this version)
- * 1 byte: File contents encryption mode
- * 1 byte: File names encryption mode
- * 1 byte: Flags
- * 8 bytes: Master Key descriptor
- * 16 bytes: Encryption Key derivation nonce
- */
-struct fscrypt_context {
- u8 format;
+#define FSCRYPT_MIN_KEY_SIZE 16
+#define FSCRYPT_MAX_HW_WRAPPED_KEY_SIZE 128
+
+#define FSCRYPT_CONTEXT_V1 1
+#define FSCRYPT_CONTEXT_V2 2
+
+struct fscrypt_context_v1 {
+ u8 version; /* FSCRYPT_CONTEXT_V1 */
u8 contents_encryption_mode;
u8 filenames_encryption_mode;
u8 flags;
- u8 master_key_descriptor[FS_KEY_DESCRIPTOR_SIZE];
+ u8 master_key_descriptor[FSCRYPT_KEY_DESCRIPTOR_SIZE];
u8 nonce[FS_KEY_DERIVATION_NONCE_SIZE];
-} __packed;
+};
-#define FS_ENCRYPTION_CONTEXT_FORMAT_V1 1
+struct fscrypt_context_v2 {
+ u8 version; /* FSCRYPT_CONTEXT_V2 */
+ u8 contents_encryption_mode;
+ u8 filenames_encryption_mode;
+ u8 flags;
+ u8 __reserved[4];
+ u8 master_key_identifier[FSCRYPT_KEY_IDENTIFIER_SIZE];
+ u8 nonce[FS_KEY_DERIVATION_NONCE_SIZE];
+};
+
+/**
+ * fscrypt_context - the encryption context of an inode
+ *
+ * This is the on-disk equivalent of an fscrypt_policy, stored alongside each
+ * encrypted file usually in a hidden extended attribute. It contains the
+ * fields from the fscrypt_policy, in order to identify the encryption algorithm
+ * and key with which the file is encrypted. It also contains a nonce that was
+ * randomly generated by fscrypt itself; this is used as KDF input or as a tweak
+ * to cause different files to be encrypted differently.
+ */
+union fscrypt_context {
+ u8 version;
+ struct fscrypt_context_v1 v1;
+ struct fscrypt_context_v2 v2;
+};
+
+/*
+ * Return the size expected for the given fscrypt_context based on its version
+ * number, or 0 if the context version is unrecognized.
+ */
+static inline int fscrypt_context_size(const union fscrypt_context *ctx)
+{
+ switch (ctx->version) {
+ case FSCRYPT_CONTEXT_V1:
+ BUILD_BUG_ON(sizeof(ctx->v1) != 28);
+ return sizeof(ctx->v1);
+ case FSCRYPT_CONTEXT_V2:
+ BUILD_BUG_ON(sizeof(ctx->v2) != 40);
+ return sizeof(ctx->v2);
+ }
+ return 0;
+}
+
+#undef fscrypt_policy
+union fscrypt_policy {
+ u8 version;
+ struct fscrypt_policy_v1 v1;
+ struct fscrypt_policy_v2 v2;
+};
+
+/*
+ * Return the size expected for the given fscrypt_policy based on its version
+ * number, or 0 if the policy version is unrecognized.
+ */
+static inline int fscrypt_policy_size(const union fscrypt_policy *policy)
+{
+ switch (policy->version) {
+ case FSCRYPT_POLICY_V1:
+ return sizeof(policy->v1);
+ case FSCRYPT_POLICY_V2:
+ return sizeof(policy->v2);
+ }
+ return 0;
+}
+
+/* Return the contents encryption mode of a valid encryption policy */
+static inline u8
+fscrypt_policy_contents_mode(const union fscrypt_policy *policy)
+{
+ switch (policy->version) {
+ case FSCRYPT_POLICY_V1:
+ return policy->v1.contents_encryption_mode;
+ case FSCRYPT_POLICY_V2:
+ return policy->v2.contents_encryption_mode;
+ }
+ BUG();
+}
+
+/* Return the filenames encryption mode of a valid encryption policy */
+static inline u8
+fscrypt_policy_fnames_mode(const union fscrypt_policy *policy)
+{
+ switch (policy->version) {
+ case FSCRYPT_POLICY_V1:
+ return policy->v1.filenames_encryption_mode;
+ case FSCRYPT_POLICY_V2:
+ return policy->v2.filenames_encryption_mode;
+ }
+ BUG();
+}
+
+/* Return the flags (FSCRYPT_POLICY_FLAG*) of a valid encryption policy */
+static inline u8
+fscrypt_policy_flags(const union fscrypt_policy *policy)
+{
+ switch (policy->version) {
+ case FSCRYPT_POLICY_V1:
+ return policy->v1.flags;
+ case FSCRYPT_POLICY_V2:
+ return policy->v2.flags;
+ }
+ BUG();
+}
+
+static inline bool
+fscrypt_is_direct_key_policy(const union fscrypt_policy *policy)
+{
+ return fscrypt_policy_flags(policy) & FSCRYPT_POLICY_FLAG_DIRECT_KEY;
+}
/**
* For encrypted symlinks, the ciphertext length is stored at the beginning
@@ -59,6 +153,20 @@ struct fscrypt_symlink_data {
char encrypted_path[1];
} __packed;
+/**
+ * struct fscrypt_prepared_key - a key prepared for actual encryption/decryption
+ * @tfm: crypto API transform object
+ * @blk_key: key for blk-crypto
+ *
+ * Normally only one of the fields will be non-NULL.
+ */
+struct fscrypt_prepared_key {
+ struct crypto_skcipher *tfm;
+#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT
+ struct fscrypt_blk_crypto_key *blk_key;
+#endif
+};
+
/*
* fscrypt_info - the "encryption key" for an inode
*
@@ -68,36 +176,53 @@ struct fscrypt_symlink_data {
*/
struct fscrypt_info {
- /* The actual crypto transform used for encryption and decryption */
- struct crypto_skcipher *ci_ctfm;
+ /* The key in a form prepared for actual encryption/decryption */
+ struct fscrypt_prepared_key ci_key;
+ /* True if the key should be freed when this fscrypt_info is freed */
+ bool ci_owns_key;
+
+#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT
/*
- * Cipher for ESSIV IV generation. Only set for CBC contents
- * encryption, otherwise is NULL.
+ * True if this inode will use inline encryption (blk-crypto) instead of
+ * the traditional filesystem-layer encryption.
*/
- struct crypto_cipher *ci_essiv_tfm;
+ bool ci_inlinecrypt;
+#endif
/*
- * Encryption mode used for this inode. It corresponds to either
- * ci_data_mode or ci_filename_mode, depending on the inode type.
+ * Encryption mode used for this inode. It corresponds to either the
+ * contents or filenames encryption mode, depending on the inode type.
*/
struct fscrypt_mode *ci_mode;
+ /* Back-pointer to the inode */
+ struct inode *ci_inode;
+
/*
- * If non-NULL, then this inode uses a master key directly rather than a
- * derived key, and ci_ctfm will equal ci_master_key->mk_ctfm.
- * Otherwise, this inode uses a derived key.
+ * The master key with which this inode was unlocked (decrypted). This
+ * will be NULL if the master key was found in a process-subscribed
+ * keyring rather than in the filesystem-level keyring.
*/
- struct fscrypt_master_key *ci_master_key;
+ struct key *ci_master_key;
- /* fields from the fscrypt_context */
+ /*
+ * Link in list of inodes that were unlocked with the master key.
+ * Only used when ->ci_master_key is set.
+ */
+ struct list_head ci_master_key_link;
- u8 ci_data_mode;
- u8 ci_filename_mode;
- u8 ci_flags;
- u8 ci_master_key_descriptor[FS_KEY_DESCRIPTOR_SIZE];
+ /*
+ * If non-NULL, then encryption is done using the master key directly
+ * and ci_key will equal ci_direct_key->dk_key.
+ */
+ struct fscrypt_direct_key *ci_direct_key;
+
+ /* The encryption policy used by this inode */
+ union fscrypt_policy ci_policy;
+
+ /* This inode's nonce, copied from the fscrypt_context */
u8 ci_nonce[FS_KEY_DERIVATION_NONCE_SIZE];
- u8 ci_raw_key[FS_MAX_KEY_SIZE];
};
typedef enum {
@@ -105,25 +230,23 @@ typedef enum {
FS_ENCRYPT,
} fscrypt_direction_t;
-#define FS_CTX_REQUIRES_FREE_ENCRYPT_FL 0x00000001
-
static inline bool fscrypt_valid_enc_modes(u32 contents_mode,
u32 filenames_mode)
{
- if (contents_mode == FS_ENCRYPTION_MODE_AES_128_CBC &&
- filenames_mode == FS_ENCRYPTION_MODE_AES_128_CTS)
+ if (contents_mode == FSCRYPT_MODE_AES_128_CBC &&
+ filenames_mode == FSCRYPT_MODE_AES_128_CTS)
return true;
- if (contents_mode == FS_ENCRYPTION_MODE_AES_256_XTS &&
- filenames_mode == FS_ENCRYPTION_MODE_AES_256_CTS)
+ if (contents_mode == FSCRYPT_MODE_AES_256_XTS &&
+ filenames_mode == FSCRYPT_MODE_AES_256_CTS)
return true;
- if (contents_mode == FS_ENCRYPTION_MODE_ADIANTUM &&
- filenames_mode == FS_ENCRYPTION_MODE_ADIANTUM)
+ if (contents_mode == FSCRYPT_MODE_ADIANTUM &&
+ filenames_mode == FSCRYPT_MODE_ADIANTUM)
return true;
- if (contents_mode == FS_ENCRYPTION_MODE_PRIVATE &&
- filenames_mode == FS_ENCRYPTION_MODE_AES_256_CTS)
+ if (contents_mode == FSCRYPT_MODE_PRIVATE &&
+ filenames_mode == FSCRYPT_MODE_AES_256_CTS)
return true;
return false;
@@ -141,12 +264,12 @@ extern struct page *fscrypt_alloc_bounce_page(gfp_t gfp_flags);
extern const struct dentry_operations fscrypt_d_ops;
extern void __printf(3, 4) __cold
-fscrypt_msg(struct super_block *sb, const char *level, const char *fmt, ...);
+fscrypt_msg(const struct inode *inode, const char *level, const char *fmt, ...);
-#define fscrypt_warn(sb, fmt, ...) \
- fscrypt_msg(sb, KERN_WARNING, fmt, ##__VA_ARGS__)
-#define fscrypt_err(sb, fmt, ...) \
- fscrypt_msg(sb, KERN_ERR, fmt, ##__VA_ARGS__)
+#define fscrypt_warn(inode, fmt, ...) \
+ fscrypt_msg((inode), KERN_WARNING, fmt, ##__VA_ARGS__)
+#define fscrypt_err(inode, fmt, ...) \
+ fscrypt_msg((inode), KERN_ERR, fmt, ##__VA_ARGS__)
#define FSCRYPT_MAX_IV_SIZE 32
@@ -159,6 +282,7 @@ union fscrypt_iv {
u8 nonce[FS_KEY_DERIVATION_NONCE_SIZE];
};
u8 raw[FSCRYPT_MAX_IV_SIZE];
+ __le64 dun[FSCRYPT_MAX_IV_SIZE / sizeof(__le64)];
};
void fscrypt_generate_iv(union fscrypt_iv *iv, u64 lblk_num,
@@ -171,7 +295,273 @@ extern bool fscrypt_fname_encrypted_size(const struct inode *inode,
u32 orig_len, u32 max_len,
u32 *encrypted_len_ret);
-/* keyinfo.c */
+/* hkdf.c */
+
+struct fscrypt_hkdf {
+ struct crypto_shash *hmac_tfm;
+};
+
+extern int fscrypt_init_hkdf(struct fscrypt_hkdf *hkdf, const u8 *master_key,
+ unsigned int master_key_size);
+
+/*
+ * The list of contexts in which fscrypt uses HKDF. These values are used as
+ * the first byte of the HKDF application-specific info string to guarantee that
+ * info strings are never repeated between contexts. This ensures that all HKDF
+ * outputs are unique and cryptographically isolated, i.e. knowledge of one
+ * output doesn't reveal another.
+ */
+#define HKDF_CONTEXT_KEY_IDENTIFIER 1
+#define HKDF_CONTEXT_PER_FILE_KEY 2
+#define HKDF_CONTEXT_DIRECT_KEY 3
+#define HKDF_CONTEXT_IV_INO_LBLK_64_KEY 4
+
+extern int fscrypt_hkdf_expand(struct fscrypt_hkdf *hkdf, u8 context,
+ const u8 *info, unsigned int infolen,
+ u8 *okm, unsigned int okmlen);
+
+extern void fscrypt_destroy_hkdf(struct fscrypt_hkdf *hkdf);
+
+/* inline_crypt.c */
+#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT
+extern void fscrypt_select_encryption_impl(struct fscrypt_info *ci);
+
+static inline bool
+fscrypt_using_inline_encryption(const struct fscrypt_info *ci)
+{
+ return ci->ci_inlinecrypt;
+}
+
+extern int fscrypt_prepare_inline_crypt_key(
+ struct fscrypt_prepared_key *prep_key,
+ const u8 *raw_key,
+ unsigned int raw_key_size,
+ bool is_hw_wrapped,
+ const struct fscrypt_info *ci);
+
+extern void fscrypt_destroy_inline_crypt_key(
+ struct fscrypt_prepared_key *prep_key);
+
+extern int fscrypt_derive_raw_secret(struct super_block *sb,
+ const u8 *wrapped_key,
+ unsigned int wrapped_key_size,
+ u8 *raw_secret,
+ unsigned int raw_secret_size);
+
+/*
+ * Check whether the crypto transform or blk-crypto key has been allocated in
+ * @prep_key, depending on which encryption implementation the file will use.
+ */
+static inline bool
+fscrypt_is_key_prepared(struct fscrypt_prepared_key *prep_key,
+ const struct fscrypt_info *ci)
+{
+ /*
+ * The READ_ONCE() here pairs with the smp_store_release() in
+ * fscrypt_prepare_key(). (This only matters for the per-mode keys,
+ * which are shared by multiple inodes.)
+ */
+ if (fscrypt_using_inline_encryption(ci))
+ return READ_ONCE(prep_key->blk_key) != NULL;
+ return READ_ONCE(prep_key->tfm) != NULL;
+}
+
+#else /* CONFIG_FS_ENCRYPTION_INLINE_CRYPT */
+
+static inline void fscrypt_select_encryption_impl(struct fscrypt_info *ci)
+{
+}
+
+static inline bool fscrypt_using_inline_encryption(
+ const struct fscrypt_info *ci)
+{
+ return false;
+}
+
+static inline int
+fscrypt_prepare_inline_crypt_key(struct fscrypt_prepared_key *prep_key,
+ const u8 *raw_key, unsigned int raw_key_size,
+ bool is_hw_wrapped,
+ const struct fscrypt_info *ci)
+{
+ WARN_ON(1);
+ return -EOPNOTSUPP;
+}
+
+static inline void
+fscrypt_destroy_inline_crypt_key(struct fscrypt_prepared_key *prep_key)
+{
+}
+
+static inline int fscrypt_derive_raw_secret(struct super_block *sb,
+ const u8 *wrapped_key,
+ unsigned int wrapped_key_size,
+ u8 *raw_secret,
+ unsigned int raw_secret_size)
+{
+ fscrypt_warn(NULL,
+ "kernel built without support for hardware-wrapped keys");
+ return -EOPNOTSUPP;
+}
+
+static inline bool
+fscrypt_is_key_prepared(struct fscrypt_prepared_key *prep_key,
+ const struct fscrypt_info *ci)
+{
+ return READ_ONCE(prep_key->tfm) != NULL;
+}
+#endif /* !CONFIG_FS_ENCRYPTION_INLINE_CRYPT */
+
+/* keyring.c */
+
+/*
+ * fscrypt_master_key_secret - secret key material of an in-use master key
+ */
+struct fscrypt_master_key_secret {
+
+ /*
+ * For v2 policy keys: HKDF context keyed by this master key.
+ * For v1 policy keys: not set (hkdf.hmac_tfm == NULL).
+ */
+ struct fscrypt_hkdf hkdf;
+
+ /* Size of the raw key in bytes. Set even if ->raw isn't set. */
+ u32 size;
+
+ /* True if the key in ->raw is a hardware-wrapped key. */
+ bool is_hw_wrapped;
+
+ /*
+ * For v1 policy keys: the raw key. Wiped for v2 policy keys, unless
+ * ->is_hw_wrapped is true, in which case this contains the wrapped key
+ * rather than the key with which 'hkdf' was keyed.
+ */
+ u8 raw[FSCRYPT_MAX_HW_WRAPPED_KEY_SIZE];
+
+} __randomize_layout;
+
+/*
+ * fscrypt_master_key - an in-use master key
+ *
+ * This represents a master encryption key which has been added to the
+ * filesystem and can be used to "unlock" the encrypted files which were
+ * encrypted with it.
+ */
+struct fscrypt_master_key {
+
+ /*
+ * The secret key material. After FS_IOC_REMOVE_ENCRYPTION_KEY is
+ * executed, this is wiped and no new inodes can be unlocked with this
+ * key; however, there may still be inodes in ->mk_decrypted_inodes
+ * which could not be evicted. As long as some inodes still remain,
+ * FS_IOC_REMOVE_ENCRYPTION_KEY can be retried, or
+ * FS_IOC_ADD_ENCRYPTION_KEY can add the secret again.
+ *
+ * Locking: protected by key->sem (outer) and mk_secret_sem (inner).
+ * The reason for two locks is that key->sem also protects modifying
+ * mk_users, which ranks it above the semaphore for the keyring key
+ * type, which is in turn above page faults (via keyring_read). But
+ * sometimes filesystems call fscrypt_get_encryption_info() from within
+ * a transaction, which ranks it below page faults. So we need a
+ * separate lock which protects mk_secret but not also mk_users.
+ */
+ struct fscrypt_master_key_secret mk_secret;
+ struct rw_semaphore mk_secret_sem;
+
+ /*
+ * For v1 policy keys: an arbitrary key descriptor which was assigned by
+ * userspace (->descriptor).
+ *
+ * For v2 policy keys: a cryptographic hash of this key (->identifier).
+ */
+ struct fscrypt_key_specifier mk_spec;
+
+ /*
+ * Keyring which contains a key of type 'key_type_fscrypt_user' for each
+ * user who has added this key. Normally each key will be added by just
+ * one user, but it's possible that multiple users share a key, and in
+ * that case we need to keep track of those users so that one user can't
+ * remove the key before the others want it removed too.
+ *
+ * This is NULL for v1 policy keys; those can only be added by root.
+ *
+ * Locking: in addition to this keyring's own semaphore, this is
+ * protected by the master key's key->sem, so we can do atomic
+ * search+insert. It can also be searched without taking any locks, but
+ * in that case the returned key may have already been removed.
+ */
+ struct key *mk_users;
+
+ /*
+ * Length of ->mk_decrypted_inodes, plus one if mk_secret is present.
+ * Once this goes to 0, the master key is removed from ->s_master_keys.
+ * The 'struct fscrypt_master_key' will continue to live as long as the
+ * 'struct key' whose payload it is, but we won't let this reference
+ * count rise again.
+ */
+ refcount_t mk_refcount;
+
+ /*
+ * List of inodes that were unlocked using this key. This allows the
+ * inodes to be evicted efficiently if the key is removed.
+ */
+ struct list_head mk_decrypted_inodes;
+ spinlock_t mk_decrypted_inodes_lock;
+
+ /* Per-mode keys for DIRECT_KEY policies, allocated on-demand */
+ struct fscrypt_prepared_key mk_direct_keys[__FSCRYPT_MODE_MAX + 1];
+
+ /* Per-mode keys for IV_INO_LBLK_64 policies, allocated on-demand */
+ struct fscrypt_prepared_key mk_iv_ino_lblk_64_keys[__FSCRYPT_MODE_MAX + 1];
+
+} __randomize_layout;
+
+static inline bool
+is_master_key_secret_present(const struct fscrypt_master_key_secret *secret)
+{
+ /*
+ * The READ_ONCE() is only necessary for fscrypt_drop_inode() and
+ * fscrypt_key_describe(). These run in atomic context, so they can't
+ * take ->mk_secret_sem and thus 'secret' can change concurrently which
+ * would be a data race. But they only need to know whether the secret
+ * *was* present at the time of check, so READ_ONCE() suffices.
+ */
+ return READ_ONCE(secret->size) != 0;
+}
+
+static inline const char *master_key_spec_type(
+ const struct fscrypt_key_specifier *spec)
+{
+ switch (spec->type) {
+ case FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR:
+ return "descriptor";
+ case FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER:
+ return "identifier";
+ }
+ return "[unknown]";
+}
+
+static inline int master_key_spec_len(const struct fscrypt_key_specifier *spec)
+{
+ switch (spec->type) {
+ case FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR:
+ return FSCRYPT_KEY_DESCRIPTOR_SIZE;
+ case FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER:
+ return FSCRYPT_KEY_IDENTIFIER_SIZE;
+ }
+ return 0;
+}
+
+extern struct key *
+fscrypt_find_master_key(struct super_block *sb,
+ const struct fscrypt_key_specifier *mk_spec);
+
+extern int fscrypt_verify_key_added(struct super_block *sb,
+ const u8 identifier[FSCRYPT_KEY_IDENTIFIER_SIZE]);
+
+extern int __init fscrypt_init_keyring(void);
+
+/* keysetup.c */
struct fscrypt_mode {
const char *friendly_name;
@@ -179,10 +569,44 @@ struct fscrypt_mode {
int keysize;
int ivsize;
bool logged_impl_name;
- bool needs_essiv;
- bool inline_encryption;
+ enum blk_crypto_mode_num blk_crypto_mode;
};
-extern void __exit fscrypt_essiv_cleanup(void);
+extern struct fscrypt_mode fscrypt_modes[];
+
+static inline bool
+fscrypt_mode_supports_direct_key(const struct fscrypt_mode *mode)
+{
+ return mode->ivsize >= offsetofend(union fscrypt_iv, nonce);
+}
+
+extern int fscrypt_prepare_key(struct fscrypt_prepared_key *prep_key,
+ const u8 *raw_key, unsigned int raw_key_size,
+ bool is_hw_wrapped,
+ const struct fscrypt_info *ci);
+
+extern void fscrypt_destroy_prepared_key(struct fscrypt_prepared_key *prep_key);
+
+extern int fscrypt_set_derived_key(struct fscrypt_info *ci,
+ const u8 *derived_key);
+
+/* keysetup_v1.c */
+
+extern void fscrypt_put_direct_key(struct fscrypt_direct_key *dk);
+
+extern int fscrypt_setup_v1_file_key(struct fscrypt_info *ci,
+ const u8 *raw_master_key);
+
+extern int fscrypt_setup_v1_file_key_via_subscribed_keyrings(
+ struct fscrypt_info *ci);
+/* policy.c */
+
+extern bool fscrypt_policies_equal(const union fscrypt_policy *policy1,
+ const union fscrypt_policy *policy2);
+extern bool fscrypt_supported_policy(const union fscrypt_policy *policy_u,
+ const struct inode *inode);
+extern int fscrypt_policy_from_context(union fscrypt_policy *policy_u,
+ const union fscrypt_context *ctx_u,
+ int ctx_size);
#endif /* _FSCRYPT_PRIVATE_H */
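Editorial note (not part of the patch): the two specifier helpers above are what keyring.c later uses to validate userspace-supplied key specifiers and to build key descriptions. A v1 specifier carries the 8-byte descriptor and a v2 specifier the 16-byte identifier, so for example:

	struct fscrypt_key_specifier spec = {
		.type = FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER,
	};

	master_key_spec_type(&spec);	/* "identifier" */
	master_key_spec_len(&spec);	/* FSCRYPT_KEY_IDENTIFIER_SIZE == 16 */

keyring.c then formats the key description as the hex dump of that payload ("%*phN" with the returned length); see format_mk_description() below.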
diff --git a/fs/crypto/hkdf.c b/fs/crypto/hkdf.c
new file mode 100644
index 0000000..2c02600
--- /dev/null
+++ b/fs/crypto/hkdf.c
@@ -0,0 +1,183 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Implementation of HKDF ("HMAC-based Extract-and-Expand Key Derivation
+ * Function"), aka RFC 5869. See also the original paper (Krawczyk 2010):
+ * "Cryptographic Extraction and Key Derivation: The HKDF Scheme".
+ *
+ * This is used to derive keys from the fscrypt master keys.
+ *
+ * Copyright 2019 Google LLC
+ */
+
+#include <crypto/hash.h>
+#include <crypto/sha.h>
+
+#include "fscrypt_private.h"
+
+/*
+ * HKDF supports any unkeyed cryptographic hash algorithm, but fscrypt uses
+ * SHA-512 because it is reasonably secure and efficient; and since it produces
+ * a 64-byte digest, deriving an AES-256-XTS key preserves all 64 bytes of
+ * entropy from the master key and requires only one iteration of HKDF-Expand.
+ */
+#define HKDF_HMAC_ALG "hmac(sha512)"
+#define HKDF_HASHLEN SHA512_DIGEST_SIZE
+
+/*
+ * HKDF consists of two steps:
+ *
+ * 1. HKDF-Extract: extract a pseudorandom key of length HKDF_HASHLEN bytes from
+ * the input keying material and optional salt.
+ * 2. HKDF-Expand: expand the pseudorandom key into output keying material of
+ * any length, parameterized by an application-specific info string.
+ *
+ * HKDF-Extract can be skipped if the input is already a pseudorandom key of
+ * length HKDF_HASHLEN bytes. However, cipher modes other than AES-256-XTS take
+ * shorter keys, and we don't want to force users of those modes to provide
+ * unnecessarily long master keys. Thus fscrypt still does HKDF-Extract. No
+ * salt is used, since fscrypt master keys should already be pseudorandom and
+ * there's no way to persist a random salt per master key from kernel mode.
+ */
+
+/* HKDF-Extract (RFC 5869 section 2.2), unsalted */
+static int hkdf_extract(struct crypto_shash *hmac_tfm, const u8 *ikm,
+ unsigned int ikmlen, u8 prk[HKDF_HASHLEN])
+{
+ static const u8 default_salt[HKDF_HASHLEN];
+ SHASH_DESC_ON_STACK(desc, hmac_tfm);
+ int err;
+
+ err = crypto_shash_setkey(hmac_tfm, default_salt, HKDF_HASHLEN);
+ if (err)
+ return err;
+
+ desc->tfm = hmac_tfm;
+ desc->flags = 0;
+ err = crypto_shash_digest(desc, ikm, ikmlen, prk);
+ shash_desc_zero(desc);
+ return err;
+}
+
+/*
+ * Compute HKDF-Extract using the given master key as the input keying material,
+ * and prepare an HMAC transform object keyed by the resulting pseudorandom key.
+ *
+ * Afterwards, the keyed HMAC transform object can be used for HKDF-Expand many
+ * times without having to recompute HKDF-Extract each time.
+ */
+int fscrypt_init_hkdf(struct fscrypt_hkdf *hkdf, const u8 *master_key,
+ unsigned int master_key_size)
+{
+ struct crypto_shash *hmac_tfm;
+ u8 prk[HKDF_HASHLEN];
+ int err;
+
+ hmac_tfm = crypto_alloc_shash(HKDF_HMAC_ALG, 0, 0);
+ if (IS_ERR(hmac_tfm)) {
+ fscrypt_err(NULL, "Error allocating " HKDF_HMAC_ALG ": %ld",
+ PTR_ERR(hmac_tfm));
+ return PTR_ERR(hmac_tfm);
+ }
+
+ if (WARN_ON(crypto_shash_digestsize(hmac_tfm) != sizeof(prk))) {
+ err = -EINVAL;
+ goto err_free_tfm;
+ }
+
+ err = hkdf_extract(hmac_tfm, master_key, master_key_size, prk);
+ if (err)
+ goto err_free_tfm;
+
+ err = crypto_shash_setkey(hmac_tfm, prk, sizeof(prk));
+ if (err)
+ goto err_free_tfm;
+
+ hkdf->hmac_tfm = hmac_tfm;
+ goto out;
+
+err_free_tfm:
+ crypto_free_shash(hmac_tfm);
+out:
+ memzero_explicit(prk, sizeof(prk));
+ return err;
+}
+
+/*
+ * HKDF-Expand (RFC 5869 section 2.3). This expands the pseudorandom key, which
+ * was already keyed into 'hkdf->hmac_tfm' by fscrypt_init_hkdf(), into 'okmlen'
+ * bytes of output keying material parameterized by the application-specific
+ * 'info' of length 'infolen' bytes, prefixed by "fscrypt\0" and the 'context'
+ * byte. This is thread-safe and may be called by multiple threads in parallel.
+ *
+ * ('context' isn't part of the HKDF specification; it's just a prefix fscrypt
+ * adds to its application-specific info strings to guarantee that it doesn't
+ * accidentally repeat an info string when using HKDF for different purposes.)
+ */
+int fscrypt_hkdf_expand(struct fscrypt_hkdf *hkdf, u8 context,
+ const u8 *info, unsigned int infolen,
+ u8 *okm, unsigned int okmlen)
+{
+ SHASH_DESC_ON_STACK(desc, hkdf->hmac_tfm);
+ u8 prefix[9];
+ unsigned int i;
+ int err;
+ const u8 *prev = NULL;
+ u8 counter = 1;
+ u8 tmp[HKDF_HASHLEN];
+
+ if (WARN_ON(okmlen > 255 * HKDF_HASHLEN))
+ return -EINVAL;
+
+ desc->tfm = hkdf->hmac_tfm;
+ desc->flags = 0;
+
+ memcpy(prefix, "fscrypt\0", 8);
+ prefix[8] = context;
+
+ for (i = 0; i < okmlen; i += HKDF_HASHLEN) {
+
+ err = crypto_shash_init(desc);
+ if (err)
+ goto out;
+
+ if (prev) {
+ err = crypto_shash_update(desc, prev, HKDF_HASHLEN);
+ if (err)
+ goto out;
+ }
+
+ err = crypto_shash_update(desc, prefix, sizeof(prefix));
+ if (err)
+ goto out;
+
+ err = crypto_shash_update(desc, info, infolen);
+ if (err)
+ goto out;
+
+ BUILD_BUG_ON(sizeof(counter) != 1);
+ if (okmlen - i < HKDF_HASHLEN) {
+ err = crypto_shash_finup(desc, &counter, 1, tmp);
+ if (err)
+ goto out;
+ memcpy(&okm[i], tmp, okmlen - i);
+ memzero_explicit(tmp, sizeof(tmp));
+ } else {
+ err = crypto_shash_finup(desc, &counter, 1, &okm[i]);
+ if (err)
+ goto out;
+ }
+ counter++;
+ prev = &okm[i];
+ }
+ err = 0;
+out:
+ if (unlikely(err))
+ memzero_explicit(okm, okmlen); /* so caller doesn't need to */
+ shash_desc_zero(desc);
+ return err;
+}
+
+void fscrypt_destroy_hkdf(struct fscrypt_hkdf *hkdf)
+{
+ crypto_free_shash(hkdf->hmac_tfm);
+}
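Editorial note (not part of the patch): for readers unfamiliar with HKDF, the Extract step above is simply prk = HMAC-SHA512(64 zero bytes, master_key), and each Expand block is HMAC-SHA512(prk, T(i-1) || "fscrypt\0" || context || i_info || counter). The userspace sketch below reproduces the Expand step using OpenSSL's one-shot HMAC(); OpenSSL is an assumption made only for this illustration.

/*
 * Illustrative userspace sketch of the HKDF-Expand construction used by
 * fscrypt_hkdf_expand().  Assumes OpenSSL; not part of this patch.
 */
#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <stdint.h>
#include <string.h>

#define HASHLEN 64	/* SHA-512 digest size, matches HKDF_HASHLEN */

static int hkdf_expand_sketch(const uint8_t prk[HASHLEN], uint8_t context,
			      const uint8_t *info, size_t infolen,
			      uint8_t *okm, size_t okmlen)
{
	uint8_t prev[HASHLEN], msg[HASHLEN + 9 + 255 + 1];
	uint8_t counter = 1;
	size_t done = 0, msglen;
	unsigned int outlen;

	/* The 255-byte info limit is a simplification for this sketch. */
	if (okmlen > 255 * HASHLEN || infolen > 255)
		return -1;

	while (done < okmlen) {
		msglen = 0;
		if (counter > 1) {			/* T(i-1) */
			memcpy(msg, prev, HASHLEN);
			msglen = HASHLEN;
		}
		memcpy(msg + msglen, "fscrypt\0", 8);	/* fscrypt prefix */
		msg[msglen + 8] = context;		/* per-purpose context byte */
		msglen += 9;
		memcpy(msg + msglen, info, infolen);	/* application-specific info */
		msglen += infolen;
		msg[msglen++] = counter;		/* RFC 5869 block counter */

		if (!HMAC(EVP_sha512(), prk, HASHLEN, msg, msglen, prev, &outlen))
			return -1;

		memcpy(okm + done, prev,
		       (okmlen - done < HASHLEN) ? okmlen - done : HASHLEN);
		done += HASHLEN;
		counter++;
	}
	return 0;
}

The per-purpose context byte is what lets fscrypt derive several different keys from one master key without ever repeating an info string.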
diff --git a/fs/crypto/hooks.c b/fs/crypto/hooks.c
index 34c2d03..30b1ca6 100644
--- a/fs/crypto/hooks.c
+++ b/fs/crypto/hooks.c
@@ -38,9 +38,9 @@ int fscrypt_file_open(struct inode *inode, struct file *filp)
dir = dget_parent(file_dentry(filp));
if (IS_ENCRYPTED(d_inode(dir)) &&
!fscrypt_has_permitted_context(d_inode(dir), inode)) {
- fscrypt_warn(inode->i_sb,
- "inconsistent encryption contexts: %lu/%lu",
- d_inode(dir)->i_ino, inode->i_ino);
+ fscrypt_warn(inode,
+ "Inconsistent encryption context (parent directory: %lu)",
+ d_inode(dir)->i_ino);
err = -EPERM;
}
dput(dir);
diff --git a/fs/crypto/inline_crypt.c b/fs/crypto/inline_crypt.c
new file mode 100644
index 0000000..00da0ef
--- /dev/null
+++ b/fs/crypto/inline_crypt.c
@@ -0,0 +1,353 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Inline encryption support for fscrypt
+ *
+ * Copyright 2019 Google LLC
+ */
+
+/*
+ * With "inline encryption", the block layer handles the decryption/encryption
+ * as part of the bio, instead of the filesystem doing the crypto itself via
+ * crypto API. See Documentation/block/inline-encryption.rst. fscrypt still
+ * provides the key and IV to use.
+ */
+
+#include <linux/blk-crypto.h>
+#include <linux/blkdev.h>
+#include <linux/buffer_head.h>
+#include <linux/keyslot-manager.h>
+
+#include "fscrypt_private.h"
+
+struct fscrypt_blk_crypto_key {
+ struct blk_crypto_key base;
+ int num_devs;
+ struct request_queue *devs[];
+};
+
+/* Enable inline encryption for this file if supported. */
+void fscrypt_select_encryption_impl(struct fscrypt_info *ci)
+{
+ const struct inode *inode = ci->ci_inode;
+ struct super_block *sb = inode->i_sb;
+
+ /* The file must need contents encryption, not filenames encryption */
+ if (!S_ISREG(inode->i_mode))
+ return;
+
+ /* blk-crypto must implement the needed encryption algorithm */
+ if (ci->ci_mode->blk_crypto_mode == BLK_ENCRYPTION_MODE_INVALID)
+ return;
+
+ /* The filesystem must be mounted with -o inlinecrypt */
+ if (!sb->s_cop->inline_crypt_enabled ||
+ !sb->s_cop->inline_crypt_enabled(sb))
+ return;
+
+ ci->ci_inlinecrypt = true;
+}
+
+int fscrypt_prepare_inline_crypt_key(struct fscrypt_prepared_key *prep_key,
+ const u8 *raw_key,
+ unsigned int raw_key_size,
+ bool is_hw_wrapped,
+ const struct fscrypt_info *ci)
+{
+ const struct inode *inode = ci->ci_inode;
+ struct super_block *sb = inode->i_sb;
+ enum blk_crypto_mode_num crypto_mode = ci->ci_mode->blk_crypto_mode;
+ int num_devs = 1;
+ int queue_refs = 0;
+ struct fscrypt_blk_crypto_key *blk_key;
+ int err;
+ int i;
+
+ if (sb->s_cop->get_num_devices)
+ num_devs = sb->s_cop->get_num_devices(sb);
+ if (WARN_ON(num_devs < 1))
+ return -EINVAL;
+
+ blk_key = kzalloc(struct_size(blk_key, devs, num_devs), GFP_NOFS);
+ if (!blk_key)
+ return -ENOMEM;
+
+ blk_key->num_devs = num_devs;
+ if (num_devs == 1)
+ blk_key->devs[0] = bdev_get_queue(sb->s_bdev);
+ else
+ sb->s_cop->get_devices(sb, blk_key->devs);
+
+ BUILD_BUG_ON(FSCRYPT_MAX_HW_WRAPPED_KEY_SIZE >
+ BLK_CRYPTO_MAX_WRAPPED_KEY_SIZE);
+
+ err = blk_crypto_init_key(&blk_key->base, raw_key, raw_key_size,
+ is_hw_wrapped, crypto_mode, sb->s_blocksize);
+ if (err) {
+ fscrypt_err(inode, "error %d initializing blk-crypto key", err);
+ goto fail;
+ }
+
+ /*
+ * We have to start using blk-crypto on all the filesystem's devices.
+ * We also have to save all the request_queues for later so that the
+ * key can be evicted from them. This is needed because some keys
+ * aren't destroyed until after the filesystem has been unmounted
+ * (namely, the per-mode keys in struct fscrypt_master_key).
+ */
+ for (i = 0; i < num_devs; i++) {
+ if (!blk_get_queue(blk_key->devs[i])) {
+ fscrypt_err(inode, "couldn't get request_queue");
+ err = -EAGAIN;
+ goto fail;
+ }
+ queue_refs++;
+
+ err = blk_crypto_start_using_mode(crypto_mode, sb->s_blocksize,
+ blk_key->devs[i]);
+ if (err) {
+ fscrypt_err(inode,
+ "error %d starting to use blk-crypto", err);
+ goto fail;
+ }
+ }
+ /*
+ * Pairs with READ_ONCE() in fscrypt_is_key_prepared(). (Only matters
+ * for the per-mode keys, which are shared by multiple inodes.)
+ */
+ smp_store_release(&prep_key->blk_key, blk_key);
+ return 0;
+
+fail:
+ for (i = 0; i < queue_refs; i++)
+ blk_put_queue(blk_key->devs[i]);
+ kzfree(blk_key);
+ return err;
+}
+
+void fscrypt_destroy_inline_crypt_key(struct fscrypt_prepared_key *prep_key)
+{
+ struct fscrypt_blk_crypto_key *blk_key = prep_key->blk_key;
+ int i;
+
+ if (blk_key) {
+ for (i = 0; i < blk_key->num_devs; i++) {
+ blk_crypto_evict_key(blk_key->devs[i], &blk_key->base);
+ blk_put_queue(blk_key->devs[i]);
+ }
+ kzfree(blk_key);
+ }
+}
+
+int fscrypt_derive_raw_secret(struct super_block *sb,
+ const u8 *wrapped_key,
+ unsigned int wrapped_key_size,
+ u8 *raw_secret, unsigned int raw_secret_size)
+{
+ struct request_queue *q;
+
+ q = sb->s_bdev->bd_queue;
+ if (!q->ksm)
+ return -EOPNOTSUPP;
+
+ return keyslot_manager_derive_raw_secret(q->ksm,
+ wrapped_key, wrapped_key_size,
+ raw_secret, raw_secret_size);
+}
+
+/**
+ * fscrypt_inode_uses_inline_crypto - test whether an inode uses inline
+ * encryption
+ * @inode: an inode
+ *
+ * Return: true if the inode requires file contents encryption and if the
+ * encryption should be done in the block layer via blk-crypto rather
+ * than in the filesystem layer.
+ */
+bool fscrypt_inode_uses_inline_crypto(const struct inode *inode)
+{
+ return IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode) &&
+ inode->i_crypt_info->ci_inlinecrypt;
+}
+EXPORT_SYMBOL_GPL(fscrypt_inode_uses_inline_crypto);
+
+/**
+ * fscrypt_inode_uses_fs_layer_crypto - test whether an inode uses fs-layer
+ * encryption
+ * @inode: an inode
+ *
+ * Return: true if the inode requires file contents encryption and if the
+ * encryption should be done in the filesystem layer rather than in the
+ * block layer via blk-crypto.
+ */
+bool fscrypt_inode_uses_fs_layer_crypto(const struct inode *inode)
+{
+ return IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode) &&
+ !inode->i_crypt_info->ci_inlinecrypt;
+}
+EXPORT_SYMBOL_GPL(fscrypt_inode_uses_fs_layer_crypto);
+
+static void fscrypt_generate_dun(const struct fscrypt_info *ci, u64 lblk_num,
+ u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE])
+{
+ union fscrypt_iv iv;
+ int i;
+
+ fscrypt_generate_iv(&iv, lblk_num, ci);
+
+ BUILD_BUG_ON(FSCRYPT_MAX_IV_SIZE > BLK_CRYPTO_MAX_IV_SIZE);
+ memset(dun, 0, BLK_CRYPTO_MAX_IV_SIZE);
+ for (i = 0; i < ci->ci_mode->ivsize/sizeof(dun[0]); i++)
+ dun[i] = le64_to_cpu(iv.dun[i]);
+}
+
+/**
+ * fscrypt_set_bio_crypt_ctx - prepare a file contents bio for inline encryption
+ * @bio: a bio which will eventually be submitted to the file
+ * @inode: the file's inode
+ * @first_lblk: the first file logical block number in the I/O
+ * @gfp_mask: memory allocation flags - these must be a waiting mask so that
+ * bio_crypt_set_ctx can't fail.
+ *
+ * If the contents of the file should be encrypted (or decrypted) with inline
+ * encryption, then assign the appropriate encryption context to the bio.
+ *
+ * Normally the bio should be newly allocated (i.e. no pages added yet), as
+ * otherwise fscrypt_mergeable_bio() won't work as intended.
+ *
+ * The encryption context will be freed automatically when the bio is freed.
+ *
+ * This function also handles setting bi_skip_dm_default_key when needed.
+ */
+void fscrypt_set_bio_crypt_ctx(struct bio *bio, const struct inode *inode,
+ u64 first_lblk, gfp_t gfp_mask)
+{
+ const struct fscrypt_info *ci = inode->i_crypt_info;
+ u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE];
+
+ if (fscrypt_inode_should_skip_dm_default_key(inode))
+ bio_set_skip_dm_default_key(bio);
+
+ if (!fscrypt_inode_uses_inline_crypto(inode))
+ return;
+
+ fscrypt_generate_dun(ci, first_lblk, dun);
+ bio_crypt_set_ctx(bio, &ci->ci_key.blk_key->base, dun, gfp_mask);
+}
+EXPORT_SYMBOL_GPL(fscrypt_set_bio_crypt_ctx);
+
+/* Extract the inode and logical block number from a buffer_head. */
+static bool bh_get_inode_and_lblk_num(const struct buffer_head *bh,
+ const struct inode **inode_ret,
+ u64 *lblk_num_ret)
+{
+ struct page *page = bh->b_page;
+ const struct address_space *mapping;
+ const struct inode *inode;
+
+ /*
+ * The ext4 journal (jbd2) can submit a buffer_head it directly created
+ * for a non-pagecache page. fscrypt doesn't care about these.
+ */
+ mapping = page_mapping(page);
+ if (!mapping)
+ return false;
+ inode = mapping->host;
+
+ *inode_ret = inode;
+ *lblk_num_ret = ((u64)page->index << (PAGE_SHIFT - inode->i_blkbits)) +
+ (bh_offset(bh) >> inode->i_blkbits);
+ return true;
+}
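Editorial note (not part of the patch): as a concrete instance of the arithmetic above, with 4096-byte pages (PAGE_SHIFT == 12) and a 1024-byte filesystem block size (i_blkbits == 10), a buffer_head at bh_offset() 2048 within page index 3 yields

	lblk_num = (3ULL << (12 - 10)) + (2048 >> 10);	/* 12 + 2 == 14 */

i.e. the page covers logical blocks 12-15 and this buffer_head is block 14.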
+
+/**
+ * fscrypt_set_bio_crypt_ctx_bh - prepare a file contents bio for inline
+ * encryption
+ * @bio: a bio which will eventually be submitted to the file
+ * @first_bh: the first buffer_head for which I/O will be submitted
+ * @gfp_mask: memory allocation flags
+ *
+ * Same as fscrypt_set_bio_crypt_ctx(), except this takes a buffer_head instead
+ * of an inode and block number directly.
+ */
+void fscrypt_set_bio_crypt_ctx_bh(struct bio *bio,
+ const struct buffer_head *first_bh,
+ gfp_t gfp_mask)
+{
+ const struct inode *inode;
+ u64 first_lblk;
+
+ if (bh_get_inode_and_lblk_num(first_bh, &inode, &first_lblk))
+ fscrypt_set_bio_crypt_ctx(bio, inode, first_lblk, gfp_mask);
+}
+EXPORT_SYMBOL_GPL(fscrypt_set_bio_crypt_ctx_bh);
+
+/**
+ * fscrypt_mergeable_bio - test whether data can be added to a bio
+ * @bio: the bio being built up
+ * @inode: the inode for the next part of the I/O
+ * @next_lblk: the next file logical block number in the I/O
+ *
+ * When building a bio that may contain data that should undergo inline
+ * encryption (or decryption) via fscrypt, filesystems should call this function
+ * to ensure that the resulting bio contains only logically contiguous data.
+ * This will return false if the next part of the I/O cannot be merged with the
+ * bio because either the encryption key would be different or the encryption
+ * data unit numbers would be discontiguous.
+ *
+ * fscrypt_set_bio_crypt_ctx() must have already been called on the bio.
+ *
+ * This function also returns false if the next part of the I/O would need to
+ * have a different value for the bi_skip_dm_default_key flag.
+ *
+ * Return: true iff the I/O is mergeable
+ */
+bool fscrypt_mergeable_bio(struct bio *bio, const struct inode *inode,
+ u64 next_lblk)
+{
+ const struct bio_crypt_ctx *bc = bio->bi_crypt_context;
+ u64 next_dun[BLK_CRYPTO_DUN_ARRAY_SIZE];
+
+ if (!!bc != fscrypt_inode_uses_inline_crypto(inode))
+ return false;
+ if (bio_should_skip_dm_default_key(bio) !=
+ fscrypt_inode_should_skip_dm_default_key(inode))
+ return false;
+ if (!bc)
+ return true;
+
+ /*
+ * Comparing the key pointers is good enough, as all I/O for each key
+ * uses the same pointer. I.e., there's currently no need to support
+ * merging requests where the keys are the same but the pointers differ.
+ */
+ if (bc->bc_key != &inode->i_crypt_info->ci_key.blk_key->base)
+ return false;
+
+ fscrypt_generate_dun(inode->i_crypt_info, next_lblk, next_dun);
+ return bio_crypt_dun_is_contiguous(bc, bio->bi_iter.bi_size, next_dun);
+}
+EXPORT_SYMBOL_GPL(fscrypt_mergeable_bio);
+
+/**
+ * fscrypt_mergeable_bio_bh - test whether data can be added to a bio
+ * @bio: the bio being built up
+ * @next_bh: the next buffer_head for which I/O will be submitted
+ *
+ * Same as fscrypt_mergeable_bio(), except this takes a buffer_head instead of
+ * an inode and block number directly.
+ *
+ * Return: true iff the I/O is mergeable
+ */
+bool fscrypt_mergeable_bio_bh(struct bio *bio,
+ const struct buffer_head *next_bh)
+{
+ const struct inode *inode;
+ u64 next_lblk;
+
+ if (!bh_get_inode_and_lblk_num(next_bh, &inode, &next_lblk))
+ return !bio->bi_crypt_context &&
+ !bio_should_skip_dm_default_key(bio);
+
+ return fscrypt_mergeable_bio(bio, inode, next_lblk);
+}
+EXPORT_SYMBOL_GPL(fscrypt_mergeable_bio_bh);
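Editorial note (not part of the patch): taken together, the intended calling pattern for a filesystem building a contents bio looks roughly like the sketch below. The block-layer calls (bio_alloc(), bio_add_page(), submit_bio()) and the helper name are assumptions of the sketch, not additions of this patch.

static struct bio *fs_add_block_to_bio(struct bio *bio, struct inode *inode,
					u64 lblk, struct page *page)
{
	/* A key or DUN discontinuity means this block can't share the bio. */
	if (bio && !fscrypt_mergeable_bio(bio, inode, lblk)) {
		submit_bio(bio);
		bio = NULL;
	}

	if (!bio) {
		bio = bio_alloc(GFP_NOFS, BIO_MAX_PAGES);
		/* Must be done while the bio is still empty. */
		fscrypt_set_bio_crypt_ctx(bio, inode, lblk, GFP_NOFS);
		/* Setting the target device, sector and op is omitted here. */
	}

	bio_add_page(bio, page, PAGE_SIZE, 0);
	return bio;
}

If fscrypt_inode_uses_fs_layer_crypto() is true instead, no crypt context is attached and the filesystem en/decrypts the data itself before submission or after completion, as it did before this patch.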
diff --git a/fs/crypto/keyinfo.c b/fs/crypto/keyinfo.c
deleted file mode 100644
index 123598c..0000000
--- a/fs/crypto/keyinfo.c
+++ /dev/null
@@ -1,650 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * key management facility for FS encryption support.
- *
- * Copyright (C) 2015, Google, Inc.
- *
- * This contains encryption key functions.
- *
- * Written by Michael Halcrow, Ildar Muslukhov, and Uday Savagaonkar, 2015.
- */
-
-#include <keys/user-type.h>
-#include <linux/hashtable.h>
-#include <linux/scatterlist.h>
-#include <crypto/aes.h>
-#include <crypto/algapi.h>
-#include <crypto/sha.h>
-#include <crypto/skcipher.h>
-#include "fscrypt_private.h"
-#include "fscrypt_ice.h"
-
-static struct crypto_shash *essiv_hash_tfm;
-
-/* Table of keys referenced by FS_POLICY_FLAG_DIRECT_KEY policies */
-static DEFINE_HASHTABLE(fscrypt_master_keys, 6); /* 6 bits = 64 buckets */
-static DEFINE_SPINLOCK(fscrypt_master_keys_lock);
-
-/*
- * Key derivation function. This generates the derived key by encrypting the
- * master key with AES-128-ECB using the inode's nonce as the AES key.
- *
- * The master key must be at least as long as the derived key. If the master
- * key is longer, then only the first 'derived_keysize' bytes are used.
- */
-static int derive_key_aes(const u8 *master_key,
- const struct fscrypt_context *ctx,
- u8 *derived_key, unsigned int derived_keysize)
-{
- int res = 0;
- struct skcipher_request *req = NULL;
- DECLARE_CRYPTO_WAIT(wait);
- struct scatterlist src_sg, dst_sg;
- struct crypto_skcipher *tfm = crypto_alloc_skcipher("ecb(aes)", 0, 0);
-
- if (IS_ERR(tfm)) {
- res = PTR_ERR(tfm);
- tfm = NULL;
- goto out;
- }
- crypto_skcipher_set_flags(tfm, CRYPTO_TFM_REQ_WEAK_KEY);
- req = skcipher_request_alloc(tfm, GFP_NOFS);
- if (!req) {
- res = -ENOMEM;
- goto out;
- }
- skcipher_request_set_callback(req,
- CRYPTO_TFM_REQ_MAY_BACKLOG | CRYPTO_TFM_REQ_MAY_SLEEP,
- crypto_req_done, &wait);
- res = crypto_skcipher_setkey(tfm, ctx->nonce, sizeof(ctx->nonce));
- if (res < 0)
- goto out;
-
- sg_init_one(&src_sg, master_key, derived_keysize);
- sg_init_one(&dst_sg, derived_key, derived_keysize);
- skcipher_request_set_crypt(req, &src_sg, &dst_sg, derived_keysize,
- NULL);
- res = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
-out:
- skcipher_request_free(req);
- crypto_free_skcipher(tfm);
- return res;
-}
-
-/*
- * Search the current task's subscribed keyrings for a "logon" key with
- * description prefix:descriptor, and if found acquire a read lock on it and
- * return a pointer to its validated payload in *payload_ret.
- */
-static struct key *
-find_and_lock_process_key(const char *prefix,
- const u8 descriptor[FS_KEY_DESCRIPTOR_SIZE],
- unsigned int min_keysize,
- const struct fscrypt_key **payload_ret)
-{
- char *description;
- struct key *key;
- const struct user_key_payload *ukp;
- const struct fscrypt_key *payload;
-
- description = kasprintf(GFP_NOFS, "%s%*phN", prefix,
- FS_KEY_DESCRIPTOR_SIZE, descriptor);
- if (!description)
- return ERR_PTR(-ENOMEM);
-
- key = request_key(&key_type_logon, description, NULL);
- kfree(description);
- if (IS_ERR(key))
- return key;
-
- down_read(&key->sem);
- ukp = user_key_payload_locked(key);
-
- if (!ukp) /* was the key revoked before we acquired its semaphore? */
- goto invalid;
-
- payload = (const struct fscrypt_key *)ukp->data;
-
- if (ukp->datalen != sizeof(struct fscrypt_key) ||
- payload->size < 1 || payload->size > FS_MAX_KEY_SIZE) {
- fscrypt_warn(NULL,
- "key with description '%s' has invalid payload",
- key->description);
- goto invalid;
- }
-
- if (payload->size < min_keysize) {
- fscrypt_warn(NULL,
- "key with description '%s' is too short (got %u bytes, need %u+ bytes)",
- key->description, payload->size, min_keysize);
- goto invalid;
- }
-
- *payload_ret = payload;
- return key;
-
-invalid:
- up_read(&key->sem);
- key_put(key);
- return ERR_PTR(-ENOKEY);
-}
-
-static struct fscrypt_mode available_modes[] = {
- [FS_ENCRYPTION_MODE_AES_256_XTS] = {
- .friendly_name = "AES-256-XTS",
- .cipher_str = "xts(aes)",
- .keysize = 64,
- .ivsize = 16,
- },
- [FS_ENCRYPTION_MODE_AES_256_CTS] = {
- .friendly_name = "AES-256-CTS-CBC",
- .cipher_str = "cts(cbc(aes))",
- .keysize = 32,
- .ivsize = 16,
- },
- [FS_ENCRYPTION_MODE_AES_128_CBC] = {
- .friendly_name = "AES-128-CBC",
- .cipher_str = "cbc(aes)",
- .keysize = 16,
- .ivsize = 16,
- .needs_essiv = true,
- },
- [FS_ENCRYPTION_MODE_AES_128_CTS] = {
- .friendly_name = "AES-128-CTS-CBC",
- .cipher_str = "cts(cbc(aes))",
- .keysize = 16,
- .ivsize = 16,
- },
- [FS_ENCRYPTION_MODE_ADIANTUM] = {
- .friendly_name = "Adiantum",
- .cipher_str = "adiantum(xchacha12,aes)",
- .keysize = 32,
- .ivsize = 32,
- },
- [FS_ENCRYPTION_MODE_PRIVATE] = {
- .friendly_name = "ice",
- .cipher_str = "xts(aes)",
- .keysize = 64,
- .ivsize = 16,
- .inline_encryption = true,
- },
-};
-
-static struct fscrypt_mode *
-select_encryption_mode(const struct fscrypt_info *ci, const struct inode *inode)
-{
- struct fscrypt_mode *mode = NULL;
-
- if (!fscrypt_valid_enc_modes(ci->ci_data_mode, ci->ci_filename_mode)) {
- fscrypt_warn(inode->i_sb,
- "inode %lu uses unsupported encryption modes (contents mode %d, filenames mode %d)",
- inode->i_ino, ci->ci_data_mode,
- ci->ci_filename_mode);
- return ERR_PTR(-EINVAL);
- }
-
- if (S_ISREG(inode->i_mode)) {
- mode = &available_modes[ci->ci_data_mode];
- if (IS_ERR(mode)) {
- fscrypt_warn(inode->i_sb, "Invalid mode");
- return ERR_PTR(-EINVAL);
- }
- if (mode->inline_encryption &&
- !fscrypt_is_ice_capable(inode->i_sb)) {
- fscrypt_warn(inode->i_sb, "ICE support not available");
- return ERR_PTR(-EINVAL);
- }
- return mode;
- }
-
- if (S_ISDIR(inode->i_mode) || S_ISLNK(inode->i_mode))
- return &available_modes[ci->ci_filename_mode];
-
- WARN_ONCE(1, "fscrypt: filesystem tried to load encryption info for inode %lu, which is not encryptable (file type %d)\n",
- inode->i_ino, (inode->i_mode & S_IFMT));
- return ERR_PTR(-EINVAL);
-}
-
-/* Find the master key, then derive the inode's actual encryption key */
-static int find_and_derive_key(const struct inode *inode,
- const struct fscrypt_context *ctx,
- u8 *derived_key, const struct fscrypt_mode *mode)
-{
- struct key *key;
- const struct fscrypt_key *payload;
- int err;
-
- key = find_and_lock_process_key(FS_KEY_DESC_PREFIX,
- ctx->master_key_descriptor,
- mode->keysize, &payload);
- if (key == ERR_PTR(-ENOKEY) && inode->i_sb->s_cop->key_prefix) {
- key = find_and_lock_process_key(inode->i_sb->s_cop->key_prefix,
- ctx->master_key_descriptor,
- mode->keysize, &payload);
- }
- if (IS_ERR(key))
- return PTR_ERR(key);
-
- if (ctx->flags & FS_POLICY_FLAG_DIRECT_KEY) {
- if (mode->ivsize < offsetofend(union fscrypt_iv, nonce)) {
- fscrypt_warn(inode->i_sb,
- "direct key mode not allowed with %s",
- mode->friendly_name);
- err = -EINVAL;
- } else if (ctx->contents_encryption_mode !=
- ctx->filenames_encryption_mode) {
- fscrypt_warn(inode->i_sb,
- "direct key mode not allowed with different contents and filenames modes");
- err = -EINVAL;
- } else {
- memcpy(derived_key, payload->raw, mode->keysize);
- err = 0;
- }
- } else if (mode->inline_encryption) {
- memcpy(derived_key, payload->raw, mode->keysize);
- err = 0;
- } else {
- err = derive_key_aes(payload->raw, ctx, derived_key,
- mode->keysize);
- }
- up_read(&key->sem);
- key_put(key);
- return err;
-}
-
-/* Allocate and key a symmetric cipher object for the given encryption mode */
-static struct crypto_skcipher *
-allocate_skcipher_for_mode(struct fscrypt_mode *mode, const u8 *raw_key,
- const struct inode *inode)
-{
- struct crypto_skcipher *tfm;
- int err;
-
- tfm = crypto_alloc_skcipher(mode->cipher_str, 0, 0);
- if (IS_ERR(tfm)) {
- fscrypt_warn(inode->i_sb,
- "error allocating '%s' transform for inode %lu: %ld",
- mode->cipher_str, inode->i_ino, PTR_ERR(tfm));
- return tfm;
- }
- if (unlikely(!mode->logged_impl_name)) {
- /*
- * fscrypt performance can vary greatly depending on which
- * crypto algorithm implementation is used. Help people debug
- * performance problems by logging the ->cra_driver_name the
- * first time a mode is used. Note that multiple threads can
- * race here, but it doesn't really matter.
- */
- mode->logged_impl_name = true;
- pr_info("fscrypt: %s using implementation \"%s\"\n",
- mode->friendly_name,
- crypto_skcipher_alg(tfm)->base.cra_driver_name);
- }
- crypto_skcipher_set_flags(tfm, CRYPTO_TFM_REQ_WEAK_KEY);
- err = crypto_skcipher_setkey(tfm, raw_key, mode->keysize);
- if (err)
- goto err_free_tfm;
-
- return tfm;
-
-err_free_tfm:
- crypto_free_skcipher(tfm);
- return ERR_PTR(err);
-}
-
-/* Master key referenced by FS_POLICY_FLAG_DIRECT_KEY policy */
-struct fscrypt_master_key {
- struct hlist_node mk_node;
- refcount_t mk_refcount;
- const struct fscrypt_mode *mk_mode;
- struct crypto_skcipher *mk_ctfm;
- u8 mk_descriptor[FS_KEY_DESCRIPTOR_SIZE];
- u8 mk_raw[FS_MAX_KEY_SIZE];
-};
-
-static void free_master_key(struct fscrypt_master_key *mk)
-{
- if (mk) {
- crypto_free_skcipher(mk->mk_ctfm);
- kzfree(mk);
- }
-}
-
-static void put_master_key(struct fscrypt_master_key *mk)
-{
- if (!refcount_dec_and_lock(&mk->mk_refcount, &fscrypt_master_keys_lock))
- return;
- hash_del(&mk->mk_node);
- spin_unlock(&fscrypt_master_keys_lock);
-
- free_master_key(mk);
-}
-
-/*
- * Find/insert the given master key into the fscrypt_master_keys table. If
- * found, it is returned with elevated refcount, and 'to_insert' is freed if
- * non-NULL. If not found, 'to_insert' is inserted and returned if it's
- * non-NULL; otherwise NULL is returned.
- */
-static struct fscrypt_master_key *
-find_or_insert_master_key(struct fscrypt_master_key *to_insert,
- const u8 *raw_key, const struct fscrypt_mode *mode,
- const struct fscrypt_info *ci)
-{
- unsigned long hash_key;
- struct fscrypt_master_key *mk;
-
- /*
- * Careful: to avoid potentially leaking secret key bytes via timing
- * information, we must key the hash table by descriptor rather than by
- * raw key, and use crypto_memneq() when comparing raw keys.
- */
-
- BUILD_BUG_ON(sizeof(hash_key) > FS_KEY_DESCRIPTOR_SIZE);
- memcpy(&hash_key, ci->ci_master_key_descriptor, sizeof(hash_key));
-
- spin_lock(&fscrypt_master_keys_lock);
- hash_for_each_possible(fscrypt_master_keys, mk, mk_node, hash_key) {
- if (memcmp(ci->ci_master_key_descriptor, mk->mk_descriptor,
- FS_KEY_DESCRIPTOR_SIZE) != 0)
- continue;
- if (mode != mk->mk_mode)
- continue;
- if (crypto_memneq(raw_key, mk->mk_raw, mode->keysize))
- continue;
- /* using existing tfm with same (descriptor, mode, raw_key) */
- refcount_inc(&mk->mk_refcount);
- spin_unlock(&fscrypt_master_keys_lock);
- free_master_key(to_insert);
- return mk;
- }
- if (to_insert)
- hash_add(fscrypt_master_keys, &to_insert->mk_node, hash_key);
- spin_unlock(&fscrypt_master_keys_lock);
- return to_insert;
-}
-
-/* Prepare to encrypt directly using the master key in the given mode */
-static struct fscrypt_master_key *
-fscrypt_get_master_key(const struct fscrypt_info *ci, struct fscrypt_mode *mode,
- const u8 *raw_key, const struct inode *inode)
-{
- struct fscrypt_master_key *mk;
- int err;
-
- /* Is there already a tfm for this key? */
- mk = find_or_insert_master_key(NULL, raw_key, mode, ci);
- if (mk)
- return mk;
-
- /* Nope, allocate one. */
- mk = kzalloc(sizeof(*mk), GFP_NOFS);
- if (!mk)
- return ERR_PTR(-ENOMEM);
- refcount_set(&mk->mk_refcount, 1);
- mk->mk_mode = mode;
- mk->mk_ctfm = allocate_skcipher_for_mode(mode, raw_key, inode);
- if (IS_ERR(mk->mk_ctfm)) {
- err = PTR_ERR(mk->mk_ctfm);
- mk->mk_ctfm = NULL;
- goto err_free_mk;
- }
- memcpy(mk->mk_descriptor, ci->ci_master_key_descriptor,
- FS_KEY_DESCRIPTOR_SIZE);
- memcpy(mk->mk_raw, raw_key, mode->keysize);
-
- return find_or_insert_master_key(mk, raw_key, mode, ci);
-
-err_free_mk:
- free_master_key(mk);
- return ERR_PTR(err);
-}
-
-static int derive_essiv_salt(const u8 *key, int keysize, u8 *salt)
-{
- struct crypto_shash *tfm = READ_ONCE(essiv_hash_tfm);
-
- /* init hash transform on demand */
- if (unlikely(!tfm)) {
- struct crypto_shash *prev_tfm;
-
- tfm = crypto_alloc_shash("sha256", 0, 0);
- if (IS_ERR(tfm)) {
- fscrypt_warn(NULL,
- "error allocating SHA-256 transform: %ld",
- PTR_ERR(tfm));
- return PTR_ERR(tfm);
- }
- prev_tfm = cmpxchg(&essiv_hash_tfm, NULL, tfm);
- if (prev_tfm) {
- crypto_free_shash(tfm);
- tfm = prev_tfm;
- }
- }
-
- {
- SHASH_DESC_ON_STACK(desc, tfm);
- desc->tfm = tfm;
- desc->flags = 0;
-
- return crypto_shash_digest(desc, key, keysize, salt);
- }
-}
-
-static int init_essiv_generator(struct fscrypt_info *ci, const u8 *raw_key,
- int keysize)
-{
- int err;
- struct crypto_cipher *essiv_tfm;
- u8 salt[SHA256_DIGEST_SIZE];
-
- essiv_tfm = crypto_alloc_cipher("aes", 0, 0);
- if (IS_ERR(essiv_tfm))
- return PTR_ERR(essiv_tfm);
-
- ci->ci_essiv_tfm = essiv_tfm;
-
- err = derive_essiv_salt(raw_key, keysize, salt);
- if (err)
- goto out;
-
- /*
- * Using SHA256 to derive the salt/key will result in AES-256 being
- * used for IV generation. File contents encryption will still use the
- * configured keysize (AES-128) nevertheless.
- */
- err = crypto_cipher_setkey(essiv_tfm, salt, sizeof(salt));
- if (err)
- goto out;
-
-out:
- memzero_explicit(salt, sizeof(salt));
- return err;
-}
-
-void __exit fscrypt_essiv_cleanup(void)
-{
- crypto_free_shash(essiv_hash_tfm);
-}
-
-/*
- * Given the encryption mode and key (normally the derived key, but for
- * FS_POLICY_FLAG_DIRECT_KEY mode it's the master key), set up the inode's
- * symmetric cipher transform object(s).
- */
-static int setup_crypto_transform(struct fscrypt_info *ci,
- struct fscrypt_mode *mode,
- const u8 *raw_key, const struct inode *inode)
-{
- struct fscrypt_master_key *mk;
- struct crypto_skcipher *ctfm;
- int err;
-
- if (ci->ci_flags & FS_POLICY_FLAG_DIRECT_KEY) {
- mk = fscrypt_get_master_key(ci, mode, raw_key, inode);
- if (IS_ERR(mk))
- return PTR_ERR(mk);
- ctfm = mk->mk_ctfm;
- } else {
- mk = NULL;
- ctfm = allocate_skcipher_for_mode(mode, raw_key, inode);
- if (IS_ERR(ctfm))
- return PTR_ERR(ctfm);
- }
- ci->ci_master_key = mk;
- ci->ci_ctfm = ctfm;
-
- if (mode->needs_essiv) {
- /* ESSIV implies 16-byte IVs which implies !DIRECT_KEY */
- WARN_ON(mode->ivsize != AES_BLOCK_SIZE);
- WARN_ON(ci->ci_flags & FS_POLICY_FLAG_DIRECT_KEY);
-
- err = init_essiv_generator(ci, raw_key, mode->keysize);
- if (err) {
- fscrypt_warn(inode->i_sb,
- "error initializing ESSIV generator for inode %lu: %d",
- inode->i_ino, err);
- return err;
- }
- }
- return 0;
-}
-
-static void put_crypt_info(struct fscrypt_info *ci)
-{
- if (!ci)
- return;
-
- if (ci->ci_master_key) {
- put_master_key(ci->ci_master_key);
- } else {
- if (ci->ci_ctfm)
- crypto_free_skcipher(ci->ci_ctfm);
- if (ci->ci_essiv_tfm)
- crypto_free_cipher(ci->ci_essiv_tfm);
- }
- memset(ci->ci_raw_key, 0, sizeof(ci->ci_raw_key));
- kmem_cache_free(fscrypt_info_cachep, ci);
-}
-
-static int fscrypt_data_encryption_mode(struct inode *inode)
-{
- return fscrypt_should_be_processed_by_ice(inode) ?
- FS_ENCRYPTION_MODE_PRIVATE : FS_ENCRYPTION_MODE_AES_256_XTS;
-}
-
-int fscrypt_get_encryption_info(struct inode *inode)
-{
- struct fscrypt_info *crypt_info;
- struct fscrypt_context ctx;
- struct fscrypt_mode *mode;
- u8 *raw_key = NULL;
- int res;
-
- if (fscrypt_has_encryption_key(inode))
- return 0;
-
- res = fscrypt_initialize(inode->i_sb->s_cop->flags);
- if (res)
- return res;
-
- res = inode->i_sb->s_cop->get_context(inode, &ctx, sizeof(ctx));
- if (res < 0) {
- if (!fscrypt_dummy_context_enabled(inode) ||
- IS_ENCRYPTED(inode))
- return res;
- /* Fake up a context for an unencrypted directory */
- memset(&ctx, 0, sizeof(ctx));
- ctx.format = FS_ENCRYPTION_CONTEXT_FORMAT_V1;
- ctx.contents_encryption_mode =
- fscrypt_data_encryption_mode(inode);
- ctx.filenames_encryption_mode = FS_ENCRYPTION_MODE_AES_256_CTS;
- memset(ctx.master_key_descriptor, 0x42, FS_KEY_DESCRIPTOR_SIZE);
- } else if (res != sizeof(ctx)) {
- return -EINVAL;
- }
-
- if (ctx.format != FS_ENCRYPTION_CONTEXT_FORMAT_V1)
- return -EINVAL;
-
- if (ctx.flags & ~FS_POLICY_FLAGS_VALID)
- return -EINVAL;
-
- crypt_info = kmem_cache_zalloc(fscrypt_info_cachep, GFP_NOFS);
- if (!crypt_info)
- return -ENOMEM;
-
- crypt_info->ci_flags = ctx.flags;
- crypt_info->ci_data_mode = ctx.contents_encryption_mode;
- crypt_info->ci_filename_mode = ctx.filenames_encryption_mode;
- memcpy(crypt_info->ci_master_key_descriptor, ctx.master_key_descriptor,
- FS_KEY_DESCRIPTOR_SIZE);
- memcpy(crypt_info->ci_nonce, ctx.nonce, FS_KEY_DERIVATION_NONCE_SIZE);
-
- mode = select_encryption_mode(crypt_info, inode);
- if (IS_ERR(mode)) {
- res = PTR_ERR(mode);
- goto out;
- }
- WARN_ON(mode->ivsize > FSCRYPT_MAX_IV_SIZE);
- crypt_info->ci_mode = mode;
-
- /*
- * This cannot be a stack buffer because it may be passed to the
- * scatterlist crypto API as part of key derivation.
- */
- res = -ENOMEM;
- raw_key = kmalloc(mode->keysize, GFP_NOFS);
- if (!raw_key)
- goto out;
-
- res = find_and_derive_key(inode, &ctx, raw_key, mode);
- if (res)
- goto out;
-
- if (!mode->inline_encryption) {
- res = setup_crypto_transform(crypt_info, mode, raw_key, inode);
- if (res)
- goto out;
- } else {
- memcpy(crypt_info->ci_raw_key, raw_key, mode->keysize);
- }
-
- if (cmpxchg_release(&inode->i_crypt_info, NULL, crypt_info) == NULL)
- crypt_info = NULL;
-out:
- if (res == -ENOKEY)
- res = 0;
- put_crypt_info(crypt_info);
- kzfree(raw_key);
- return res;
-}
-EXPORT_SYMBOL(fscrypt_get_encryption_info);
-
-/**
- * fscrypt_put_encryption_info - free most of an inode's fscrypt data
- *
- * Free the inode's fscrypt_info. Filesystems must call this when the inode is
- * being evicted. An RCU grace period need not have elapsed yet.
- */
-void fscrypt_put_encryption_info(struct inode *inode)
-{
- put_crypt_info(inode->i_crypt_info);
- inode->i_crypt_info = NULL;
-}
-EXPORT_SYMBOL(fscrypt_put_encryption_info);
-
-/**
- * fscrypt_free_inode - free an inode's fscrypt data requiring RCU delay
- *
- * Free the inode's cached decrypted symlink target, if any. Filesystems must
- * call this after an RCU grace period, just before they free the inode.
- */
-void fscrypt_free_inode(struct inode *inode)
-{
- if (IS_ENCRYPTED(inode) && S_ISLNK(inode->i_mode)) {
- kfree(inode->i_link);
- inode->i_link = NULL;
- }
-}
-EXPORT_SYMBOL(fscrypt_free_inode);
diff --git a/fs/crypto/keyring.c b/fs/crypto/keyring.c
new file mode 100644
index 0000000..d524b43
--- /dev/null
+++ b/fs/crypto/keyring.c
@@ -0,0 +1,1157 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Filesystem-level keyring for fscrypt
+ *
+ * Copyright 2019 Google LLC
+ */
+
+/*
+ * This file implements management of fscrypt master keys in the
+ * filesystem-level keyring, including the ioctls:
+ *
+ * - FS_IOC_ADD_ENCRYPTION_KEY
+ * - FS_IOC_REMOVE_ENCRYPTION_KEY
+ * - FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS
+ * - FS_IOC_GET_ENCRYPTION_KEY_STATUS
+ *
+ * See the "User API" section of Documentation/filesystems/fscrypt.rst for more
+ * information about these ioctls.
+ */
+
+#include <crypto/skcipher.h>
+#include <linux/key-type.h>
+#include <linux/seq_file.h>
+
+#include "fscrypt_private.h"
+
+static void wipe_master_key_secret(struct fscrypt_master_key_secret *secret)
+{
+ fscrypt_destroy_hkdf(&secret->hkdf);
+ memzero_explicit(secret, sizeof(*secret));
+}
+
+static void move_master_key_secret(struct fscrypt_master_key_secret *dst,
+ struct fscrypt_master_key_secret *src)
+{
+ memcpy(dst, src, sizeof(*dst));
+ memzero_explicit(src, sizeof(*src));
+}
+
+static void free_master_key(struct fscrypt_master_key *mk)
+{
+ size_t i;
+
+ wipe_master_key_secret(&mk->mk_secret);
+
+ for (i = 0; i <= __FSCRYPT_MODE_MAX; i++) {
+ fscrypt_destroy_prepared_key(&mk->mk_direct_keys[i]);
+ fscrypt_destroy_prepared_key(&mk->mk_iv_ino_lblk_64_keys[i]);
+ }
+
+ key_put(mk->mk_users);
+ kzfree(mk);
+}
+
+static inline bool valid_key_spec(const struct fscrypt_key_specifier *spec)
+{
+ if (spec->__reserved)
+ return false;
+ return master_key_spec_len(spec) != 0;
+}
+
+static int fscrypt_key_instantiate(struct key *key,
+ struct key_preparsed_payload *prep)
+{
+ key->payload.data[0] = (struct fscrypt_master_key *)prep->data;
+ return 0;
+}
+
+static void fscrypt_key_destroy(struct key *key)
+{
+ free_master_key(key->payload.data[0]);
+}
+
+static void fscrypt_key_describe(const struct key *key, struct seq_file *m)
+{
+ seq_puts(m, key->description);
+
+ if (key_is_positive(key)) {
+ const struct fscrypt_master_key *mk = key->payload.data[0];
+
+ if (!is_master_key_secret_present(&mk->mk_secret))
+ seq_puts(m, ": secret removed");
+ }
+}
+
+/*
+ * Type of key in ->s_master_keys. Each key of this type represents a master
+ * key which has been added to the filesystem. Its payload is a
+ * 'struct fscrypt_master_key'. The "." prefix in the key type name prevents
+ * users from adding keys of this type via the keyrings syscalls rather than via
+ * the intended method of FS_IOC_ADD_ENCRYPTION_KEY.
+ */
+static struct key_type key_type_fscrypt = {
+ .name = "._fscrypt",
+ .instantiate = fscrypt_key_instantiate,
+ .destroy = fscrypt_key_destroy,
+ .describe = fscrypt_key_describe,
+};
+
+static int fscrypt_user_key_instantiate(struct key *key,
+ struct key_preparsed_payload *prep)
+{
+ /*
+ * We just charge FSCRYPT_MAX_KEY_SIZE bytes to the user's key quota for
+ * each key, regardless of the exact key size. The amount of memory
+ * actually used is greater than the size of the raw key anyway.
+ */
+ return key_payload_reserve(key, FSCRYPT_MAX_KEY_SIZE);
+}
+
+static void fscrypt_user_key_describe(const struct key *key, struct seq_file *m)
+{
+ seq_puts(m, key->description);
+}
+
+/*
+ * Type of key in ->mk_users. Each key of this type represents a particular
+ * user who has added a particular master key.
+ *
+ * Note that the name of this key type really should be something like
+ * ".fscrypt-user" instead of simply ".fscrypt". But the shorter name is chosen
+ * mainly for simplicity of presentation in /proc/keys when read by a non-root
+ * user. And it is expected to be rare that a key is actually added by multiple
+ * users, since users should keep their encryption keys confidential.
+ */
+static struct key_type key_type_fscrypt_user = {
+ .name = ".fscrypt",
+ .instantiate = fscrypt_user_key_instantiate,
+ .describe = fscrypt_user_key_describe,
+};
+
+/* Search ->s_master_keys or ->mk_users */
+static struct key *search_fscrypt_keyring(struct key *keyring,
+ struct key_type *type,
+ const char *description)
+{
+ /*
+ * We need to mark the keyring reference as "possessed" so that we
+ * acquire permission to search it, via the KEY_POS_SEARCH permission.
+ */
+ key_ref_t keyref = make_key_ref(keyring, true /* possessed */);
+
+ keyref = keyring_search(keyref, type, description);
+ if (IS_ERR(keyref)) {
+ if (PTR_ERR(keyref) == -EAGAIN || /* not found */
+ PTR_ERR(keyref) == -EKEYREVOKED) /* recently invalidated */
+ keyref = ERR_PTR(-ENOKEY);
+ return ERR_CAST(keyref);
+ }
+ return key_ref_to_ptr(keyref);
+}
+
+#define FSCRYPT_FS_KEYRING_DESCRIPTION_SIZE \
+ (CONST_STRLEN("fscrypt-") + FIELD_SIZEOF(struct super_block, s_id))
+
+#define FSCRYPT_MK_DESCRIPTION_SIZE (2 * FSCRYPT_KEY_IDENTIFIER_SIZE + 1)
+
+#define FSCRYPT_MK_USERS_DESCRIPTION_SIZE \
+ (CONST_STRLEN("fscrypt-") + 2 * FSCRYPT_KEY_IDENTIFIER_SIZE + \
+ CONST_STRLEN("-users") + 1)
+
+#define FSCRYPT_MK_USER_DESCRIPTION_SIZE \
+ (2 * FSCRYPT_KEY_IDENTIFIER_SIZE + CONST_STRLEN(".uid.") + 10 + 1)
+
+static void format_fs_keyring_description(
+ char description[FSCRYPT_FS_KEYRING_DESCRIPTION_SIZE],
+ const struct super_block *sb)
+{
+ sprintf(description, "fscrypt-%s", sb->s_id);
+}
+
+static void format_mk_description(
+ char description[FSCRYPT_MK_DESCRIPTION_SIZE],
+ const struct fscrypt_key_specifier *mk_spec)
+{
+ sprintf(description, "%*phN",
+ master_key_spec_len(mk_spec), (u8 *)&mk_spec->u);
+}
+
+static void format_mk_users_keyring_description(
+ char description[FSCRYPT_MK_USERS_DESCRIPTION_SIZE],
+ const u8 mk_identifier[FSCRYPT_KEY_IDENTIFIER_SIZE])
+{
+ sprintf(description, "fscrypt-%*phN-users",
+ FSCRYPT_KEY_IDENTIFIER_SIZE, mk_identifier);
+}
+
+static void format_mk_user_description(
+ char description[FSCRYPT_MK_USER_DESCRIPTION_SIZE],
+ const u8 mk_identifier[FSCRYPT_KEY_IDENTIFIER_SIZE])
+{
+
+ sprintf(description, "%*phN.uid.%u", FSCRYPT_KEY_IDENTIFIER_SIZE,
+ mk_identifier, __kuid_val(current_fsuid()));
+}
+
+/* Create ->s_master_keys if needed. Synchronized by fscrypt_add_key_mutex. */
+static int allocate_filesystem_keyring(struct super_block *sb)
+{
+ char description[FSCRYPT_FS_KEYRING_DESCRIPTION_SIZE];
+ struct key *keyring;
+
+ if (sb->s_master_keys)
+ return 0;
+
+ format_fs_keyring_description(description, sb);
+ keyring = keyring_alloc(description, GLOBAL_ROOT_UID, GLOBAL_ROOT_GID,
+ current_cred(), KEY_POS_SEARCH |
+ KEY_USR_SEARCH | KEY_USR_READ | KEY_USR_VIEW,
+ KEY_ALLOC_NOT_IN_QUOTA, NULL, NULL);
+ if (IS_ERR(keyring))
+ return PTR_ERR(keyring);
+
+ /* Pairs with READ_ONCE() in fscrypt_find_master_key() */
+ smp_store_release(&sb->s_master_keys, keyring);
+ return 0;
+}
+
+void fscrypt_sb_free(struct super_block *sb)
+{
+ key_put(sb->s_master_keys);
+ sb->s_master_keys = NULL;
+}
+
+/*
+ * Find the specified master key in ->s_master_keys.
+ * Returns ERR_PTR(-ENOKEY) if not found.
+ */
+struct key *fscrypt_find_master_key(struct super_block *sb,
+ const struct fscrypt_key_specifier *mk_spec)
+{
+ struct key *keyring;
+ char description[FSCRYPT_MK_DESCRIPTION_SIZE];
+
+ /* pairs with smp_store_release() in allocate_filesystem_keyring() */
+ keyring = READ_ONCE(sb->s_master_keys);
+ if (keyring == NULL)
+ return ERR_PTR(-ENOKEY); /* No keyring yet, so no keys yet. */
+
+ format_mk_description(description, mk_spec);
+ return search_fscrypt_keyring(keyring, &key_type_fscrypt, description);
+}
+
+static int allocate_master_key_users_keyring(struct fscrypt_master_key *mk)
+{
+ char description[FSCRYPT_MK_USERS_DESCRIPTION_SIZE];
+ struct key *keyring;
+
+ format_mk_users_keyring_description(description,
+ mk->mk_spec.u.identifier);
+ keyring = keyring_alloc(description, GLOBAL_ROOT_UID, GLOBAL_ROOT_GID,
+ current_cred(), KEY_POS_SEARCH |
+ KEY_USR_SEARCH | KEY_USR_READ | KEY_USR_VIEW,
+ KEY_ALLOC_NOT_IN_QUOTA, NULL, NULL);
+ if (IS_ERR(keyring))
+ return PTR_ERR(keyring);
+
+ mk->mk_users = keyring;
+ return 0;
+}
+
+/*
+ * Find the current user's "key" in the master key's ->mk_users.
+ * Returns ERR_PTR(-ENOKEY) if not found.
+ */
+static struct key *find_master_key_user(struct fscrypt_master_key *mk)
+{
+ char description[FSCRYPT_MK_USER_DESCRIPTION_SIZE];
+
+ format_mk_user_description(description, mk->mk_spec.u.identifier);
+ return search_fscrypt_keyring(mk->mk_users, &key_type_fscrypt_user,
+ description);
+}
+
+/*
+ * Give the current user a "key" in ->mk_users. This charges the user's quota
+ * and marks the master key as added by the current user, so that it cannot be
+ * removed by another user with the key. Either the master key's key->sem must
+ * be held for write, or the master key must be still undergoing initialization.
+ */
+static int add_master_key_user(struct fscrypt_master_key *mk)
+{
+ char description[FSCRYPT_MK_USER_DESCRIPTION_SIZE];
+ struct key *mk_user;
+ int err;
+
+ format_mk_user_description(description, mk->mk_spec.u.identifier);
+ mk_user = key_alloc(&key_type_fscrypt_user, description,
+ current_fsuid(), current_gid(), current_cred(),
+ KEY_POS_SEARCH | KEY_USR_VIEW, 0, NULL);
+ if (IS_ERR(mk_user))
+ return PTR_ERR(mk_user);
+
+ err = key_instantiate_and_link(mk_user, NULL, 0, mk->mk_users, NULL);
+ key_put(mk_user);
+ return err;
+}
+
+/*
+ * Remove the current user's "key" from ->mk_users.
+ * The master key's key->sem must be held for write.
+ *
+ * Returns 0 if removed, -ENOKEY if not found, or another -errno code.
+ */
+static int remove_master_key_user(struct fscrypt_master_key *mk)
+{
+ struct key *mk_user;
+ int err;
+
+ mk_user = find_master_key_user(mk);
+ if (IS_ERR(mk_user))
+ return PTR_ERR(mk_user);
+ err = key_unlink(mk->mk_users, mk_user);
+ key_put(mk_user);
+ return err;
+}
+
+/*
+ * Allocate a new fscrypt_master_key which contains the given secret, set it as
+ * the payload of a new 'struct key' of type fscrypt, and link the 'struct key'
+ * into the given keyring. Synchronized by fscrypt_add_key_mutex.
+ */
+static int add_new_master_key(struct fscrypt_master_key_secret *secret,
+ const struct fscrypt_key_specifier *mk_spec,
+ struct key *keyring)
+{
+ struct fscrypt_master_key *mk;
+ char description[FSCRYPT_MK_DESCRIPTION_SIZE];
+ struct key *key;
+ int err;
+
+ mk = kzalloc(sizeof(*mk), GFP_KERNEL);
+ if (!mk)
+ return -ENOMEM;
+
+ mk->mk_spec = *mk_spec;
+
+ move_master_key_secret(&mk->mk_secret, secret);
+ init_rwsem(&mk->mk_secret_sem);
+
+ refcount_set(&mk->mk_refcount, 1); /* secret is present */
+ INIT_LIST_HEAD(&mk->mk_decrypted_inodes);
+ spin_lock_init(&mk->mk_decrypted_inodes_lock);
+
+ if (mk_spec->type == FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER) {
+ err = allocate_master_key_users_keyring(mk);
+ if (err)
+ goto out_free_mk;
+ err = add_master_key_user(mk);
+ if (err)
+ goto out_free_mk;
+ }
+
+ /*
+ * Note that we don't charge this key to anyone's quota, since when
+ * ->mk_users is in use those keys are charged instead, and otherwise
+ * (when ->mk_users isn't in use) only root can add these keys.
+ */
+ format_mk_description(description, mk_spec);
+ key = key_alloc(&key_type_fscrypt, description,
+ GLOBAL_ROOT_UID, GLOBAL_ROOT_GID, current_cred(),
+ KEY_POS_SEARCH | KEY_USR_SEARCH | KEY_USR_VIEW,
+ KEY_ALLOC_NOT_IN_QUOTA, NULL);
+ if (IS_ERR(key)) {
+ err = PTR_ERR(key);
+ goto out_free_mk;
+ }
+ err = key_instantiate_and_link(key, mk, sizeof(*mk), keyring, NULL);
+ key_put(key);
+ if (err)
+ goto out_free_mk;
+
+ return 0;
+
+out_free_mk:
+ free_master_key(mk);
+ return err;
+}
+
+#define KEY_DEAD 1
+
+static int add_existing_master_key(struct fscrypt_master_key *mk,
+ struct fscrypt_master_key_secret *secret)
+{
+ struct key *mk_user;
+ bool rekey;
+ int err;
+
+ /*
+ * If the current user is already in ->mk_users, then there's nothing to
+ * do. (Not applicable for v1 policy keys, which have NULL ->mk_users.)
+ */
+ if (mk->mk_users) {
+ mk_user = find_master_key_user(mk);
+ if (mk_user != ERR_PTR(-ENOKEY)) {
+ if (IS_ERR(mk_user))
+ return PTR_ERR(mk_user);
+ key_put(mk_user);
+ return 0;
+ }
+ }
+
+ /* If we'll be re-adding ->mk_secret, try to take the reference. */
+ rekey = !is_master_key_secret_present(&mk->mk_secret);
+ if (rekey && !refcount_inc_not_zero(&mk->mk_refcount))
+ return KEY_DEAD;
+
+ /* Add the current user to ->mk_users, if applicable. */
+ if (mk->mk_users) {
+ err = add_master_key_user(mk);
+ if (err) {
+ if (rekey && refcount_dec_and_test(&mk->mk_refcount))
+ return KEY_DEAD;
+ return err;
+ }
+ }
+
+ /* Re-add the secret if needed. */
+ if (rekey) {
+ down_write(&mk->mk_secret_sem);
+ move_master_key_secret(&mk->mk_secret, secret);
+ up_write(&mk->mk_secret_sem);
+ }
+ return 0;
+}
+
+static int add_master_key(struct super_block *sb,
+ struct fscrypt_master_key_secret *secret,
+ const struct fscrypt_key_specifier *mk_spec)
+{
+ static DEFINE_MUTEX(fscrypt_add_key_mutex);
+ struct key *key;
+ int err;
+
+ mutex_lock(&fscrypt_add_key_mutex); /* serialize find + link */
+retry:
+ key = fscrypt_find_master_key(sb, mk_spec);
+ if (IS_ERR(key)) {
+ err = PTR_ERR(key);
+ if (err != -ENOKEY)
+ goto out_unlock;
+ /* Didn't find the key in ->s_master_keys. Add it. */
+ err = allocate_filesystem_keyring(sb);
+ if (err)
+ goto out_unlock;
+ err = add_new_master_key(secret, mk_spec, sb->s_master_keys);
+ } else {
+ /*
+ * Found the key in ->s_master_keys. Re-add the secret if
+ * needed, and add the user to ->mk_users if needed.
+ */
+ down_write(&key->sem);
+ err = add_existing_master_key(key->payload.data[0], secret);
+ up_write(&key->sem);
+ if (err == KEY_DEAD) {
+ /* Key being removed or needs to be removed */
+ key_invalidate(key);
+ key_put(key);
+ goto retry;
+ }
+ key_put(key);
+ }
+out_unlock:
+ mutex_unlock(&fscrypt_add_key_mutex);
+ return err;
+}
+
+static int fscrypt_provisioning_key_preparse(struct key_preparsed_payload *prep)
+{
+ const struct fscrypt_provisioning_key_payload *payload = prep->data;
+
+ BUILD_BUG_ON(FSCRYPT_MAX_HW_WRAPPED_KEY_SIZE < FSCRYPT_MAX_KEY_SIZE);
+
+ if (prep->datalen < sizeof(*payload) + FSCRYPT_MIN_KEY_SIZE ||
+ prep->datalen > sizeof(*payload) + FSCRYPT_MAX_HW_WRAPPED_KEY_SIZE)
+ return -EINVAL;
+
+ if (payload->type != FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR &&
+ payload->type != FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER)
+ return -EINVAL;
+
+ if (payload->__reserved)
+ return -EINVAL;
+
+ prep->payload.data[0] = kmemdup(payload, prep->datalen, GFP_KERNEL);
+ if (!prep->payload.data[0])
+ return -ENOMEM;
+
+ prep->quotalen = prep->datalen;
+ return 0;
+}
+
+static void fscrypt_provisioning_key_free_preparse(
+ struct key_preparsed_payload *prep)
+{
+ kzfree(prep->payload.data[0]);
+}
+
+static void fscrypt_provisioning_key_describe(const struct key *key,
+ struct seq_file *m)
+{
+ seq_puts(m, key->description);
+ if (key_is_positive(key)) {
+ const struct fscrypt_provisioning_key_payload *payload =
+ key->payload.data[0];
+
+ seq_printf(m, ": %u [%u]", key->datalen, payload->type);
+ }
+}
+
+static void fscrypt_provisioning_key_destroy(struct key *key)
+{
+ kzfree(key->payload.data[0]);
+}
+
+static struct key_type key_type_fscrypt_provisioning = {
+ .name = "fscrypt-provisioning",
+ .preparse = fscrypt_provisioning_key_preparse,
+ .free_preparse = fscrypt_provisioning_key_free_preparse,
+ .instantiate = generic_key_instantiate,
+ .describe = fscrypt_provisioning_key_describe,
+ .destroy = fscrypt_provisioning_key_destroy,
+};
+
+/*
+ * Retrieve the raw key from the Linux keyring key specified by 'key_id', and
+ * store it into 'secret'.
+ *
+ * The key must be of type "fscrypt-provisioning" and must have the field
+ * fscrypt_provisioning_key_payload::type set to 'type', indicating that it's
+ * only usable with fscrypt with the particular KDF version identified by
+ * 'type'. We don't use the "logon" key type because there's no way to
+ * completely restrict the use of such keys; they can be used by any kernel API
+ * that accepts "logon" keys and doesn't require a specific service prefix.
+ *
+ * The ability to specify the key via Linux keyring key is intended for cases
+ * where userspace needs to re-add keys after the filesystem is unmounted and
+ * re-mounted. Most users should just provide the raw key directly instead.
+ */
+static int get_keyring_key(u32 key_id, u32 type,
+ struct fscrypt_master_key_secret *secret)
+{
+ key_ref_t ref;
+ struct key *key;
+ const struct fscrypt_provisioning_key_payload *payload;
+ int err;
+
+ ref = lookup_user_key(key_id, 0, KEY_NEED_SEARCH);
+ if (IS_ERR(ref))
+ return PTR_ERR(ref);
+ key = key_ref_to_ptr(ref);
+
+ if (key->type != &key_type_fscrypt_provisioning)
+ goto bad_key;
+ payload = key->payload.data[0];
+
+ /* Don't allow fscrypt v1 keys to be used as v2 keys and vice versa. */
+ if (payload->type != type)
+ goto bad_key;
+
+ secret->size = key->datalen - sizeof(*payload);
+ memcpy(secret->raw, payload->raw, secret->size);
+ err = 0;
+ goto out_put;
+
+bad_key:
+ err = -EKEYREJECTED;
+out_put:
+ key_ref_put(ref);
+ return err;
+}
+
+/* Size of software "secret" derived from hardware-wrapped key */
+#define RAW_SECRET_SIZE 32
+
+/*
+ * Add a master encryption key to the filesystem, causing all files which were
+ * encrypted with it to appear "unlocked" (decrypted) when accessed.
+ *
+ * When adding a key for use by v1 encryption policies, this ioctl is
+ * privileged, and userspace must provide the 'key_descriptor'.
+ *
+ * When adding a key for use by v2+ encryption policies, this ioctl is
+ * unprivileged. This is needed, in general, to allow non-root users to use
+ * encryption without encountering the visibility problems of process-subscribed
+ * keyrings and the inability to properly remove keys. This works by having
+ * each key identified by its cryptographically secure hash --- the
+ * 'key_identifier'. The cryptographic hash ensures that a malicious user
+ * cannot add the wrong key for a given identifier. Furthermore, each added key
+ * is charged to the appropriate user's quota for the keyrings service, which
+ * prevents a malicious user from adding too many keys. Finally, we forbid a
+ * user from removing a key while other users have added it too, which prevents
+ * a user who knows another user's key from causing a denial-of-service by
+ * removing it at an inopportune time. (We tolerate that a user who knows a key
+ * can prevent other users from removing it.)
+ *
+ * For more details, see the "FS_IOC_ADD_ENCRYPTION_KEY" section of
+ * Documentation/filesystems/fscrypt.rst.
+ */
+int fscrypt_ioctl_add_key(struct file *filp, void __user *_uarg)
+{
+ struct super_block *sb = file_inode(filp)->i_sb;
+ struct fscrypt_add_key_arg __user *uarg = _uarg;
+ struct fscrypt_add_key_arg arg;
+ struct fscrypt_master_key_secret secret;
+ u8 _kdf_key[RAW_SECRET_SIZE];
+ u8 *kdf_key;
+ unsigned int kdf_key_size;
+ int err;
+
+ if (copy_from_user(&arg, uarg, sizeof(arg)))
+ return -EFAULT;
+
+ if (!valid_key_spec(&arg.key_spec))
+ return -EINVAL;
+
+ if (memchr_inv(arg.__reserved, 0, sizeof(arg.__reserved)))
+ return -EINVAL;
+
+ memset(&secret, 0, sizeof(secret));
+ if (arg.key_id) {
+ if (arg.raw_size != 0)
+ return -EINVAL;
+ err = get_keyring_key(arg.key_id, arg.key_spec.type, &secret);
+ if (err)
+ goto out_wipe_secret;
+ err = -EINVAL;
+ if (!(arg.__flags & __FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED) &&
+ secret.size > FSCRYPT_MAX_KEY_SIZE)
+ goto out_wipe_secret;
+ } else {
+ if (arg.raw_size < FSCRYPT_MIN_KEY_SIZE ||
+ arg.raw_size >
+ ((arg.__flags & __FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED) ?
+ FSCRYPT_MAX_HW_WRAPPED_KEY_SIZE : FSCRYPT_MAX_KEY_SIZE))
+ return -EINVAL;
+ secret.size = arg.raw_size;
+ err = -EFAULT;
+ if (copy_from_user(secret.raw, uarg->raw, secret.size))
+ goto out_wipe_secret;
+ }
+
+ switch (arg.key_spec.type) {
+ case FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR:
+ /*
+ * Only root can add keys that are identified by an arbitrary
+ * descriptor rather than by a cryptographic hash --- since
+ * otherwise a malicious user could add the wrong key.
+ */
+ err = -EACCES;
+ if (!capable(CAP_SYS_ADMIN))
+ goto out_wipe_secret;
+
+ err = -EINVAL;
+ if (arg.__flags & ~__FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED)
+ goto out_wipe_secret;
+ break;
+ case FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER:
+ err = -EINVAL;
+ if (arg.__flags & ~__FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED)
+ goto out_wipe_secret;
+ if (arg.__flags & __FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED) {
+ kdf_key = _kdf_key;
+ kdf_key_size = RAW_SECRET_SIZE;
+ err = fscrypt_derive_raw_secret(sb, secret.raw,
+ secret.size,
+ kdf_key, kdf_key_size);
+ if (err)
+ goto out_wipe_secret;
+ secret.is_hw_wrapped = true;
+ } else {
+ kdf_key = secret.raw;
+ kdf_key_size = secret.size;
+ }
+ err = fscrypt_init_hkdf(&secret.hkdf, kdf_key, kdf_key_size);
+ /*
+ * Now that the HKDF context is initialized, the raw HKDF
+ * key is no longer needed.
+ */
+ memzero_explicit(kdf_key, kdf_key_size);
+ if (err)
+ goto out_wipe_secret;
+
+ /* Calculate the key identifier and return it to userspace. */
+ err = fscrypt_hkdf_expand(&secret.hkdf,
+ HKDF_CONTEXT_KEY_IDENTIFIER,
+ NULL, 0, arg.key_spec.u.identifier,
+ FSCRYPT_KEY_IDENTIFIER_SIZE);
+ if (err)
+ goto out_wipe_secret;
+ err = -EFAULT;
+ if (copy_to_user(uarg->key_spec.u.identifier,
+ arg.key_spec.u.identifier,
+ FSCRYPT_KEY_IDENTIFIER_SIZE))
+ goto out_wipe_secret;
+ break;
+ default:
+ WARN_ON(1);
+ err = -EINVAL;
+ goto out_wipe_secret;
+ }
+
+ err = add_master_key(sb, &secret, &arg.key_spec);
+out_wipe_secret:
+ wipe_master_key_secret(&secret);
+ return err;
+}
+EXPORT_SYMBOL_GPL(fscrypt_ioctl_add_key);
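+
+/*
+ * A minimal userspace sketch (illustrative only, not part of this patch) of
+ * adding a key for v2 policies, assuming the UAPI definitions from
+ * <linux/fscrypt.h> and 'fd' being any open fd on the target filesystem:
+ *
+ *	struct fscrypt_add_key_arg *arg = calloc(1, sizeof(*arg) + raw_size);
+ *
+ *	arg->key_spec.type = FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER;
+ *	arg->raw_size = raw_size;
+ *	memcpy(arg->raw, raw_key, raw_size);
+ *	err = ioctl(fd, FS_IOC_ADD_ENCRYPTION_KEY, arg);
+ *	(on success the kernel has filled in arg->key_spec.u.identifier)
+ */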
+
+/*
+ * Verify that the current user has added a master key with the given identifier
+ * (returns -ENOKEY if not). This is needed to prevent a user from encrypting
+ * their files using some other user's key which they don't actually know.
+ * Cryptographically this isn't much of a problem, but the semantics of this
+ * would be a bit weird, so it's best to just forbid it.
+ *
+ * The system administrator (CAP_FOWNER) can override this, which should be
+ * enough for any use cases where encryption policies are being set using keys
+ * that were chosen ahead of time but aren't available at the moment.
+ *
+ * Note that the key may have already been removed by the time this returns, but
+ * that's okay; we just care whether the key was there at some point.
+ *
+ * Return: 0 if the key is added, -ENOKEY if it isn't, or another -errno code
+ */
+int fscrypt_verify_key_added(struct super_block *sb,
+ const u8 identifier[FSCRYPT_KEY_IDENTIFIER_SIZE])
+{
+ struct fscrypt_key_specifier mk_spec;
+ struct key *key, *mk_user;
+ struct fscrypt_master_key *mk;
+ int err;
+
+ mk_spec.type = FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER;
+ memcpy(mk_spec.u.identifier, identifier, FSCRYPT_KEY_IDENTIFIER_SIZE);
+
+ key = fscrypt_find_master_key(sb, &mk_spec);
+ if (IS_ERR(key)) {
+ err = PTR_ERR(key);
+ goto out;
+ }
+ mk = key->payload.data[0];
+ mk_user = find_master_key_user(mk);
+ if (IS_ERR(mk_user)) {
+ err = PTR_ERR(mk_user);
+ } else {
+ key_put(mk_user);
+ err = 0;
+ }
+ key_put(key);
+out:
+ if (err == -ENOKEY && capable(CAP_FOWNER))
+ err = 0;
+ return err;
+}
+
+/*
+ * Try to evict the inode's dentries from the dentry cache. If the inode is a
+ * directory, then it can have at most one dentry; however, that dentry may be
+ * pinned by child dentries, so first try to evict the children too.
+ */
+static void shrink_dcache_inode(struct inode *inode)
+{
+ struct dentry *dentry;
+
+ if (S_ISDIR(inode->i_mode)) {
+ dentry = d_find_any_alias(inode);
+ if (dentry) {
+ shrink_dcache_parent(dentry);
+ dput(dentry);
+ }
+ }
+ d_prune_aliases(inode);
+}
+
+static void evict_dentries_for_decrypted_inodes(struct fscrypt_master_key *mk)
+{
+ struct fscrypt_info *ci;
+ struct inode *inode;
+ struct inode *toput_inode = NULL;
+
+ spin_lock(&mk->mk_decrypted_inodes_lock);
+
+ list_for_each_entry(ci, &mk->mk_decrypted_inodes, ci_master_key_link) {
+ inode = ci->ci_inode;
+ spin_lock(&inode->i_lock);
+ if (inode->i_state & (I_FREEING | I_WILL_FREE | I_NEW)) {
+ spin_unlock(&inode->i_lock);
+ continue;
+ }
+ __iget(inode);
+ spin_unlock(&inode->i_lock);
+ spin_unlock(&mk->mk_decrypted_inodes_lock);
+
+ shrink_dcache_inode(inode);
+ iput(toput_inode);
+ toput_inode = inode;
+
+ spin_lock(&mk->mk_decrypted_inodes_lock);
+ }
+
+ spin_unlock(&mk->mk_decrypted_inodes_lock);
+ iput(toput_inode);
+}
+
+static int check_for_busy_inodes(struct super_block *sb,
+ struct fscrypt_master_key *mk)
+{
+ struct list_head *pos;
+ size_t busy_count = 0;
+ unsigned long ino;
+ struct dentry *dentry;
+ char _path[256];
+ char *path = NULL;
+
+ spin_lock(&mk->mk_decrypted_inodes_lock);
+
+ list_for_each(pos, &mk->mk_decrypted_inodes)
+ busy_count++;
+
+ if (busy_count == 0) {
+ spin_unlock(&mk->mk_decrypted_inodes_lock);
+ return 0;
+ }
+
+ {
+ /* select an example file to show for debugging purposes */
+ struct inode *inode =
+ list_first_entry(&mk->mk_decrypted_inodes,
+ struct fscrypt_info,
+ ci_master_key_link)->ci_inode;
+ ino = inode->i_ino;
+ dentry = d_find_alias(inode);
+ }
+ spin_unlock(&mk->mk_decrypted_inodes_lock);
+
+ if (dentry) {
+ path = dentry_path(dentry, _path, sizeof(_path));
+ dput(dentry);
+ }
+ if (IS_ERR_OR_NULL(path))
+ path = "(unknown)";
+
+ fscrypt_warn(NULL,
+ "%s: %zu inode(s) still busy after removing key with %s %*phN, including ino %lu (%s)",
+ sb->s_id, busy_count, master_key_spec_type(&mk->mk_spec),
+ master_key_spec_len(&mk->mk_spec), (u8 *)&mk->mk_spec.u,
+ ino, path);
+ return -EBUSY;
+}
+
+static BLOCKING_NOTIFIER_HEAD(fscrypt_key_removal_notifiers);
+
+/*
+ * Register a function to be executed when the FS_IOC_REMOVE_ENCRYPTION_KEY
+ * ioctl has removed a key and is about to try evicting inodes.
+ */
+int fscrypt_register_key_removal_notifier(struct notifier_block *nb)
+{
+ return blocking_notifier_chain_register(&fscrypt_key_removal_notifiers,
+ nb);
+}
+EXPORT_SYMBOL_GPL(fscrypt_register_key_removal_notifier);
+
+int fscrypt_unregister_key_removal_notifier(struct notifier_block *nb)
+{
+ return blocking_notifier_chain_unregister(&fscrypt_key_removal_notifiers,
+ nb);
+}
+EXPORT_SYMBOL_GPL(fscrypt_unregister_key_removal_notifier);
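+
+/*
+ * A minimal sketch (illustrative only, not part of this patch) of how another
+ * kernel subsystem could hook key removal; my_flush_caches() is a hypothetical
+ * callback:
+ *
+ *	static int my_key_removal_cb(struct notifier_block *nb,
+ *				     unsigned long action, void *data)
+ *	{
+ *		my_flush_caches();
+ *		return NOTIFY_OK;
+ *	}
+ *
+ *	static struct notifier_block my_nb = {
+ *		.notifier_call = my_key_removal_cb,
+ *	};
+ *
+ *	fscrypt_register_key_removal_notifier(&my_nb);
+ *	...
+ *	fscrypt_unregister_key_removal_notifier(&my_nb);
+ */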
+
+static int try_to_lock_encrypted_files(struct super_block *sb,
+ struct fscrypt_master_key *mk)
+{
+ int err1;
+ int err2;
+
+ blocking_notifier_call_chain(&fscrypt_key_removal_notifiers, 0, NULL);
+
+ /*
+ * An inode can't be evicted while it is dirty or has dirty pages.
+ * Thus, we first have to clean the inodes in ->mk_decrypted_inodes.
+ *
+ * Just do it the easy way: call sync_filesystem(). It's overkill, but
+ * it works, and it's more important to minimize the amount of caches we
+ * drop than the amount of data we sync. Also, unprivileged users can
+ * already call sync_filesystem() via sys_syncfs() or sys_sync().
+ */
+ down_read(&sb->s_umount);
+ err1 = sync_filesystem(sb);
+ up_read(&sb->s_umount);
+ /* If a sync error occurs, still try to evict as much as possible. */
+
+ /*
+ * Inodes are pinned by their dentries, so we have to evict their
+ * dentries. shrink_dcache_sb() would suffice, but would be overkill
+ * and inappropriate for use by unprivileged users. So instead go
+ * through the inodes' alias lists and try to evict each dentry.
+ */
+ evict_dentries_for_decrypted_inodes(mk);
+
+ /*
+ * evict_dentries_for_decrypted_inodes() already iput() each inode in
+ * the list; any inodes for which that dropped the last reference will
+ * have been evicted due to fscrypt_drop_inode() detecting the key
+ * removal and telling the VFS to evict the inode. So to finish, we
+ * just need to check whether any inodes couldn't be evicted.
+ */
+ err2 = check_for_busy_inodes(sb, mk);
+
+ return err1 ?: err2;
+}
+
+/*
+ * Try to remove an fscrypt master encryption key.
+ *
+ * FS_IOC_REMOVE_ENCRYPTION_KEY (all_users=false) removes the current user's
+ * claim to the key, then removes the key itself if no other users have claims.
+ * FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS (all_users=true) always removes the
+ * key itself.
+ *
+ * To "remove the key itself", first we wipe the actual master key secret, so
+ * that no more inodes can be unlocked with it. Then we try to evict all cached
+ * inodes that had been unlocked with the key.
+ *
+ * If all inodes were evicted, then we unlink the fscrypt_master_key from the
+ * keyring. Otherwise it remains in the keyring in the "incompletely removed"
+ * state (without the actual secret key) where it tracks the list of remaining
+ * inodes. Userspace can execute the ioctl again later to retry eviction, or
+ * alternatively can re-add the secret key again.
+ *
+ * For more details, see the "Removing keys" section of
+ * Documentation/filesystems/fscrypt.rst.
+ */
+static int do_remove_key(struct file *filp, void __user *_uarg, bool all_users)
+{
+ struct super_block *sb = file_inode(filp)->i_sb;
+ struct fscrypt_remove_key_arg __user *uarg = _uarg;
+ struct fscrypt_remove_key_arg arg;
+ struct key *key;
+ struct fscrypt_master_key *mk;
+ u32 status_flags = 0;
+ int err;
+ bool dead;
+
+ if (copy_from_user(&arg, uarg, sizeof(arg)))
+ return -EFAULT;
+
+ if (!valid_key_spec(&arg.key_spec))
+ return -EINVAL;
+
+ if (memchr_inv(arg.__reserved, 0, sizeof(arg.__reserved)))
+ return -EINVAL;
+
+ /*
+ * Only root can add and remove keys that are identified by an arbitrary
+ * descriptor rather than by a cryptographic hash.
+ */
+ if (arg.key_spec.type == FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR &&
+ !capable(CAP_SYS_ADMIN))
+ return -EACCES;
+
+ /* Find the key being removed. */
+ key = fscrypt_find_master_key(sb, &arg.key_spec);
+ if (IS_ERR(key))
+ return PTR_ERR(key);
+ mk = key->payload.data[0];
+
+ down_write(&key->sem);
+
+ /* If relevant, remove current user's (or all users) claim to the key */
+ if (mk->mk_users && mk->mk_users->keys.nr_leaves_on_tree != 0) {
+ if (all_users)
+ err = keyring_clear(mk->mk_users);
+ else
+ err = remove_master_key_user(mk);
+ if (err) {
+ up_write(&key->sem);
+ goto out_put_key;
+ }
+ if (mk->mk_users->keys.nr_leaves_on_tree != 0) {
+ /*
+ * Other users have also added the key. We removed
+ * the current user's claim to the key, but we still
+ * can't remove the key itself.
+ */
+ status_flags |=
+ FSCRYPT_KEY_REMOVAL_STATUS_FLAG_OTHER_USERS;
+ err = 0;
+ up_write(&key->sem);
+ goto out_put_key;
+ }
+ }
+
+ /* No user claims remaining. Go ahead and wipe the secret. */
+ dead = false;
+ if (is_master_key_secret_present(&mk->mk_secret)) {
+ down_write(&mk->mk_secret_sem);
+ wipe_master_key_secret(&mk->mk_secret);
+ dead = refcount_dec_and_test(&mk->mk_refcount);
+ up_write(&mk->mk_secret_sem);
+ }
+ up_write(&key->sem);
+ if (dead) {
+ /*
+ * No inodes reference the key, and we wiped the secret, so the
+ * key object is free to be removed from the keyring.
+ */
+ key_invalidate(key);
+ err = 0;
+ } else {
+ /* Some inodes still reference this key; try to evict them. */
+ err = try_to_lock_encrypted_files(sb, mk);
+ if (err == -EBUSY) {
+ status_flags |=
+ FSCRYPT_KEY_REMOVAL_STATUS_FLAG_FILES_BUSY;
+ err = 0;
+ }
+ }
+ /*
+ * We return 0 if we successfully did something: removed a claim to the
+ * key, wiped the secret, or tried locking the files again. Users need
+ * to check the informational status flags if they care whether the key
+ * has been fully removed including all files locked.
+ */
+out_put_key:
+ key_put(key);
+ if (err == 0)
+ err = put_user(status_flags, &uarg->removal_status_flags);
+ return err;
+}
+
+int fscrypt_ioctl_remove_key(struct file *filp, void __user *uarg)
+{
+ return do_remove_key(filp, uarg, false);
+}
+EXPORT_SYMBOL_GPL(fscrypt_ioctl_remove_key);
+
+int fscrypt_ioctl_remove_key_all_users(struct file *filp, void __user *uarg)
+{
+ if (!capable(CAP_SYS_ADMIN))
+ return -EACCES;
+ return do_remove_key(filp, uarg, true);
+}
+EXPORT_SYMBOL_GPL(fscrypt_ioctl_remove_key_all_users);
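+
+/*
+ * A minimal userspace sketch (illustrative only, not part of this patch),
+ * assuming the UAPI definitions from <linux/fscrypt.h>:
+ *
+ *	struct fscrypt_remove_key_arg arg = { 0 };
+ *
+ *	arg.key_spec.type = FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER;
+ *	memcpy(arg.key_spec.u.identifier, identifier,
+ *	       FSCRYPT_KEY_IDENTIFIER_SIZE);
+ *	if (ioctl(fd, FS_IOC_REMOVE_ENCRYPTION_KEY, &arg) == 0 &&
+ *	    (arg.removal_status_flags &
+ *	     FSCRYPT_KEY_REMOVAL_STATUS_FLAG_FILES_BUSY))
+ *		(some files are still in use; retry the ioctl later)
+ */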
+
+/*
+ * Retrieve the status of an fscrypt master encryption key.
+ *
+ * We set ->status to indicate whether the key is absent, present, or
+ * incompletely removed. "Incompletely removed" means that the master key
+ * secret has been removed, but some files which had been unlocked with it are
+ * still in use. This field allows applications to easily determine the state
+ * of an encrypted directory without using a hack such as trying to open a
+ * regular file in it (which can confuse the "incompletely removed" state with
+ * absent or present).
+ *
+ * In addition, for v2 policy keys we allow applications to determine, via
+ * ->status_flags and ->user_count, whether the key has been added by the
+ * current user, by other users, or by both. Most applications should not need
+ * this, since ordinarily only one user should know a given key. However, if a
+ * secret key is shared by multiple users, applications may wish to add an
+ * already-present key to prevent other users from removing it. This ioctl can
+ * be used to check whether that really is the case before the work is done to
+ * add the key --- which might e.g. require prompting the user for a passphrase.
+ *
+ * For more details, see the "FS_IOC_GET_ENCRYPTION_KEY_STATUS" section of
+ * Documentation/filesystems/fscrypt.rst.
+ */
+int fscrypt_ioctl_get_key_status(struct file *filp, void __user *uarg)
+{
+ struct super_block *sb = file_inode(filp)->i_sb;
+ struct fscrypt_get_key_status_arg arg;
+ struct key *key;
+ struct fscrypt_master_key *mk;
+ int err;
+
+ if (copy_from_user(&arg, uarg, sizeof(arg)))
+ return -EFAULT;
+
+ if (!valid_key_spec(&arg.key_spec))
+ return -EINVAL;
+
+ if (memchr_inv(arg.__reserved, 0, sizeof(arg.__reserved)))
+ return -EINVAL;
+
+ arg.status_flags = 0;
+ arg.user_count = 0;
+ memset(arg.__out_reserved, 0, sizeof(arg.__out_reserved));
+
+ key = fscrypt_find_master_key(sb, &arg.key_spec);
+ if (IS_ERR(key)) {
+ if (key != ERR_PTR(-ENOKEY))
+ return PTR_ERR(key);
+ arg.status = FSCRYPT_KEY_STATUS_ABSENT;
+ err = 0;
+ goto out;
+ }
+ mk = key->payload.data[0];
+ down_read(&key->sem);
+
+ if (!is_master_key_secret_present(&mk->mk_secret)) {
+ arg.status = FSCRYPT_KEY_STATUS_INCOMPLETELY_REMOVED;
+ err = 0;
+ goto out_release_key;
+ }
+
+ arg.status = FSCRYPT_KEY_STATUS_PRESENT;
+ if (mk->mk_users) {
+ struct key *mk_user;
+
+ arg.user_count = mk->mk_users->keys.nr_leaves_on_tree;
+ mk_user = find_master_key_user(mk);
+ if (!IS_ERR(mk_user)) {
+ arg.status_flags |=
+ FSCRYPT_KEY_STATUS_FLAG_ADDED_BY_SELF;
+ key_put(mk_user);
+ } else if (mk_user != ERR_PTR(-ENOKEY)) {
+ err = PTR_ERR(mk_user);
+ goto out_release_key;
+ }
+ }
+ err = 0;
+out_release_key:
+ up_read(&key->sem);
+ key_put(key);
+out:
+ if (!err && copy_to_user(uarg, &arg, sizeof(arg)))
+ err = -EFAULT;
+ return err;
+}
+EXPORT_SYMBOL_GPL(fscrypt_ioctl_get_key_status);
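+
+/*
+ * A minimal userspace sketch (illustrative only, not part of this patch),
+ * assuming the UAPI definitions from <linux/fscrypt.h>:
+ *
+ *	struct fscrypt_get_key_status_arg arg = { 0 };
+ *
+ *	arg.key_spec.type = FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER;
+ *	memcpy(arg.key_spec.u.identifier, identifier,
+ *	       FSCRYPT_KEY_IDENTIFIER_SIZE);
+ *	if (ioctl(fd, FS_IOC_GET_ENCRYPTION_KEY_STATUS, &arg) == 0)
+ *		switch (arg.status) {
+ *		case FSCRYPT_KEY_STATUS_ABSENT:                ...
+ *		case FSCRYPT_KEY_STATUS_PRESENT:               ...
+ *		case FSCRYPT_KEY_STATUS_INCOMPLETELY_REMOVED:  ...
+ *		}
+ */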
+
+int __init fscrypt_init_keyring(void)
+{
+ int err;
+
+ err = register_key_type(&key_type_fscrypt);
+ if (err)
+ return err;
+
+ err = register_key_type(&key_type_fscrypt_user);
+ if (err)
+ goto err_unregister_fscrypt;
+
+ err = register_key_type(&key_type_fscrypt_provisioning);
+ if (err)
+ goto err_unregister_fscrypt_user;
+
+ return 0;
+
+err_unregister_fscrypt_user:
+ unregister_key_type(&key_type_fscrypt_user);
+err_unregister_fscrypt:
+ unregister_key_type(&key_type_fscrypt);
+ return err;
+}
diff --git a/fs/crypto/keysetup.c b/fs/crypto/keysetup.c
new file mode 100644
index 0000000..5414e27
--- /dev/null
+++ b/fs/crypto/keysetup.c
@@ -0,0 +1,605 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Key setup facility for FS encryption support.
+ *
+ * Copyright (C) 2015, Google, Inc.
+ *
+ * Originally written by Michael Halcrow, Ildar Muslukhov, and Uday Savagaonkar.
+ * Heavily modified since then.
+ */
+
+#include <crypto/skcipher.h>
+#include <linux/key.h>
+
+#include "fscrypt_private.h"
+
+struct fscrypt_mode fscrypt_modes[] = {
+ [FSCRYPT_MODE_AES_256_XTS] = {
+ .friendly_name = "AES-256-XTS",
+ .cipher_str = "xts(aes)",
+ .keysize = 64,
+ .ivsize = 16,
+ .blk_crypto_mode = BLK_ENCRYPTION_MODE_AES_256_XTS,
+ },
+ [FSCRYPT_MODE_AES_256_CTS] = {
+ .friendly_name = "AES-256-CTS-CBC",
+ .cipher_str = "cts(cbc(aes))",
+ .keysize = 32,
+ .ivsize = 16,
+ },
+ [FSCRYPT_MODE_AES_128_CBC] = {
+ .friendly_name = "AES-128-CBC-ESSIV",
+ .cipher_str = "essiv(cbc(aes),sha256)",
+ .keysize = 16,
+ .ivsize = 16,
+ .blk_crypto_mode = BLK_ENCRYPTION_MODE_AES_128_CBC_ESSIV,
+ },
+ [FSCRYPT_MODE_AES_128_CTS] = {
+ .friendly_name = "AES-128-CTS-CBC",
+ .cipher_str = "cts(cbc(aes))",
+ .keysize = 16,
+ .ivsize = 16,
+ },
+ [FSCRYPT_MODE_ADIANTUM] = {
+ .friendly_name = "Adiantum",
+ .cipher_str = "adiantum(xchacha12,aes)",
+ .keysize = 32,
+ .ivsize = 32,
+ .blk_crypto_mode = BLK_ENCRYPTION_MODE_ADIANTUM,
+ },
+ [FSCRYPT_MODE_PRIVATE] = {
+ .friendly_name = "ice",
+ .cipher_str = "xts(aes)",
+ .keysize = 64,
+ .ivsize = 16,
+ .blk_crypto_mode = BLK_ENCRYPTION_MODE_AES_256_XTS,
+ },
+};
+
+static struct fscrypt_mode *
+select_encryption_mode(const union fscrypt_policy *policy,
+ const struct inode *inode)
+{
+ if (S_ISREG(inode->i_mode))
+ return &fscrypt_modes[fscrypt_policy_contents_mode(policy)];
+
+ if (S_ISDIR(inode->i_mode) || S_ISLNK(inode->i_mode))
+ return &fscrypt_modes[fscrypt_policy_fnames_mode(policy)];
+
+ WARN_ONCE(1, "fscrypt: filesystem tried to load encryption info for inode %lu, which is not encryptable (file type %d)\n",
+ inode->i_ino, (inode->i_mode & S_IFMT));
+ return ERR_PTR(-EINVAL);
+}
+
+/* Create a symmetric cipher object for the given encryption mode and key */
+static struct crypto_skcipher *
+fscrypt_allocate_skcipher(struct fscrypt_mode *mode, const u8 *raw_key,
+ const struct inode *inode)
+{
+ struct crypto_skcipher *tfm;
+ int err;
+
+ tfm = crypto_alloc_skcipher(mode->cipher_str, 0, 0);
+ if (IS_ERR(tfm)) {
+ if (PTR_ERR(tfm) == -ENOENT) {
+ fscrypt_warn(inode,
+ "Missing crypto API support for %s (API name: \"%s\")",
+ mode->friendly_name, mode->cipher_str);
+ return ERR_PTR(-ENOPKG);
+ }
+ fscrypt_err(inode, "Error allocating '%s' transform: %ld",
+ mode->cipher_str, PTR_ERR(tfm));
+ return tfm;
+ }
+ if (unlikely(!mode->logged_impl_name)) {
+ /*
+ * fscrypt performance can vary greatly depending on which
+ * crypto algorithm implementation is used. Help people debug
+ * performance problems by logging the ->cra_driver_name the
+ * first time a mode is used. Note that multiple threads can
+ * race here, but it doesn't really matter.
+ */
+ mode->logged_impl_name = true;
+ pr_info("fscrypt: %s using implementation \"%s\"\n",
+ mode->friendly_name,
+ crypto_skcipher_alg(tfm)->base.cra_driver_name);
+ }
+ crypto_skcipher_set_flags(tfm, CRYPTO_TFM_REQ_WEAK_KEY);
+ err = crypto_skcipher_setkey(tfm, raw_key, mode->keysize);
+ if (err)
+ goto err_free_tfm;
+
+ return tfm;
+
+err_free_tfm:
+ crypto_free_skcipher(tfm);
+ return ERR_PTR(err);
+}
+
+/*
+ * Prepare the crypto transform object or blk-crypto key in @prep_key, given the
+ * raw key, encryption mode, and flag indicating which encryption implementation
+ * (fs-layer or blk-crypto) will be used.
+ */
+int fscrypt_prepare_key(struct fscrypt_prepared_key *prep_key,
+ const u8 *raw_key, unsigned int raw_key_size,
+ bool is_hw_wrapped, const struct fscrypt_info *ci)
+{
+ struct crypto_skcipher *tfm;
+
+ if (fscrypt_using_inline_encryption(ci))
+ return fscrypt_prepare_inline_crypt_key(prep_key,
+ raw_key, raw_key_size, is_hw_wrapped, ci);
+
+ if (WARN_ON(is_hw_wrapped || raw_key_size != ci->ci_mode->keysize))
+ return -EINVAL;
+
+ tfm = fscrypt_allocate_skcipher(ci->ci_mode, raw_key, ci->ci_inode);
+ if (IS_ERR(tfm))
+ return PTR_ERR(tfm);
+ /*
+ * Pairs with READ_ONCE() in fscrypt_is_key_prepared(). (Only matters
+ * for the per-mode keys, which are shared by multiple inodes.)
+ */
+ smp_store_release(&prep_key->tfm, tfm);
+ return 0;
+}
+
+/* Destroy a crypto transform object and/or blk-crypto key. */
+void fscrypt_destroy_prepared_key(struct fscrypt_prepared_key *prep_key)
+{
+ crypto_free_skcipher(prep_key->tfm);
+ fscrypt_destroy_inline_crypt_key(prep_key);
+}
+
+/* Given the per-file key, set up the file's crypto transform object */
+int fscrypt_set_derived_key(struct fscrypt_info *ci, const u8 *derived_key)
+{
+ ci->ci_owns_key = true;
+ return fscrypt_prepare_key(&ci->ci_key, derived_key,
+ ci->ci_mode->keysize, false, ci);
+}
+
+static int setup_per_mode_key(struct fscrypt_info *ci,
+ struct fscrypt_master_key *mk,
+ struct fscrypt_prepared_key *keys,
+ u8 hkdf_context, bool include_fs_uuid)
+{
+ static DEFINE_MUTEX(mode_key_setup_mutex);
+ const struct inode *inode = ci->ci_inode;
+ const struct super_block *sb = inode->i_sb;
+ struct fscrypt_mode *mode = ci->ci_mode;
+ const u8 mode_num = mode - fscrypt_modes;
+ struct fscrypt_prepared_key *prep_key;
+ u8 mode_key[FSCRYPT_MAX_KEY_SIZE];
+ u8 hkdf_info[sizeof(mode_num) + sizeof(sb->s_uuid)];
+ unsigned int hkdf_infolen = 0;
+ int err;
+
+ if (WARN_ON(mode_num > __FSCRYPT_MODE_MAX))
+ return -EINVAL;
+
+ prep_key = &keys[mode_num];
+ if (fscrypt_is_key_prepared(prep_key, ci)) {
+ ci->ci_key = *prep_key;
+ return 0;
+ }
+
+ mutex_lock(&mode_key_setup_mutex);
+
+ if (fscrypt_is_key_prepared(prep_key, ci))
+ goto done_unlock;
+
+ if (mk->mk_secret.is_hw_wrapped && S_ISREG(inode->i_mode)) {
+ int i;
+
+ if (!fscrypt_using_inline_encryption(ci)) {
+ fscrypt_warn(ci->ci_inode,
+ "Hardware-wrapped keys require inline encryption (-o inlinecrypt)");
+ err = -EINVAL;
+ goto out_unlock;
+ }
+ for (i = 0; i <= __FSCRYPT_MODE_MAX; i++) {
+ if (fscrypt_is_key_prepared(&keys[i], ci)) {
+ fscrypt_warn(ci->ci_inode,
+ "Each hardware-wrapped key can only be used with one encryption mode");
+ err = -EINVAL;
+ goto out_unlock;
+ }
+ }
+ err = fscrypt_prepare_key(prep_key, mk->mk_secret.raw,
+ mk->mk_secret.size, true, ci);
+ if (err)
+ goto out_unlock;
+ } else {
+ BUILD_BUG_ON(sizeof(mode_num) != 1);
+ BUILD_BUG_ON(sizeof(sb->s_uuid) != 16);
+ BUILD_BUG_ON(sizeof(hkdf_info) != 17);
+ hkdf_info[hkdf_infolen++] = mode_num;
+ if (include_fs_uuid) {
+ memcpy(&hkdf_info[hkdf_infolen], &sb->s_uuid,
+ sizeof(sb->s_uuid));
+ hkdf_infolen += sizeof(sb->s_uuid);
+ }
+ err = fscrypt_hkdf_expand(&mk->mk_secret.hkdf,
+ hkdf_context, hkdf_info, hkdf_infolen,
+ mode_key, mode->keysize);
+ if (err)
+ goto out_unlock;
+ err = fscrypt_prepare_key(prep_key, mode_key, mode->keysize,
+ false /*is_hw_wrapped*/, ci);
+ memzero_explicit(mode_key, mode->keysize);
+ if (err)
+ goto out_unlock;
+ }
+done_unlock:
+ ci->ci_key = *prep_key;
+ err = 0;
+out_unlock:
+ mutex_unlock(&mode_key_setup_mutex);
+ return err;
+}
+
+static int fscrypt_setup_v2_file_key(struct fscrypt_info *ci,
+ struct fscrypt_master_key *mk)
+{
+ u8 derived_key[FSCRYPT_MAX_KEY_SIZE];
+ int err;
+
+ if (mk->mk_secret.is_hw_wrapped &&
+ !(ci->ci_policy.v2.flags & FSCRYPT_POLICY_FLAG_IV_INO_LBLK_64)) {
+ fscrypt_warn(ci->ci_inode,
+ "Hardware-wrapped keys are only supported with IV_INO_LBLK_64 policies");
+ return -EINVAL;
+ }
+
+ if (ci->ci_policy.v2.flags & FSCRYPT_POLICY_FLAG_DIRECT_KEY) {
+ /*
+ * DIRECT_KEY: instead of deriving per-file keys, the per-file
+ * nonce will be included in all the IVs. But unlike v1
+ * policies, for v2 policies in this case we don't encrypt with
+ * the master key directly but rather derive a per-mode key.
+ * This ensures that the master key is consistently used only
+ * for HKDF, avoiding key reuse issues.
+ */
+ if (!fscrypt_mode_supports_direct_key(ci->ci_mode)) {
+ fscrypt_warn(ci->ci_inode,
+ "Direct key flag not allowed with %s",
+ ci->ci_mode->friendly_name);
+ return -EINVAL;
+ }
+ return setup_per_mode_key(ci, mk, mk->mk_direct_keys,
+ HKDF_CONTEXT_DIRECT_KEY, false);
+ } else if (ci->ci_policy.v2.flags &
+ FSCRYPT_POLICY_FLAG_IV_INO_LBLK_64) {
+ /*
+ * IV_INO_LBLK_64: encryption keys are derived from (master_key,
+ * mode_num, filesystem_uuid), and inode number is included in
+ * the IVs. This format is optimized for use with inline
+ * encryption hardware compliant with the UFS or eMMC standards.
+ */
+ return setup_per_mode_key(ci, mk, mk->mk_iv_ino_lblk_64_keys,
+ HKDF_CONTEXT_IV_INO_LBLK_64_KEY,
+ true);
+ }
+
+ err = fscrypt_hkdf_expand(&mk->mk_secret.hkdf,
+ HKDF_CONTEXT_PER_FILE_KEY,
+ ci->ci_nonce, FS_KEY_DERIVATION_NONCE_SIZE,
+ derived_key, ci->ci_mode->keysize);
+ if (err)
+ return err;
+
+ err = fscrypt_set_derived_key(ci, derived_key);
+ memzero_explicit(derived_key, ci->ci_mode->keysize);
+ return err;
+}
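+
+/*
+ * To summarize the v2 key derivations implemented above (all via the master
+ * key's HKDF context, ignoring the hardware-wrapped key case handled
+ * separately in setup_per_mode_key()):
+ *
+ *	DIRECT_KEY:	per-mode key = HKDF(HKDF_CONTEXT_DIRECT_KEY, mode_num)
+ *	IV_INO_LBLK_64:	per-mode key = HKDF(HKDF_CONTEXT_IV_INO_LBLK_64_KEY,
+ *					    mode_num || fs_uuid)
+ *	default:	per-file key = HKDF(HKDF_CONTEXT_PER_FILE_KEY, nonce)
+ */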
+
+/*
+ * Find the master key, then set up the inode's actual encryption key.
+ *
+ * If the master key is found in the filesystem-level keyring, then the
+ * corresponding 'struct key' is returned in *master_key_ret with
+ * ->mk_secret_sem read-locked. This is needed to ensure that only one task
+ * links the fscrypt_info into ->mk_decrypted_inodes (as multiple tasks may race
+ * to create an fscrypt_info for the same inode), and to synchronize the master
+ * key being removed with a new inode starting to use it.
+ */
+static int setup_file_encryption_key(struct fscrypt_info *ci,
+ struct key **master_key_ret)
+{
+ struct key *key;
+ struct fscrypt_master_key *mk = NULL;
+ struct fscrypt_key_specifier mk_spec;
+ int err;
+
+ fscrypt_select_encryption_impl(ci);
+
+ switch (ci->ci_policy.version) {
+ case FSCRYPT_POLICY_V1:
+ mk_spec.type = FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR;
+ memcpy(mk_spec.u.descriptor,
+ ci->ci_policy.v1.master_key_descriptor,
+ FSCRYPT_KEY_DESCRIPTOR_SIZE);
+ break;
+ case FSCRYPT_POLICY_V2:
+ mk_spec.type = FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER;
+ memcpy(mk_spec.u.identifier,
+ ci->ci_policy.v2.master_key_identifier,
+ FSCRYPT_KEY_IDENTIFIER_SIZE);
+ break;
+ default:
+ WARN_ON(1);
+ return -EINVAL;
+ }
+
+ key = fscrypt_find_master_key(ci->ci_inode->i_sb, &mk_spec);
+ if (IS_ERR(key)) {
+ if (key != ERR_PTR(-ENOKEY) ||
+ ci->ci_policy.version != FSCRYPT_POLICY_V1)
+ return PTR_ERR(key);
+
+ /*
+ * As a legacy fallback for v1 policies, search for the key in
+ * the current task's subscribed keyrings too. Don't move this
+ * to before the search of ->s_master_keys, since users
+ * shouldn't be able to override filesystem-level keys.
+ */
+ return fscrypt_setup_v1_file_key_via_subscribed_keyrings(ci);
+ }
+
+ mk = key->payload.data[0];
+ down_read(&mk->mk_secret_sem);
+
+ /* Has the secret been removed (via FS_IOC_REMOVE_ENCRYPTION_KEY)? */
+ if (!is_master_key_secret_present(&mk->mk_secret)) {
+ err = -ENOKEY;
+ goto out_release_key;
+ }
+
+ /*
+ * Require that the master key be at least as long as the derived key.
+ * Otherwise, the derived key cannot possibly contain as much entropy as
+ * that required by the encryption mode it will be used for. For v1
+ * policies it's also required for the KDF to work at all.
+ */
+ if (mk->mk_secret.size < ci->ci_mode->keysize) {
+ fscrypt_warn(NULL,
+ "key with %s %*phN is too short (got %u bytes, need %u+ bytes)",
+ master_key_spec_type(&mk_spec),
+ master_key_spec_len(&mk_spec), (u8 *)&mk_spec.u,
+ mk->mk_secret.size, ci->ci_mode->keysize);
+ err = -ENOKEY;
+ goto out_release_key;
+ }
+
+ switch (ci->ci_policy.version) {
+ case FSCRYPT_POLICY_V1:
+ err = fscrypt_setup_v1_file_key(ci, mk->mk_secret.raw);
+ break;
+ case FSCRYPT_POLICY_V2:
+ err = fscrypt_setup_v2_file_key(ci, mk);
+ break;
+ default:
+ WARN_ON(1);
+ err = -EINVAL;
+ break;
+ }
+ if (err)
+ goto out_release_key;
+
+ *master_key_ret = key;
+ return 0;
+
+out_release_key:
+ up_read(&mk->mk_secret_sem);
+ key_put(key);
+ return err;
+}
+
+static void put_crypt_info(struct fscrypt_info *ci)
+{
+ struct key *key;
+
+ if (!ci)
+ return;
+
+ if (ci->ci_direct_key)
+ fscrypt_put_direct_key(ci->ci_direct_key);
+ else if (ci->ci_owns_key)
+ fscrypt_destroy_prepared_key(&ci->ci_key);
+
+ key = ci->ci_master_key;
+ if (key) {
+ struct fscrypt_master_key *mk = key->payload.data[0];
+
+ /*
+ * Remove this inode from the list of inodes that were unlocked
+ * with the master key.
+ *
+ * In addition, if we're removing the last inode from a key that
+ * already had its secret removed, invalidate the key so that it
+ * gets removed from ->s_master_keys.
+ */
+ spin_lock(&mk->mk_decrypted_inodes_lock);
+ list_del(&ci->ci_master_key_link);
+ spin_unlock(&mk->mk_decrypted_inodes_lock);
+ if (refcount_dec_and_test(&mk->mk_refcount))
+ key_invalidate(key);
+ key_put(key);
+ }
+ memzero_explicit(ci, sizeof(*ci));
+ kmem_cache_free(fscrypt_info_cachep, ci);
+}
+
+int fscrypt_get_encryption_info(struct inode *inode)
+{
+ struct fscrypt_info *crypt_info;
+ union fscrypt_context ctx;
+ struct fscrypt_mode *mode;
+ struct key *master_key = NULL;
+ int res;
+
+ if (fscrypt_has_encryption_key(inode))
+ return 0;
+
+ res = fscrypt_initialize(inode->i_sb->s_cop->flags);
+ if (res)
+ return res;
+
+ res = inode->i_sb->s_cop->get_context(inode, &ctx, sizeof(ctx));
+ if (res < 0) {
+ if (!fscrypt_dummy_context_enabled(inode) ||
+ IS_ENCRYPTED(inode)) {
+ fscrypt_warn(inode,
+ "Error %d getting encryption context",
+ res);
+ return res;
+ }
+ /* Fake up a context for an unencrypted directory */
+ memset(&ctx, 0, sizeof(ctx));
+ ctx.version = FSCRYPT_CONTEXT_V1;
+ ctx.v1.contents_encryption_mode = FSCRYPT_MODE_AES_256_XTS;
+ ctx.v1.filenames_encryption_mode = FSCRYPT_MODE_AES_256_CTS;
+ memset(ctx.v1.master_key_descriptor, 0x42,
+ FSCRYPT_KEY_DESCRIPTOR_SIZE);
+ res = sizeof(ctx.v1);
+ }
+
+ crypt_info = kmem_cache_zalloc(fscrypt_info_cachep, GFP_NOFS);
+ if (!crypt_info)
+ return -ENOMEM;
+
+ crypt_info->ci_inode = inode;
+
+ res = fscrypt_policy_from_context(&crypt_info->ci_policy, &ctx, res);
+ if (res) {
+ fscrypt_warn(inode,
+ "Unrecognized or corrupt encryption context");
+ goto out;
+ }
+
+ switch (ctx.version) {
+ case FSCRYPT_CONTEXT_V1:
+ memcpy(crypt_info->ci_nonce, ctx.v1.nonce,
+ FS_KEY_DERIVATION_NONCE_SIZE);
+ break;
+ case FSCRYPT_CONTEXT_V2:
+ memcpy(crypt_info->ci_nonce, ctx.v2.nonce,
+ FS_KEY_DERIVATION_NONCE_SIZE);
+ break;
+ default:
+ WARN_ON(1);
+ res = -EINVAL;
+ goto out;
+ }
+
+ if (!fscrypt_supported_policy(&crypt_info->ci_policy, inode)) {
+ res = -EINVAL;
+ goto out;
+ }
+
+ mode = select_encryption_mode(&crypt_info->ci_policy, inode);
+ if (IS_ERR(mode)) {
+ res = PTR_ERR(mode);
+ goto out;
+ }
+ WARN_ON(mode->ivsize > FSCRYPT_MAX_IV_SIZE);
+ crypt_info->ci_mode = mode;
+
+ res = setup_file_encryption_key(crypt_info, &master_key);
+ if (res)
+ goto out;
+
+ if (cmpxchg_release(&inode->i_crypt_info, NULL, crypt_info) == NULL) {
+ if (master_key) {
+ struct fscrypt_master_key *mk =
+ master_key->payload.data[0];
+
+ refcount_inc(&mk->mk_refcount);
+ crypt_info->ci_master_key = key_get(master_key);
+ spin_lock(&mk->mk_decrypted_inodes_lock);
+ list_add(&crypt_info->ci_master_key_link,
+ &mk->mk_decrypted_inodes);
+ spin_unlock(&mk->mk_decrypted_inodes_lock);
+ }
+ crypt_info = NULL;
+ }
+ res = 0;
+out:
+ if (master_key) {
+ struct fscrypt_master_key *mk = master_key->payload.data[0];
+
+ up_read(&mk->mk_secret_sem);
+ key_put(master_key);
+ }
+ if (res == -ENOKEY)
+ res = 0;
+ put_crypt_info(crypt_info);
+ return res;
+}
+EXPORT_SYMBOL(fscrypt_get_encryption_info);
+
+/**
+ * fscrypt_put_encryption_info - free most of an inode's fscrypt data
+ *
+ * Free the inode's fscrypt_info. Filesystems must call this when the inode is
+ * being evicted. An RCU grace period need not have elapsed yet.
+ */
+void fscrypt_put_encryption_info(struct inode *inode)
+{
+ put_crypt_info(inode->i_crypt_info);
+ inode->i_crypt_info = NULL;
+}
+EXPORT_SYMBOL(fscrypt_put_encryption_info);
+
+/**
+ * fscrypt_free_inode - free an inode's fscrypt data requiring RCU delay
+ *
+ * Free the inode's cached decrypted symlink target, if any. Filesystems must
+ * call this after an RCU grace period, just before they free the inode.
+ */
+void fscrypt_free_inode(struct inode *inode)
+{
+ if (IS_ENCRYPTED(inode) && S_ISLNK(inode->i_mode)) {
+ kfree(inode->i_link);
+ inode->i_link = NULL;
+ }
+}
+EXPORT_SYMBOL(fscrypt_free_inode);
+
+/**
+ * fscrypt_drop_inode - check whether the inode's master key has been removed
+ *
+ * Filesystems supporting fscrypt must call this from their ->drop_inode()
+ * method so that encrypted inodes are evicted as soon as they're no longer in
+ * use and their master key has been removed.
+ *
+ * Return: 1 if fscrypt wants the inode to be evicted now, otherwise 0
+ */
+int fscrypt_drop_inode(struct inode *inode)
+{
+ const struct fscrypt_info *ci = READ_ONCE(inode->i_crypt_info);
+ const struct fscrypt_master_key *mk;
+
+ /*
+ * If ci is NULL, then the inode doesn't have an encryption key set up
+ * so it's irrelevant. If ci_master_key is NULL, then the master key
+ * was provided via the legacy mechanism of the process-subscribed
+ * keyrings, so we don't know whether it's been removed or not.
+ */
+ if (!ci || !ci->ci_master_key)
+ return 0;
+ mk = ci->ci_master_key->payload.data[0];
+
+ /*
+ * Note: since we aren't holding ->mk_secret_sem, the result here can
+ * immediately become outdated. But there's no correctness problem with
+ * unnecessarily evicting. Nor is there a correctness problem with not
+ * evicting while iput() is racing with the key being removed, since
+ * then the thread removing the key will either evict the inode itself
+ * or will correctly detect that it wasn't evicted due to the race.
+ */
+ return !is_master_key_secret_present(&mk->mk_secret);
+}
+EXPORT_SYMBOL_GPL(fscrypt_drop_inode);
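+
+/*
+ * A minimal sketch (illustrative only, not part of this patch) of wiring this
+ * into a filesystem's ->drop_inode() method:
+ *
+ *	static int myfs_drop_inode(struct inode *inode)
+ *	{
+ *		return generic_drop_inode(inode) || fscrypt_drop_inode(inode);
+ *	}
+ */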
diff --git a/fs/crypto/keysetup_v1.c b/fs/crypto/keysetup_v1.c
new file mode 100644
index 0000000..1f0c19d
--- /dev/null
+++ b/fs/crypto/keysetup_v1.c
@@ -0,0 +1,357 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Key setup for v1 encryption policies
+ *
+ * Copyright 2015, 2019 Google LLC
+ */
+
+/*
+ * This file implements compatibility functions for the original encryption
+ * policy version ("v1"), including:
+ *
+ * - Deriving per-file keys using the AES-128-ECB based KDF
+ * (rather than the new method of using HKDF-SHA512)
+ *
+ * - Retrieving fscrypt master keys from process-subscribed keyrings
+ * (rather than the new method of using a filesystem-level keyring)
+ *
+ * - Handling policies with the DIRECT_KEY flag set using a master key table
+ * (rather than the new method of implementing DIRECT_KEY with per-mode keys
+ * managed alongside the master keys in the filesystem-level keyring)
+ */
+
+#include <crypto/algapi.h>
+#include <crypto/skcipher.h>
+#include <keys/user-type.h>
+#include <linux/hashtable.h>
+#include <linux/scatterlist.h>
+#include <linux/bio-crypt-ctx.h>
+
+#include "fscrypt_private.h"
+
+/* Table of keys referenced by DIRECT_KEY policies */
+static DEFINE_HASHTABLE(fscrypt_direct_keys, 6); /* 6 bits = 64 buckets */
+static DEFINE_SPINLOCK(fscrypt_direct_keys_lock);
+
+/*
+ * v1 key derivation function. This generates the derived key by encrypting the
+ * master key with AES-128-ECB using the nonce as the AES key. This provides a
+ * unique derived key with sufficient entropy for each inode. However, it's
+ * nonstandard, non-extensible, doesn't evenly distribute the entropy from the
+ * master key, and is trivially reversible: an attacker who compromises a
+ * derived key can "decrypt" it to get back to the master key, then derive any
+ * other key. For all new code, use HKDF instead.
+ *
+ * The master key must be at least as long as the derived key. If the master
+ * key is longer, then only the first 'derived_keysize' bytes are used.
+ */
+static int derive_key_aes(const u8 *master_key,
+ const u8 nonce[FS_KEY_DERIVATION_NONCE_SIZE],
+ u8 *derived_key, unsigned int derived_keysize)
+{
+ int res = 0;
+ struct skcipher_request *req = NULL;
+ DECLARE_CRYPTO_WAIT(wait);
+ struct scatterlist src_sg, dst_sg;
+ struct crypto_skcipher *tfm = crypto_alloc_skcipher("ecb(aes)", 0, 0);
+
+ if (IS_ERR(tfm)) {
+ res = PTR_ERR(tfm);
+ tfm = NULL;
+ goto out;
+ }
+ crypto_skcipher_set_flags(tfm, CRYPTO_TFM_REQ_WEAK_KEY);
+ req = skcipher_request_alloc(tfm, GFP_NOFS);
+ if (!req) {
+ res = -ENOMEM;
+ goto out;
+ }
+ skcipher_request_set_callback(req,
+ CRYPTO_TFM_REQ_MAY_BACKLOG | CRYPTO_TFM_REQ_MAY_SLEEP,
+ crypto_req_done, &wait);
+ res = crypto_skcipher_setkey(tfm, nonce, FS_KEY_DERIVATION_NONCE_SIZE);
+ if (res < 0)
+ goto out;
+
+ sg_init_one(&src_sg, master_key, derived_keysize);
+ sg_init_one(&dst_sg, derived_key, derived_keysize);
+ skcipher_request_set_crypt(req, &src_sg, &dst_sg, derived_keysize,
+ NULL);
+ res = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
+out:
+ skcipher_request_free(req);
+ crypto_free_skcipher(tfm);
+ return res;
+}
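+
+/*
+ * Concretely, the derivation above is:
+ *
+ *	derived_key = AES-128-ECB-Encrypt(key = nonce,
+ *					  data = master_key[0..derived_keysize-1])
+ *
+ * i.e. the 16-byte per-inode nonce is used as the AES key, and the first
+ * 'derived_keysize' bytes of the master key are the data being encrypted.
+ */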
+
+/*
+ * Search the current task's subscribed keyrings for a "logon" key with
+ * description prefix:descriptor, and if found acquire a read lock on it and
+ * return a pointer to its validated payload in *payload_ret.
+ */
+static struct key *
+find_and_lock_process_key(const char *prefix,
+ const u8 descriptor[FSCRYPT_KEY_DESCRIPTOR_SIZE],
+ unsigned int min_keysize,
+ const struct fscrypt_key **payload_ret)
+{
+ char *description;
+ struct key *key;
+ const struct user_key_payload *ukp;
+ const struct fscrypt_key *payload;
+
+ description = kasprintf(GFP_NOFS, "%s%*phN", prefix,
+ FSCRYPT_KEY_DESCRIPTOR_SIZE, descriptor);
+ if (!description)
+ return ERR_PTR(-ENOMEM);
+
+ key = request_key(&key_type_logon, description, NULL);
+ kfree(description);
+ if (IS_ERR(key))
+ return key;
+
+ down_read(&key->sem);
+ ukp = user_key_payload_locked(key);
+
+ if (!ukp) /* was the key revoked before we acquired its semaphore? */
+ goto invalid;
+
+ payload = (const struct fscrypt_key *)ukp->data;
+
+ if (ukp->datalen != sizeof(struct fscrypt_key) ||
+ payload->size < 1 || payload->size > FSCRYPT_MAX_KEY_SIZE) {
+ fscrypt_warn(NULL,
+ "key with description '%s' has invalid payload",
+ key->description);
+ goto invalid;
+ }
+
+ if (payload->size < min_keysize) {
+ fscrypt_warn(NULL,
+ "key with description '%s' is too short (got %u bytes, need %u+ bytes)",
+ key->description, payload->size, min_keysize);
+ goto invalid;
+ }
+
+ *payload_ret = payload;
+ return key;
+
+invalid:
+ up_read(&key->sem);
+ key_put(key);
+ return ERR_PTR(-ENOKEY);
+}
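+
+/*
+ * For example, with the generic FSCRYPT_KEY_DESC_PREFIX prefix ("fscrypt:")
+ * and descriptor bytes 01 23 45 67 89 ab cd ef, the key description searched
+ * for above is "fscrypt:0123456789abcdef".
+ */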
+
+/* Master key referenced by DIRECT_KEY policy */
+struct fscrypt_direct_key {
+ struct hlist_node dk_node;
+ refcount_t dk_refcount;
+ const struct fscrypt_mode *dk_mode;
+ struct fscrypt_prepared_key dk_key;
+ u8 dk_descriptor[FSCRYPT_KEY_DESCRIPTOR_SIZE];
+ u8 dk_raw[FSCRYPT_MAX_KEY_SIZE];
+};
+
+static void free_direct_key(struct fscrypt_direct_key *dk)
+{
+ if (dk) {
+ fscrypt_destroy_prepared_key(&dk->dk_key);
+ kzfree(dk);
+ }
+}
+
+void fscrypt_put_direct_key(struct fscrypt_direct_key *dk)
+{
+ if (!refcount_dec_and_lock(&dk->dk_refcount, &fscrypt_direct_keys_lock))
+ return;
+ hash_del(&dk->dk_node);
+ spin_unlock(&fscrypt_direct_keys_lock);
+
+ free_direct_key(dk);
+}
+
+/*
+ * Find/insert the given key into the fscrypt_direct_keys table. If found, it
+ * is returned with elevated refcount, and 'to_insert' is freed if non-NULL. If
+ * not found, 'to_insert' is inserted and returned if it's non-NULL; otherwise
+ * NULL is returned.
+ */
+static struct fscrypt_direct_key *
+find_or_insert_direct_key(struct fscrypt_direct_key *to_insert,
+ const u8 *raw_key, const struct fscrypt_info *ci)
+{
+ unsigned long hash_key;
+ struct fscrypt_direct_key *dk;
+
+ /*
+ * Careful: to avoid potentially leaking secret key bytes via timing
+ * information, we must key the hash table by descriptor rather than by
+ * raw key, and use crypto_memneq() when comparing raw keys.
+ */
+
+ BUILD_BUG_ON(sizeof(hash_key) > FSCRYPT_KEY_DESCRIPTOR_SIZE);
+ memcpy(&hash_key, ci->ci_policy.v1.master_key_descriptor,
+ sizeof(hash_key));
+
+ spin_lock(&fscrypt_direct_keys_lock);
+ hash_for_each_possible(fscrypt_direct_keys, dk, dk_node, hash_key) {
+ if (memcmp(ci->ci_policy.v1.master_key_descriptor,
+ dk->dk_descriptor, FSCRYPT_KEY_DESCRIPTOR_SIZE) != 0)
+ continue;
+ if (ci->ci_mode != dk->dk_mode)
+ continue;
+ if (!fscrypt_is_key_prepared(&dk->dk_key, ci))
+ continue;
+ if (crypto_memneq(raw_key, dk->dk_raw, ci->ci_mode->keysize))
+ continue;
+ /* using existing tfm with same (descriptor, mode, raw_key) */
+ refcount_inc(&dk->dk_refcount);
+ spin_unlock(&fscrypt_direct_keys_lock);
+ free_direct_key(to_insert);
+ return dk;
+ }
+ if (to_insert)
+ hash_add(fscrypt_direct_keys, &to_insert->dk_node, hash_key);
+ spin_unlock(&fscrypt_direct_keys_lock);
+ return to_insert;
+}
+
+/* Prepare to encrypt directly using the master key in the given mode */
+static struct fscrypt_direct_key *
+fscrypt_get_direct_key(const struct fscrypt_info *ci, const u8 *raw_key)
+{
+ struct fscrypt_direct_key *dk;
+ int err;
+
+ /* Is there already a tfm for this key? */
+ dk = find_or_insert_direct_key(NULL, raw_key, ci);
+ if (dk)
+ return dk;
+
+ /* Nope, allocate one. */
+ dk = kzalloc(sizeof(*dk), GFP_NOFS);
+ if (!dk)
+ return ERR_PTR(-ENOMEM);
+ refcount_set(&dk->dk_refcount, 1);
+ dk->dk_mode = ci->ci_mode;
+ err = fscrypt_prepare_key(&dk->dk_key, raw_key, ci->ci_mode->keysize,
+ false /*is_hw_wrapped*/, ci);
+ if (err)
+ goto err_free_dk;
+ memcpy(dk->dk_descriptor, ci->ci_policy.v1.master_key_descriptor,
+ FSCRYPT_KEY_DESCRIPTOR_SIZE);
+ memcpy(dk->dk_raw, raw_key, ci->ci_mode->keysize);
+
+ return find_or_insert_direct_key(dk, raw_key, ci);
+
+err_free_dk:
+ free_direct_key(dk);
+ return ERR_PTR(err);
+}
+
+/* v1 policy, DIRECT_KEY: use the master key directly */
+static int setup_v1_file_key_direct(struct fscrypt_info *ci,
+ const u8 *raw_master_key)
+{
+ const struct fscrypt_mode *mode = ci->ci_mode;
+ struct fscrypt_direct_key *dk;
+
+ if (!fscrypt_mode_supports_direct_key(mode)) {
+ fscrypt_warn(ci->ci_inode,
+ "Direct key mode not allowed with %s",
+ mode->friendly_name);
+ return -EINVAL;
+ }
+
+ if (ci->ci_policy.v1.contents_encryption_mode !=
+ ci->ci_policy.v1.filenames_encryption_mode) {
+ fscrypt_warn(ci->ci_inode,
+ "Direct key mode not allowed with different contents and filenames modes");
+ return -EINVAL;
+ }
+
+ dk = fscrypt_get_direct_key(ci, raw_master_key);
+ if (IS_ERR(dk))
+ return PTR_ERR(dk);
+ ci->ci_direct_key = dk;
+ ci->ci_key = dk->dk_key;
+ return 0;
+}
+
+/* v1 policy, !DIRECT_KEY: derive the file's encryption key */
+static int setup_v1_file_key_derived(struct fscrypt_info *ci,
+ const u8 *raw_master_key)
+{
+ u8 *derived_key;
+ int err;
+ int i;
+ union {
+ u8 bytes[FSCRYPT_MAX_HW_WRAPPED_KEY_SIZE];
+ u32 words[FSCRYPT_MAX_HW_WRAPPED_KEY_SIZE / sizeof(u32)];
+ } key_new;
+
+ /* Support legacy ICE-based content encryption mode */
+ if ((fscrypt_policy_contents_mode(&ci->ci_policy) ==
+ FSCRYPT_MODE_PRIVATE) &&
+ fscrypt_using_inline_encryption(ci)) {
+ memcpy(key_new.bytes, raw_master_key, ci->ci_mode->keysize);
+
+ for (i = 0; i < ARRAY_SIZE(key_new.words); i++)
+ __cpu_to_be32s(&key_new.words[i]);
+
+ err = fscrypt_prepare_inline_crypt_key(&ci->ci_key,
+ key_new.bytes,
+ ci->ci_mode->keysize,
+ false,
+ ci);
+ return err;
+ }
+ /*
+ * This cannot be a stack buffer because it will be passed to the
+ * scatterlist crypto API during derive_key_aes().
+ */
+ derived_key = kmalloc(ci->ci_mode->keysize, GFP_NOFS);
+ if (!derived_key)
+ return -ENOMEM;
+
+ err = derive_key_aes(raw_master_key, ci->ci_nonce,
+ derived_key, ci->ci_mode->keysize);
+ if (err)
+ goto out;
+
+ err = fscrypt_set_derived_key(ci, derived_key);
+out:
+ kzfree(derived_key);
+ return err;
+}
+
+int fscrypt_setup_v1_file_key(struct fscrypt_info *ci, const u8 *raw_master_key)
+{
+ if (ci->ci_policy.v1.flags & FSCRYPT_POLICY_FLAG_DIRECT_KEY)
+ return setup_v1_file_key_direct(ci, raw_master_key);
+ else
+ return setup_v1_file_key_derived(ci, raw_master_key);
+}
+
+int fscrypt_setup_v1_file_key_via_subscribed_keyrings(struct fscrypt_info *ci)
+{
+ struct key *key;
+ const struct fscrypt_key *payload;
+ int err;
+
+ key = find_and_lock_process_key(FSCRYPT_KEY_DESC_PREFIX,
+ ci->ci_policy.v1.master_key_descriptor,
+ ci->ci_mode->keysize, &payload);
+ if (key == ERR_PTR(-ENOKEY) && ci->ci_inode->i_sb->s_cop->key_prefix) {
+ key = find_and_lock_process_key(ci->ci_inode->i_sb->s_cop->key_prefix,
+ ci->ci_policy.v1.master_key_descriptor,
+ ci->ci_mode->keysize, &payload);
+ }
+ if (IS_ERR(key))
+ return PTR_ERR(key);
+
+ err = fscrypt_setup_v1_file_key(ci, payload->raw);
+ up_read(&key->sem);
+ key_put(key);
+ return err;
+}
diff --git a/fs/crypto/policy.c b/fs/crypto/policy.c
index 4941fe8..96f5280 100644
--- a/fs/crypto/policy.c
+++ b/fs/crypto/policy.c
@@ -5,8 +5,9 @@
* Copyright (C) 2015, Google, Inc.
* Copyright (C) 2015, Motorola Mobility.
*
- * Written by Michael Halcrow, 2015.
+ * Originally written by Michael Halcrow, 2015.
* Modified by Jaegeuk Kim, 2015.
+ * Modified by Eric Biggers, 2019 for v2 policy support.
*/
#include <linux/random.h>
@@ -14,70 +15,342 @@
#include <linux/mount.h>
#include "fscrypt_private.h"
-/*
- * check whether an encryption policy is consistent with an encryption context
+/**
+ * fscrypt_policies_equal - check whether two encryption policies are the same
+ *
+ * Return: %true if equal, else %false
*/
-static bool is_encryption_context_consistent_with_policy(
- const struct fscrypt_context *ctx,
- const struct fscrypt_policy *policy)
+bool fscrypt_policies_equal(const union fscrypt_policy *policy1,
+ const union fscrypt_policy *policy2)
{
- return memcmp(ctx->master_key_descriptor, policy->master_key_descriptor,
- FS_KEY_DESCRIPTOR_SIZE) == 0 &&
- (ctx->flags == policy->flags) &&
- (ctx->contents_encryption_mode ==
- policy->contents_encryption_mode) &&
- (ctx->filenames_encryption_mode ==
- policy->filenames_encryption_mode);
+ if (policy1->version != policy2->version)
+ return false;
+
+ return !memcmp(policy1, policy2, fscrypt_policy_size(policy1));
}
-static int create_encryption_context_from_policy(struct inode *inode,
- const struct fscrypt_policy *policy)
+static bool supported_iv_ino_lblk_64_policy(
+ const struct fscrypt_policy_v2 *policy,
+ const struct inode *inode)
{
- struct fscrypt_context ctx;
+ struct super_block *sb = inode->i_sb;
+ int ino_bits = 64, lblk_bits = 64;
- ctx.format = FS_ENCRYPTION_CONTEXT_FORMAT_V1;
- memcpy(ctx.master_key_descriptor, policy->master_key_descriptor,
- FS_KEY_DESCRIPTOR_SIZE);
+ if (policy->flags & FSCRYPT_POLICY_FLAG_DIRECT_KEY) {
+ fscrypt_warn(inode,
+ "The DIRECT_KEY and IV_INO_LBLK_64 flags are mutually exclusive");
+ return false;
+ }
+ /*
+ * It's unsafe to include inode numbers in the IVs if the filesystem can
+ * potentially renumber inodes, e.g. via filesystem shrinking.
+ */
+ if (!sb->s_cop->has_stable_inodes ||
+ !sb->s_cop->has_stable_inodes(sb)) {
+ fscrypt_warn(inode,
+ "Can't use IV_INO_LBLK_64 policy on filesystem '%s' because it doesn't have stable inode numbers",
+ sb->s_id);
+ return false;
+ }
+ if (sb->s_cop->get_ino_and_lblk_bits)
+ sb->s_cop->get_ino_and_lblk_bits(sb, &ino_bits, &lblk_bits);
+ if (ino_bits > 32 || lblk_bits > 32) {
+ fscrypt_warn(inode,
+ "Can't use IV_INO_LBLK_64 policy on filesystem '%s' because it doesn't use 32-bit inode and block numbers",
+ sb->s_id);
+ return false;
+ }
+ return true;
+}
- if (!fscrypt_valid_enc_modes(policy->contents_encryption_mode,
- policy->filenames_encryption_mode))
+/**
+ * fscrypt_supported_policy - check whether an encryption policy is supported
+ *
+ * Given an encryption policy, check whether all its encryption modes and other
+ * settings are supported by this kernel. (But we don't currently check
+ * for crypto API support here, so attempting to use an algorithm not configured
+ * into the crypto API will still fail later.)
+ *
+ * Return: %true if supported, else %false
+ */
+bool fscrypt_supported_policy(const union fscrypt_policy *policy_u,
+ const struct inode *inode)
+{
+ switch (policy_u->version) {
+ case FSCRYPT_POLICY_V1: {
+ const struct fscrypt_policy_v1 *policy = &policy_u->v1;
+
+ if (!fscrypt_valid_enc_modes(policy->contents_encryption_mode,
+ policy->filenames_encryption_mode)) {
+ fscrypt_warn(inode,
+ "Unsupported encryption modes (contents %d, filenames %d)",
+ policy->contents_encryption_mode,
+ policy->filenames_encryption_mode);
+ return false;
+ }
+
+ if (policy->flags & ~(FSCRYPT_POLICY_FLAGS_PAD_MASK |
+ FSCRYPT_POLICY_FLAG_DIRECT_KEY)) {
+ fscrypt_warn(inode,
+ "Unsupported encryption flags (0x%02x)",
+ policy->flags);
+ return false;
+ }
+
+ return true;
+ }
+ case FSCRYPT_POLICY_V2: {
+ const struct fscrypt_policy_v2 *policy = &policy_u->v2;
+
+ if (!fscrypt_valid_enc_modes(policy->contents_encryption_mode,
+ policy->filenames_encryption_mode)) {
+ fscrypt_warn(inode,
+ "Unsupported encryption modes (contents %d, filenames %d)",
+ policy->contents_encryption_mode,
+ policy->filenames_encryption_mode);
+ return false;
+ }
+
+ if (policy->flags & ~FSCRYPT_POLICY_FLAGS_VALID) {
+ fscrypt_warn(inode,
+ "Unsupported encryption flags (0x%02x)",
+ policy->flags);
+ return false;
+ }
+
+ if ((policy->flags & FSCRYPT_POLICY_FLAG_IV_INO_LBLK_64) &&
+ !supported_iv_ino_lblk_64_policy(policy, inode))
+ return false;
+
+ if (memchr_inv(policy->__reserved, 0,
+ sizeof(policy->__reserved))) {
+ fscrypt_warn(inode,
+ "Reserved bits set in encryption policy");
+ return false;
+ }
+
+ return true;
+ }
+ }
+ return false;
+}
+
+/**
+ * fscrypt_new_context_from_policy - create a new fscrypt_context from a policy
+ *
+ * Create an fscrypt_context for an inode that is being assigned the given
+ * encryption policy. A new nonce is randomly generated.
+ *
+ * Return: the size of the new context in bytes.
+ */
+static int fscrypt_new_context_from_policy(union fscrypt_context *ctx_u,
+ const union fscrypt_policy *policy_u)
+{
+ memset(ctx_u, 0, sizeof(*ctx_u));
+
+ switch (policy_u->version) {
+ case FSCRYPT_POLICY_V1: {
+ const struct fscrypt_policy_v1 *policy = &policy_u->v1;
+ struct fscrypt_context_v1 *ctx = &ctx_u->v1;
+
+ ctx->version = FSCRYPT_CONTEXT_V1;
+ ctx->contents_encryption_mode =
+ policy->contents_encryption_mode;
+ ctx->filenames_encryption_mode =
+ policy->filenames_encryption_mode;
+ ctx->flags = policy->flags;
+ memcpy(ctx->master_key_descriptor,
+ policy->master_key_descriptor,
+ sizeof(ctx->master_key_descriptor));
+ get_random_bytes(ctx->nonce, sizeof(ctx->nonce));
+ return sizeof(*ctx);
+ }
+ case FSCRYPT_POLICY_V2: {
+ const struct fscrypt_policy_v2 *policy = &policy_u->v2;
+ struct fscrypt_context_v2 *ctx = &ctx_u->v2;
+
+ ctx->version = FSCRYPT_CONTEXT_V2;
+ ctx->contents_encryption_mode =
+ policy->contents_encryption_mode;
+ ctx->filenames_encryption_mode =
+ policy->filenames_encryption_mode;
+ ctx->flags = policy->flags;
+ memcpy(ctx->master_key_identifier,
+ policy->master_key_identifier,
+ sizeof(ctx->master_key_identifier));
+ get_random_bytes(ctx->nonce, sizeof(ctx->nonce));
+ return sizeof(*ctx);
+ }
+ }
+ BUG();
+}
+
+/**
+ * fscrypt_policy_from_context - convert an fscrypt_context to an fscrypt_policy
+ *
+ * Given an fscrypt_context, build the corresponding fscrypt_policy.
+ *
+ * Return: 0 on success, or -EINVAL if the fscrypt_context has an unrecognized
+ * version number or size.
+ *
+ * This does *not* validate the settings within the policy itself, e.g. the
+ * modes, flags, and reserved bits. Use fscrypt_supported_policy() for that.
+ */
+int fscrypt_policy_from_context(union fscrypt_policy *policy_u,
+ const union fscrypt_context *ctx_u,
+ int ctx_size)
+{
+ memset(policy_u, 0, sizeof(*policy_u));
+
+ if (ctx_size <= 0 || ctx_size != fscrypt_context_size(ctx_u))
return -EINVAL;
- if (policy->flags & ~FS_POLICY_FLAGS_VALID)
+ switch (ctx_u->version) {
+ case FSCRYPT_CONTEXT_V1: {
+ const struct fscrypt_context_v1 *ctx = &ctx_u->v1;
+ struct fscrypt_policy_v1 *policy = &policy_u->v1;
+
+ policy->version = FSCRYPT_POLICY_V1;
+ policy->contents_encryption_mode =
+ ctx->contents_encryption_mode;
+ policy->filenames_encryption_mode =
+ ctx->filenames_encryption_mode;
+ policy->flags = ctx->flags;
+ memcpy(policy->master_key_descriptor,
+ ctx->master_key_descriptor,
+ sizeof(policy->master_key_descriptor));
+ return 0;
+ }
+ case FSCRYPT_CONTEXT_V2: {
+ const struct fscrypt_context_v2 *ctx = &ctx_u->v2;
+ struct fscrypt_policy_v2 *policy = &policy_u->v2;
+
+ policy->version = FSCRYPT_POLICY_V2;
+ policy->contents_encryption_mode =
+ ctx->contents_encryption_mode;
+ policy->filenames_encryption_mode =
+ ctx->filenames_encryption_mode;
+ policy->flags = ctx->flags;
+ memcpy(policy->__reserved, ctx->__reserved,
+ sizeof(policy->__reserved));
+ memcpy(policy->master_key_identifier,
+ ctx->master_key_identifier,
+ sizeof(policy->master_key_identifier));
+ return 0;
+ }
+ }
+ /* unreachable */
+ return -EINVAL;
+}
+
+/* Retrieve an inode's encryption policy */
+static int fscrypt_get_policy(struct inode *inode, union fscrypt_policy *policy)
+{
+ const struct fscrypt_info *ci;
+ union fscrypt_context ctx;
+ int ret;
+
+ ci = READ_ONCE(inode->i_crypt_info);
+ if (ci) {
+ /* key available, use the cached policy */
+ *policy = ci->ci_policy;
+ return 0;
+ }
+
+ if (!IS_ENCRYPTED(inode))
+ return -ENODATA;
+
+ ret = inode->i_sb->s_cop->get_context(inode, &ctx, sizeof(ctx));
+ if (ret < 0)
+ return (ret == -ERANGE) ? -EINVAL : ret;
+
+ return fscrypt_policy_from_context(policy, &ctx, ret);
+}
+
+static int set_encryption_policy(struct inode *inode,
+ const union fscrypt_policy *policy)
+{
+ union fscrypt_context ctx;
+ int ctxsize;
+ int err;
+
+ if (!fscrypt_supported_policy(policy, inode))
return -EINVAL;
- ctx.contents_encryption_mode = policy->contents_encryption_mode;
- ctx.filenames_encryption_mode = policy->filenames_encryption_mode;
- ctx.flags = policy->flags;
- BUILD_BUG_ON(sizeof(ctx.nonce) != FS_KEY_DERIVATION_NONCE_SIZE);
- get_random_bytes(ctx.nonce, FS_KEY_DERIVATION_NONCE_SIZE);
+ switch (policy->version) {
+ case FSCRYPT_POLICY_V1:
+ /*
+ * The original encryption policy version provided no way of
+ * verifying that the correct master key was supplied, which was
+ * insecure in scenarios where multiple users have access to the
+ * same encrypted files (even just read-only access). The new
+ * encryption policy version fixes this and also implies use of
+ * an improved key derivation function and allows non-root users
+ * to securely remove keys. So as long as compatibility with
+ * old kernels isn't required, it is recommended to use the new
+ * policy version for all new encrypted directories.
+ */
+ pr_warn_once("%s (pid %d) is setting deprecated v1 encryption policy; recommend upgrading to v2.\n",
+ current->comm, current->pid);
+ break;
+ case FSCRYPT_POLICY_V2:
+ err = fscrypt_verify_key_added(inode->i_sb,
+ policy->v2.master_key_identifier);
+ if (err)
+ return err;
+ break;
+ default:
+ WARN_ON(1);
+ return -EINVAL;
+ }
- return inode->i_sb->s_cop->set_context(inode, &ctx, sizeof(ctx), NULL);
+ ctxsize = fscrypt_new_context_from_policy(&ctx, policy);
+
+ return inode->i_sb->s_cop->set_context(inode, &ctx, ctxsize, NULL);
}
int fscrypt_ioctl_set_policy(struct file *filp, const void __user *arg)
{
- struct fscrypt_policy policy;
+ union fscrypt_policy policy;
+ union fscrypt_policy existing_policy;
struct inode *inode = file_inode(filp);
+ u8 version;
+ int size;
int ret;
- struct fscrypt_context ctx;
- if (copy_from_user(&policy, arg, sizeof(policy)))
+ if (get_user(policy.version, (const u8 __user *)arg))
return -EFAULT;
+ size = fscrypt_policy_size(&policy);
+ if (size <= 0)
+ return -EINVAL;
+
+ /*
+ * We should just copy the remaining 'size - 1' bytes here, but a
+ * bizarre bug in gcc 7 and earlier (fixed by gcc r255731) causes gcc to
+ * think that size can be 0 here (despite the check above!) *and* that
+ * it's a compile-time constant. Thus it would think copy_from_user()
+ * is passed compile-time constant ULONG_MAX, causing the compile-time
+ * buffer overflow check to fail, breaking the build. This only occurred
+ * when building an i386 kernel with -Os and branch profiling enabled.
+ *
+ * Work around it by just copying the first byte again...
+ */
+ version = policy.version;
+ if (copy_from_user(&policy, arg, size))
+ return -EFAULT;
+ policy.version = version;
+
if (!inode_owner_or_capable(inode))
return -EACCES;
- if (policy.version != 0)
- return -EINVAL;
-
ret = mnt_want_write_file(filp);
if (ret)
return ret;
inode_lock(inode);
- ret = inode->i_sb->s_cop->get_context(inode, &ctx, sizeof(ctx));
+ ret = fscrypt_get_policy(inode, &existing_policy);
if (ret == -ENODATA) {
if (!S_ISDIR(inode->i_mode))
ret = -ENOTDIR;
@@ -86,14 +359,10 @@ int fscrypt_ioctl_set_policy(struct file *filp, const void __user *arg)
else if (!inode->i_sb->s_cop->empty_dir(inode))
ret = -ENOTEMPTY;
else
- ret = create_encryption_context_from_policy(inode,
- &policy);
- } else if (ret == sizeof(ctx) &&
- is_encryption_context_consistent_with_policy(&ctx,
- &policy)) {
- /* The file already uses the same encryption policy. */
- ret = 0;
- } else if (ret >= 0 || ret == -ERANGE) {
+ ret = set_encryption_policy(inode, &policy);
+ } else if (ret == -EINVAL ||
+ (ret == 0 && !fscrypt_policies_equal(&policy,
+ &existing_policy))) {
/* The file already uses a different encryption policy. */
ret = -EEXIST;
}
@@ -105,37 +374,57 @@ int fscrypt_ioctl_set_policy(struct file *filp, const void __user *arg)
}
EXPORT_SYMBOL(fscrypt_ioctl_set_policy);
+/* Original ioctl version; can only get the original policy version */
int fscrypt_ioctl_get_policy(struct file *filp, void __user *arg)
{
- struct inode *inode = file_inode(filp);
- struct fscrypt_context ctx;
- struct fscrypt_policy policy;
- int res;
+ union fscrypt_policy policy;
+ int err;
- if (!IS_ENCRYPTED(inode))
- return -ENODATA;
+ err = fscrypt_get_policy(file_inode(filp), &policy);
+ if (err)
+ return err;
- res = inode->i_sb->s_cop->get_context(inode, &ctx, sizeof(ctx));
- if (res < 0 && res != -ERANGE)
- return res;
- if (res != sizeof(ctx))
- return -EINVAL;
- if (ctx.format != FS_ENCRYPTION_CONTEXT_FORMAT_V1)
+ if (policy.version != FSCRYPT_POLICY_V1)
return -EINVAL;
- policy.version = 0;
- policy.contents_encryption_mode = ctx.contents_encryption_mode;
- policy.filenames_encryption_mode = ctx.filenames_encryption_mode;
- policy.flags = ctx.flags;
- memcpy(policy.master_key_descriptor, ctx.master_key_descriptor,
- FS_KEY_DESCRIPTOR_SIZE);
-
- if (copy_to_user(arg, &policy, sizeof(policy)))
+ if (copy_to_user(arg, &policy, sizeof(policy.v1)))
return -EFAULT;
return 0;
}
EXPORT_SYMBOL(fscrypt_ioctl_get_policy);
+/* Extended ioctl version; can get policies of any version */
+int fscrypt_ioctl_get_policy_ex(struct file *filp, void __user *uarg)
+{
+ struct fscrypt_get_policy_ex_arg arg;
+ union fscrypt_policy *policy = (union fscrypt_policy *)&arg.policy;
+ size_t policy_size;
+ int err;
+
+ /* arg is policy_size, then policy */
+ BUILD_BUG_ON(offsetof(typeof(arg), policy_size) != 0);
+ BUILD_BUG_ON(offsetofend(typeof(arg), policy_size) !=
+ offsetof(typeof(arg), policy));
+ BUILD_BUG_ON(sizeof(arg.policy) != sizeof(*policy));
+
+ err = fscrypt_get_policy(file_inode(filp), policy);
+ if (err)
+ return err;
+ policy_size = fscrypt_policy_size(policy);
+
+ if (copy_from_user(&arg, uarg, sizeof(arg.policy_size)))
+ return -EFAULT;
+
+ if (policy_size > arg.policy_size)
+ return -EOVERFLOW;
+ arg.policy_size = policy_size;
+
+ if (copy_to_user(uarg, &arg, sizeof(arg.policy_size) + policy_size))
+ return -EFAULT;
+ return 0;
+}
+EXPORT_SYMBOL_GPL(fscrypt_ioctl_get_policy_ex);
+
/**
* fscrypt_has_permitted_context() - is a file's encryption policy permitted
* within its directory?
@@ -157,10 +446,8 @@ EXPORT_SYMBOL(fscrypt_ioctl_get_policy);
*/
int fscrypt_has_permitted_context(struct inode *parent, struct inode *child)
{
- const struct fscrypt_operations *cops = parent->i_sb->s_cop;
- const struct fscrypt_info *parent_ci, *child_ci;
- struct fscrypt_context parent_ctx, child_ctx;
- int res;
+ union fscrypt_policy parent_policy, child_policy;
+ int err;
/* No restrictions on file types which are never encrypted */
if (!S_ISREG(child->i_mode) && !S_ISDIR(child->i_mode) &&
@@ -190,41 +477,22 @@ int fscrypt_has_permitted_context(struct inode *parent, struct inode *child)
* In any case, if an unexpected error occurs, fall back to "forbidden".
*/
- res = fscrypt_get_encryption_info(parent);
- if (res)
+ err = fscrypt_get_encryption_info(parent);
+ if (err)
return 0;
- res = fscrypt_get_encryption_info(child);
- if (res)
- return 0;
- parent_ci = READ_ONCE(parent->i_crypt_info);
- child_ci = READ_ONCE(child->i_crypt_info);
-
- if (parent_ci && child_ci) {
- return memcmp(parent_ci->ci_master_key_descriptor,
- child_ci->ci_master_key_descriptor,
- FS_KEY_DESCRIPTOR_SIZE) == 0 &&
- (parent_ci->ci_data_mode == child_ci->ci_data_mode) &&
- (parent_ci->ci_filename_mode ==
- child_ci->ci_filename_mode) &&
- (parent_ci->ci_flags == child_ci->ci_flags);
- }
-
- res = cops->get_context(parent, &parent_ctx, sizeof(parent_ctx));
- if (res != sizeof(parent_ctx))
+ err = fscrypt_get_encryption_info(child);
+ if (err)
return 0;
- res = cops->get_context(child, &child_ctx, sizeof(child_ctx));
- if (res != sizeof(child_ctx))
+ err = fscrypt_get_policy(parent, &parent_policy);
+ if (err)
return 0;
- return memcmp(parent_ctx.master_key_descriptor,
- child_ctx.master_key_descriptor,
- FS_KEY_DESCRIPTOR_SIZE) == 0 &&
- (parent_ctx.contents_encryption_mode ==
- child_ctx.contents_encryption_mode) &&
- (parent_ctx.filenames_encryption_mode ==
- child_ctx.filenames_encryption_mode) &&
- (parent_ctx.flags == child_ctx.flags);
+ err = fscrypt_get_policy(child, &child_policy);
+ if (err)
+ return 0;
+
+ return fscrypt_policies_equal(&parent_policy, &child_policy);
}
EXPORT_SYMBOL(fscrypt_has_permitted_context);
@@ -240,7 +508,8 @@ EXPORT_SYMBOL(fscrypt_has_permitted_context);
int fscrypt_inherit_context(struct inode *parent, struct inode *child,
void *fs_data, bool preload)
{
- struct fscrypt_context ctx;
+ union fscrypt_context ctx;
+ int ctxsize;
struct fscrypt_info *ci;
int res;
@@ -252,16 +521,10 @@ int fscrypt_inherit_context(struct inode *parent, struct inode *child,
if (ci == NULL)
return -ENOKEY;
- ctx.format = FS_ENCRYPTION_CONTEXT_FORMAT_V1;
- ctx.contents_encryption_mode = ci->ci_data_mode;
- ctx.filenames_encryption_mode = ci->ci_filename_mode;
- ctx.flags = ci->ci_flags;
- memcpy(ctx.master_key_descriptor, ci->ci_master_key_descriptor,
- FS_KEY_DESCRIPTOR_SIZE);
- get_random_bytes(ctx.nonce, FS_KEY_DERIVATION_NONCE_SIZE);
+ ctxsize = fscrypt_new_context_from_policy(&ctx, &ci->ci_policy);
+
BUILD_BUG_ON(sizeof(ctx) != FSCRYPT_SET_CONTEXT_MAX_SIZE);
- res = parent->i_sb->s_cop->set_context(child, &ctx,
- sizeof(ctx), fs_data);
+ res = parent->i_sb->s_cop->set_context(child, &ctx, ctxsize, fs_data);
if (res)
return res;
return preload ? fscrypt_get_encryption_info(child): 0;
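
A minimal userspace sketch (illustrative only, not part of this change) of the variable-size calling convention that fscrypt_ioctl_get_policy_ex() implements above: the caller passes the capacity of its embedded policy buffer in policy_size, gets -EOVERFLOW back if the real policy is larger, and otherwise receives the policy plus its actual size. This assumes the <linux/fscrypt.h> UAPI header from this series is installed.

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fscrypt.h>

/* Print the fscrypt policy version of the file at 'path'; returns -1 on error. */
static int print_policy_version(const char *path)
{
	struct fscrypt_get_policy_ex_arg arg;
	int fd, ret;

	fd = open(path, O_RDONLY);
	if (fd < 0)
		return -1;
	/* Tell the kernel how much room the embedded policy buffer has. */
	arg.policy_size = sizeof(arg.policy);
	ret = ioctl(fd, FS_IOC_GET_ENCRYPTION_POLICY_EX, &arg);
	close(fd);
	if (ret != 0)
		return -1;
	/* On success, policy_size holds the size that was actually returned. */
	printf("policy version %u, size %llu\n", arg.policy.version,
	       (unsigned long long)arg.policy_size);
	return 0;
}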
diff --git a/fs/direct-io.c b/fs/direct-io.c
index 5362449..0cfb4d6 100644
--- a/fs/direct-io.c
+++ b/fs/direct-io.c
@@ -23,6 +23,7 @@
#include <linux/module.h>
#include <linux/types.h>
#include <linux/fs.h>
+#include <linux/fscrypt.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/highmem.h>
@@ -37,7 +38,6 @@
#include <linux/uio.h>
#include <linux/atomic.h>
#include <linux/prefetch.h>
-#include <linux/fscrypt.h>
/*
* How many user pages to map in one call to get_user_pages(). This determines
@@ -431,6 +431,7 @@ dio_bio_alloc(struct dio *dio, struct dio_submit *sdio,
sector_t first_sector, int nr_vecs)
{
struct bio *bio;
+ struct inode *inode = dio->inode;
/*
* bio_alloc() is guaranteed to return a bio when allowed to sleep and
@@ -438,6 +439,9 @@ dio_bio_alloc(struct dio *dio, struct dio_submit *sdio,
*/
bio = bio_alloc(GFP_KERNEL, nr_vecs);
+ fscrypt_set_bio_crypt_ctx(bio, inode,
+ sdio->cur_page_fs_offset >> inode->i_blkbits,
+ GFP_KERNEL);
bio_set_dev(bio, bdev);
bio->bi_iter.bi_sector = first_sector;
bio_set_op_attrs(bio, dio->op, dio->op_flags);
@@ -452,23 +456,6 @@ dio_bio_alloc(struct dio *dio, struct dio_submit *sdio,
sdio->logical_offset_in_bio = sdio->cur_page_fs_offset;
}
-#ifdef CONFIG_PFK
-static bool is_inode_filesystem_type(const struct inode *inode,
- const char *fs_type)
-{
- if (!inode || !fs_type)
- return false;
-
- if (!inode->i_sb)
- return false;
-
- if (!inode->i_sb->s_type)
- return false;
-
- return (strcmp(inode->i_sb->s_type->name, fs_type) == 0);
-}
-#endif
-
/*
* In the AIO read case we speculatively dirty the pages before starting IO.
* During IO completion, any of these pages which happen to have been written
@@ -491,17 +478,7 @@ static inline void dio_bio_submit(struct dio *dio, struct dio_submit *sdio)
bio_set_pages_dirty(bio);
dio->bio_disk = bio->bi_disk;
-#ifdef CONFIG_PFK
- bio->bi_dio_inode = dio->inode;
-/* iv sector for security/pfe/pfk_fscrypt.c and f2fs in fs/f2fs/f2fs.h */
-#define PG_DUN_NEW(i, p) \
- (((((u64)(i)->i_ino) & 0xffffffff) << 32) | ((p) & 0xffffffff))
-
- if (is_inode_filesystem_type(dio->inode, "f2fs"))
- fscrypt_set_ice_dun(dio->inode, bio, PG_DUN_NEW(dio->inode,
- (sdio->logical_offset_in_bio >> PAGE_SHIFT)));
-#endif
if (sdio->submit_io) {
sdio->submit_io(bio, dio->inode, sdio->logical_offset_in_bio);
dio->bio_cookie = BLK_QC_T_NONE;
@@ -513,18 +490,6 @@ static inline void dio_bio_submit(struct dio *dio, struct dio_submit *sdio)
sdio->logical_offset_in_bio = 0;
}
-struct inode *dio_bio_get_inode(struct bio *bio)
-{
- struct inode *inode = NULL;
-
- if (bio == NULL)
- return NULL;
-#ifdef CONFIG_PFK
- inode = bio->bi_dio_inode;
-#endif
- return inode;
-}
-
/*
* Release any resources in case of a failure
*/
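
The crypt context attached in dio_bio_alloc() above uses the file's logical block index as the data unit number: the byte offset of the bio's first page is shifted right by the inode's block-size bits. A minimal sketch of that conversion (illustrative only; blkbits of 12 corresponds to 4 KiB blocks, and the block size is assumed to be a power of two):

#include <stdint.h>

/* Logical block index used as the per-bio crypto data unit number. */
static inline uint64_t logical_block_for_offset(uint64_t byte_offset,
						unsigned int blkbits)
{
	return byte_offset >> blkbits;	/* e.g. blkbits == 12 for 4 KiB blocks */
}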
diff --git a/fs/ext4/Kconfig b/fs/ext4/Kconfig
index 3ed1939..037358b 100644
--- a/fs/ext4/Kconfig
+++ b/fs/ext4/Kconfig
@@ -106,16 +106,10 @@
files
config EXT4_FS_ENCRYPTION
- bool "Ext4 FS Encryption"
- default n
+ bool
+ default y
depends on EXT4_ENCRYPTION
-config EXT4_FS_ICE_ENCRYPTION
- bool "Ext4 Encryption with ICE support"
- default n
- depends on EXT4_FS_ENCRYPTION
- depends on PFK
-
config EXT4_DEBUG
bool "EXT4 debugging support"
depends on EXT4_FS
diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index 734dc63..56f9de2 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -224,10 +224,7 @@ typedef struct ext4_io_end {
ssize_t size; /* size of the extent */
} ext4_io_end_t;
-#define EXT4_IO_ENCRYPTED 1
-
struct ext4_io_submit {
- unsigned int io_flags;
struct writeback_control *io_wbc;
struct bio *io_bio;
ext4_io_end_t *io_end;
@@ -1143,6 +1140,7 @@ struct ext4_inode_info {
#define EXT4_MOUNT_JOURNAL_CHECKSUM 0x800000 /* Journal checksums */
#define EXT4_MOUNT_JOURNAL_ASYNC_COMMIT 0x1000000 /* Journal Async Commit */
#define EXT4_MOUNT_WARN_ON_ERROR 0x2000000 /* Trigger WARN_ON on error */
+#define EXT4_MOUNT_INLINECRYPT 0x4000000 /* Inline encryption support */
#define EXT4_MOUNT_DELALLOC 0x8000000 /* Delalloc support */
#define EXT4_MOUNT_DATA_ERR_ABORT 0x10000000 /* Abort on file data write */
#define EXT4_MOUNT_BLOCK_VALIDITY 0x20000000 /* Block validity checking */
@@ -1669,6 +1667,7 @@ static inline bool ext4_verity_in_progress(struct inode *inode)
#define EXT4_FEATURE_COMPAT_RESIZE_INODE 0x0010
#define EXT4_FEATURE_COMPAT_DIR_INDEX 0x0020
#define EXT4_FEATURE_COMPAT_SPARSE_SUPER2 0x0200
+#define EXT4_FEATURE_COMPAT_STABLE_INODES 0x0800
#define EXT4_FEATURE_RO_COMPAT_SPARSE_SUPER 0x0001
#define EXT4_FEATURE_RO_COMPAT_LARGE_FILE 0x0002
@@ -1770,6 +1769,7 @@ EXT4_FEATURE_COMPAT_FUNCS(xattr, EXT_ATTR)
EXT4_FEATURE_COMPAT_FUNCS(resize_inode, RESIZE_INODE)
EXT4_FEATURE_COMPAT_FUNCS(dir_index, DIR_INDEX)
EXT4_FEATURE_COMPAT_FUNCS(sparse_super2, SPARSE_SUPER2)
+EXT4_FEATURE_COMPAT_FUNCS(stable_inodes, STABLE_INODES)
EXT4_FEATURE_RO_COMPAT_FUNCS(sparse_super, SPARSE_SUPER)
EXT4_FEATURE_RO_COMPAT_FUNCS(large_file, LARGE_FILE)
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 52cbf51..e8d1c11 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -1235,12 +1235,9 @@ static int ext4_block_write_begin(struct page *page, loff_t pos, unsigned len,
if (!buffer_uptodate(bh) && !buffer_delay(bh) &&
!buffer_unwritten(bh) &&
(block_start < from || block_end > to)) {
- decrypt = IS_ENCRYPTED(inode) &&
- S_ISREG(inode->i_mode) &&
- !fscrypt_using_hardware_encryption(inode);
- ll_rw_block(REQ_OP_READ, (decrypt ? REQ_NOENCRYPT : 0),
- 1, &bh);
+ ll_rw_block(REQ_OP_READ, 0, 1, &bh);
*wait_bh++ = bh;
+ decrypt = fscrypt_inode_uses_fs_layer_crypto(inode);
}
}
/*
@@ -3806,14 +3803,10 @@ static ssize_t ext4_direct_IO_write(struct kiocb *iocb, struct iov_iter *iter)
get_block_func = ext4_dio_get_block_unwritten_async;
dio_flags = DIO_LOCKING;
}
-#if defined(CONFIG_FS_ENCRYPTION)
- WARN_ON(IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode)
- && !fscrypt_using_hardware_encryption(inode));
-#endif
- ret = __blockdev_direct_IO(iocb, inode,
- inode->i_sb->s_bdev, iter,
- get_block_func,
- ext4_end_io_dio, NULL, dio_flags);
+
+ ret = __blockdev_direct_IO(iocb, inode, inode->i_sb->s_bdev, iter,
+ get_block_func, ext4_end_io_dio, NULL,
+ dio_flags);
if (ret > 0 && !overwrite && ext4_test_inode_state(inode,
EXT4_STATE_DIO_UNWRITTEN)) {
@@ -3926,11 +3919,12 @@ static ssize_t ext4_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
ssize_t ret;
int rw = iov_iter_rw(iter);
-#ifdef CONFIG_FS_ENCRYPTION
- if (IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode)
- && !fscrypt_using_hardware_encryption(inode))
- return 0;
-#endif
+ if (IS_ENABLED(CONFIG_FS_ENCRYPTION) && IS_ENCRYPTED(inode)) {
+ if (!fscrypt_inode_uses_inline_crypto(inode) ||
+ !IS_ALIGNED(iocb->ki_pos | iov_iter_alignment(iter),
+ i_blocksize(inode)))
+ return 0;
+ }
if (fsverity_active(inode))
return 0;
@@ -4097,7 +4091,6 @@ static int __ext4_block_zero_page_range(handle_t *handle,
struct inode *inode = mapping->host;
struct buffer_head *bh;
struct page *page;
- bool decrypt;
int err = 0;
page = find_or_create_page(mapping, from >> PAGE_SHIFT,
@@ -4140,14 +4133,12 @@ static int __ext4_block_zero_page_range(handle_t *handle,
if (!buffer_uptodate(bh)) {
err = -EIO;
- decrypt = S_ISREG(inode->i_mode) && IS_ENCRYPTED(inode) &&
- !fscrypt_using_hardware_encryption(inode);
- ll_rw_block(REQ_OP_READ, (decrypt ? REQ_NOENCRYPT : 0), 1, &bh);
+ ll_rw_block(REQ_OP_READ, 0, 1, &bh);
wait_on_buffer(bh);
/* Uhhuh. Read error. Complain and punt. */
if (!buffer_uptodate(bh))
goto unlock;
- if (decrypt) {
+ if (fscrypt_inode_uses_fs_layer_crypto(inode)) {
/* We expect the key to be set. */
BUG_ON(!fscrypt_has_encryption_key(inode));
BUG_ON(blocksize != PAGE_SIZE);
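
The direct-I/O check added in ext4_direct_IO() above folds two alignment tests into one: OR-ing the file position with the iov_iter alignment leaves a low-order bit set if either value is misaligned, so a single IS_ALIGNED() test covers both. A minimal sketch of the same idiom (illustrative only; assumes the block size is a power of two):

#include <stdbool.h>
#include <stdint.h>

/* True if both 'pos' and 'iov_alignment' are multiples of 'blocksize'. */
static bool dio_aligned(uint64_t pos, uint64_t iov_alignment,
			uint32_t blocksize)
{
	return ((pos | iov_alignment) & (blocksize - 1)) == 0;
}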
diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
index 1385541..96f8329 100644
--- a/fs/ext4/ioctl.c
+++ b/fs/ext4/ioctl.c
@@ -1131,8 +1131,35 @@ long ext4_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
#endif
}
case EXT4_IOC_GET_ENCRYPTION_POLICY:
+ if (!ext4_has_feature_encrypt(sb))
+ return -EOPNOTSUPP;
return fscrypt_ioctl_get_policy(filp, (void __user *)arg);
+ case FS_IOC_GET_ENCRYPTION_POLICY_EX:
+ if (!ext4_has_feature_encrypt(sb))
+ return -EOPNOTSUPP;
+ return fscrypt_ioctl_get_policy_ex(filp, (void __user *)arg);
+
+ case FS_IOC_ADD_ENCRYPTION_KEY:
+ if (!ext4_has_feature_encrypt(sb))
+ return -EOPNOTSUPP;
+ return fscrypt_ioctl_add_key(filp, (void __user *)arg);
+
+ case FS_IOC_REMOVE_ENCRYPTION_KEY:
+ if (!ext4_has_feature_encrypt(sb))
+ return -EOPNOTSUPP;
+ return fscrypt_ioctl_remove_key(filp, (void __user *)arg);
+
+ case FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS:
+ if (!ext4_has_feature_encrypt(sb))
+ return -EOPNOTSUPP;
+ return fscrypt_ioctl_remove_key_all_users(filp,
+ (void __user *)arg);
+ case FS_IOC_GET_ENCRYPTION_KEY_STATUS:
+ if (!ext4_has_feature_encrypt(sb))
+ return -EOPNOTSUPP;
+ return fscrypt_ioctl_get_key_status(filp, (void __user *)arg);
+
case EXT4_IOC_FSGETXATTR:
{
struct fsxattr fa;
@@ -1265,6 +1292,11 @@ long ext4_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
case EXT4_IOC_SET_ENCRYPTION_POLICY:
case EXT4_IOC_GET_ENCRYPTION_PWSALT:
case EXT4_IOC_GET_ENCRYPTION_POLICY:
+ case FS_IOC_GET_ENCRYPTION_POLICY_EX:
+ case FS_IOC_ADD_ENCRYPTION_KEY:
+ case FS_IOC_REMOVE_ENCRYPTION_KEY:
+ case FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS:
+ case FS_IOC_GET_ENCRYPTION_KEY_STATUS:
case EXT4_IOC_SHUTDOWN:
case FS_IOC_GETFSMAP:
case FS_IOC_ENABLE_VERITY:
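
The key-management ioctls wired up above are forwarded straight to the fscrypt core once the superblock advertises the encrypt feature. A minimal userspace sketch (illustrative only, not part of this change) of the add-key flow, assuming the <linux/fscrypt.h> UAPI header from this series:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/fscrypt.h>

/*
 * Add a raw master key to the filesystem containing 'mnt_fd' (an fd on any
 * file or directory on that filesystem). On success the kernel fills in the
 * key identifier, which callers then place in an FSCRYPT_POLICY_V2 policy.
 */
static int add_fscrypt_key(int mnt_fd, const uint8_t *raw, uint32_t raw_size,
			   uint8_t identifier[FSCRYPT_KEY_IDENTIFIER_SIZE])
{
	struct fscrypt_add_key_arg *arg;
	int ret;

	arg = calloc(1, sizeof(*arg) + raw_size);
	if (!arg)
		return -1;
	arg->key_spec.type = FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER;
	arg->raw_size = raw_size;
	memcpy(arg->raw, raw, raw_size);

	ret = ioctl(mnt_fd, FS_IOC_ADD_ENCRYPTION_KEY, arg);
	if (ret == 0)
		memcpy(identifier, arg->key_spec.u.identifier,
		       FSCRYPT_KEY_IDENTIFIER_SIZE);
	free(arg);
	return ret;
}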
diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c
index 1539ab5..85180868 100644
--- a/fs/ext4/page-io.c
+++ b/fs/ext4/page-io.c
@@ -344,8 +344,6 @@ void ext4_io_submit(struct ext4_io_submit *io)
int io_op_flags = io->io_wbc->sync_mode == WB_SYNC_ALL ?
REQ_SYNC : 0;
io->io_bio->bi_write_hint = io->io_end->inode->i_write_hint;
- if (io->io_flags & EXT4_IO_ENCRYPTED)
- io_op_flags |= REQ_NOENCRYPT;
bio_set_op_attrs(io->io_bio, REQ_OP_WRITE, io_op_flags);
submit_bio(io->io_bio);
}
@@ -355,7 +353,6 @@ void ext4_io_submit(struct ext4_io_submit *io)
void ext4_io_submit_init(struct ext4_io_submit *io,
struct writeback_control *wbc)
{
- io->io_flags = 0;
io->io_wbc = wbc;
io->io_bio = NULL;
io->io_end = NULL;
@@ -369,6 +366,7 @@ static int io_submit_init_bio(struct ext4_io_submit *io,
bio = bio_alloc(GFP_NOIO, BIO_MAX_PAGES);
if (!bio)
return -ENOMEM;
+ fscrypt_set_bio_crypt_ctx_bh(bio, bh, GFP_NOIO);
wbc_init_bio(io->io_wbc, bio);
bio->bi_iter.bi_sector = bh->b_blocknr * (bh->b_size >> 9);
bio_set_dev(bio, bh->b_bdev);
@@ -386,7 +384,8 @@ static int io_submit_add_bh(struct ext4_io_submit *io,
{
int ret;
- if (io->io_bio && bh->b_blocknr != io->io_next_block) {
+ if (io->io_bio && (bh->b_blocknr != io->io_next_block ||
+ !fscrypt_mergeable_bio_bh(io->io_bio, bh))) {
submit_and_retry:
ext4_io_submit(io);
}
@@ -472,12 +471,11 @@ int ext4_bio_write_page(struct ext4_io_submit *io,
bh = head = page_buffers(page);
- if (IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode) && nr_to_submit) {
+ if (fscrypt_inode_uses_fs_layer_crypto(inode) && nr_to_submit) {
gfp_t gfp_flags = GFP_NOFS;
retry_encrypt:
- if (!fscrypt_using_hardware_encryption(inode))
- bounce_page = fscrypt_encrypt_pagecache_blocks(page,
+ bounce_page = fscrypt_encrypt_pagecache_blocks(page,
PAGE_SIZE,0, gfp_flags);
if (IS_ERR(bounce_page)) {
ret = PTR_ERR(bounce_page);
@@ -498,8 +496,6 @@ int ext4_bio_write_page(struct ext4_io_submit *io,
do {
if (!buffer_async_write(bh))
continue;
- if (bounce_page)
- io->io_flags |= EXT4_IO_ENCRYPTED;
ret = io_submit_add_bh(io, inode, bounce_page ?: page, bh);
if (ret) {
/*
diff --git a/fs/ext4/readpage.c b/fs/ext4/readpage.c
index 72f59b2..eb9c630 100644
--- a/fs/ext4/readpage.c
+++ b/fs/ext4/readpage.c
@@ -198,7 +198,7 @@ static struct bio_post_read_ctx *get_bio_post_read_ctx(struct inode *inode,
unsigned int post_read_steps = 0;
struct bio_post_read_ctx *ctx = NULL;
- if (IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode))
+ if (fscrypt_inode_uses_fs_layer_crypto(inode))
post_read_steps |= 1 << STEP_DECRYPT;
if (ext4_need_verity(inode, first_idx))
@@ -259,6 +259,7 @@ int ext4_mpage_readpages(struct address_space *mapping,
const unsigned blkbits = inode->i_blkbits;
const unsigned blocks_per_page = PAGE_SIZE >> blkbits;
const unsigned blocksize = 1 << blkbits;
+ sector_t next_block;
sector_t block_in_file;
sector_t last_block;
sector_t last_block_in_file;
@@ -290,7 +291,8 @@ int ext4_mpage_readpages(struct address_space *mapping,
if (page_has_buffers(page))
goto confused;
- block_in_file = (sector_t)page->index << (PAGE_SHIFT - blkbits);
+ block_in_file = next_block =
+ (sector_t)page->index << (PAGE_SHIFT - blkbits);
last_block = block_in_file + nr_pages * blocks_per_page;
last_block_in_file = (ext4_readpage_limit(inode) +
blocksize - 1) >> blkbits;
@@ -390,19 +392,21 @@ int ext4_mpage_readpages(struct address_space *mapping,
* This page will go to BIO. Do we need to send this
* BIO off first?
*/
- if (bio && (last_block_in_bio != blocks[0] - 1)) {
+ if (bio && (last_block_in_bio != blocks[0] - 1 ||
+ !fscrypt_mergeable_bio(bio, inode, next_block))) {
submit_and_realloc:
ext4_submit_bio_read(bio);
bio = NULL;
}
if (bio == NULL) {
struct bio_post_read_ctx *ctx;
- unsigned int flags = 0;
bio = bio_alloc(GFP_KERNEL,
min_t(int, nr_pages, BIO_MAX_PAGES));
if (!bio)
goto set_error_page;
+ fscrypt_set_bio_crypt_ctx(bio, inode, next_block,
+ GFP_KERNEL);
ctx = get_bio_post_read_ctx(inode, bio, page->index);
if (IS_ERR(ctx)) {
bio_put(bio);
@@ -413,10 +417,8 @@ int ext4_mpage_readpages(struct address_space *mapping,
bio->bi_iter.bi_sector = blocks[0] << (blkbits - 9);
bio->bi_end_io = mpage_end_io;
bio->bi_private = ctx;
- if (is_readahead)
- flags = flags | REQ_RAHEAD;
- flags = flags | (ctx ? REQ_NOENCRYPT : 0);
- bio_set_op_attrs(bio, REQ_OP_READ, flags);
+ bio_set_op_attrs(bio, REQ_OP_READ,
+ is_readahead ? REQ_RAHEAD : 0);
}
length = first_hole << blkbits;
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index 4ff9461..efcb091 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -71,7 +71,6 @@ static void ext4_mark_recovery_complete(struct super_block *sb,
static void ext4_clear_journal_err(struct super_block *sb,
struct ext4_super_block *es);
static int ext4_sync_fs(struct super_block *sb, int wait);
-static void ext4_umount_end(struct super_block *sb, int flags);
static int ext4_remount(struct super_block *sb, int *flags, char *data);
static int ext4_statfs(struct dentry *dentry, struct kstatfs *buf);
static int ext4_unfreeze(struct super_block *sb);
@@ -1108,6 +1107,9 @@ static int ext4_drop_inode(struct inode *inode)
{
int drop = generic_drop_inode(inode);
+ if (!drop)
+ drop = fscrypt_drop_inode(inode);
+
trace_ext4_drop_inode(inode, drop);
return drop;
}
@@ -1347,9 +1349,21 @@ static bool ext4_dummy_context(struct inode *inode)
return DUMMY_ENCRYPTION_ENABLED(EXT4_SB(inode->i_sb));
}
-static inline bool ext4_is_encrypted(struct inode *inode)
+static bool ext4_has_stable_inodes(struct super_block *sb)
{
- return IS_ENCRYPTED(inode);
+ return ext4_has_feature_stable_inodes(sb);
+}
+
+static void ext4_get_ino_and_lblk_bits(struct super_block *sb,
+ int *ino_bits_ret, int *lblk_bits_ret)
+{
+ *ino_bits_ret = 8 * sizeof(EXT4_SB(sb)->s_es->s_inodes_count);
+ *lblk_bits_ret = 8 * sizeof(ext4_lblk_t);
+}
+
+static bool ext4_inline_crypt_enabled(struct super_block *sb)
+{
+ return test_opt(sb, INLINECRYPT);
}
static const struct fscrypt_operations ext4_cryptops = {
@@ -1359,7 +1373,9 @@ static const struct fscrypt_operations ext4_cryptops = {
.dummy_context = ext4_dummy_context,
.empty_dir = ext4_empty_dir,
.max_namelen = EXT4_NAME_LEN,
- .is_encrypted = ext4_is_encrypted,
+ .has_stable_inodes = ext4_has_stable_inodes,
+ .get_ino_and_lblk_bits = ext4_get_ino_and_lblk_bits,
+ .inline_crypt_enabled = ext4_inline_crypt_enabled,
};
#endif
@@ -1427,7 +1443,6 @@ static const struct super_operations ext4_sops = {
.freeze_fs = ext4_freeze,
.unfreeze_fs = ext4_unfreeze,
.statfs = ext4_statfs,
- .umount_end = ext4_umount_end,
.remount_fs = ext4_remount,
.show_options = ext4_show_options,
#ifdef CONFIG_QUOTA
@@ -1455,6 +1470,7 @@ enum {
Opt_journal_path, Opt_journal_checksum, Opt_journal_async_commit,
Opt_abort, Opt_data_journal, Opt_data_ordered, Opt_data_writeback,
Opt_data_err_abort, Opt_data_err_ignore, Opt_test_dummy_encryption,
+ Opt_inlinecrypt,
Opt_usrjquota, Opt_grpjquota, Opt_offusrjquota, Opt_offgrpjquota,
Opt_jqfmt_vfsold, Opt_jqfmt_vfsv0, Opt_jqfmt_vfsv1, Opt_quota,
Opt_noquota, Opt_barrier, Opt_nobarrier, Opt_err,
@@ -1551,6 +1567,7 @@ static const match_table_t tokens = {
{Opt_noinit_itable, "noinit_itable"},
{Opt_max_dir_size_kb, "max_dir_size_kb=%u"},
{Opt_test_dummy_encryption, "test_dummy_encryption"},
+ {Opt_inlinecrypt, "inlinecrypt"},
{Opt_nombcache, "nombcache"},
{Opt_nombcache, "no_mbcache"}, /* for backward compatibility */
{Opt_removed, "check=none"}, /* mount option from ext2/3 */
@@ -1762,6 +1779,11 @@ static const struct mount_opts {
{Opt_jqfmt_vfsv1, QFMT_VFS_V1, MOPT_QFMT},
{Opt_max_dir_size_kb, 0, MOPT_GTE0},
{Opt_test_dummy_encryption, 0, MOPT_GTE0},
+#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT
+ {Opt_inlinecrypt, EXT4_MOUNT_INLINECRYPT, MOPT_SET},
+#else
+ {Opt_inlinecrypt, EXT4_MOUNT_INLINECRYPT, MOPT_NOSUPPORT},
+#endif
{Opt_nombcache, EXT4_MOUNT_NO_MBCACHE, MOPT_SET},
{Opt_err, 0, 0}
};
@@ -5266,25 +5288,6 @@ struct ext4_mount_options {
#endif
};
-static void ext4_umount_end(struct super_block *sb, int flags)
-{
- /*
- * this is called at the end of umount(2). If there is an unclosed
- * namespace, ext4 won't do put_super() which triggers fsck in the
- * next boot.
- */
- if ((flags & MNT_FORCE) || atomic_read(&sb->s_active) > 1) {
- ext4_msg(sb, KERN_ERR,
- "errors=remount-ro for active namespaces on umount %x",
- flags);
- clear_opt(sb, ERRORS_PANIC);
- set_opt(sb, ERRORS_RO);
- /* to write the latest s_kbytes_written */
- if (!(sb->s_flags & MS_RDONLY))
- ext4_commit_super(sb, 1);
- }
-}
-
static int ext4_remount(struct super_block *sb, int *flags, char *data)
{
struct ext4_super_block *es;
diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
index 670da21e..3a02d79 100644
--- a/fs/f2fs/checkpoint.c
+++ b/fs/f2fs/checkpoint.c
@@ -1216,21 +1216,19 @@ static int block_operations(struct f2fs_sb_info *sbi)
goto retry_flush_quotas;
}
-retry_flush_nodes:
down_write(&sbi->node_write);
if (get_pages(sbi, F2FS_DIRTY_NODES)) {
up_write(&sbi->node_write);
+ up_write(&sbi->node_change);
+ f2fs_unlock_all(sbi);
atomic_inc(&sbi->wb_sync_req[NODE]);
err = f2fs_sync_node_pages(sbi, &wbc, false, FS_CP_NODE_IO);
atomic_dec(&sbi->wb_sync_req[NODE]);
- if (err) {
- up_write(&sbi->node_change);
- f2fs_unlock_all(sbi);
+ if (err)
goto out;
- }
cond_resched();
- goto retry_flush_nodes;
+ goto retry_flush_quotas;
}
/*
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index d2c0075..8ebefd7 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -317,6 +317,37 @@ static struct bio *__bio_alloc(struct f2fs_io_info *fio, int npages)
return bio;
}
+static void f2fs_set_bio_crypt_ctx(struct bio *bio, const struct inode *inode,
+ pgoff_t first_idx,
+ const struct f2fs_io_info *fio,
+ gfp_t gfp_mask)
+{
+ /*
+ * The f2fs garbage collector sets ->encrypted_page when it wants to
+ * read/write raw data without encryption.
+ */
+ if (!fio || !fio->encrypted_page)
+ fscrypt_set_bio_crypt_ctx(bio, inode, first_idx, gfp_mask);
+ else if (fscrypt_inode_should_skip_dm_default_key(inode))
+ bio_set_skip_dm_default_key(bio);
+}
+
+static bool f2fs_crypt_mergeable_bio(struct bio *bio, const struct inode *inode,
+ pgoff_t next_idx,
+ const struct f2fs_io_info *fio)
+{
+ /*
+ * The f2fs garbage collector sets ->encrypted_page when it wants to
+ * read/write raw data without encryption.
+ */
+ if (fio && fio->encrypted_page)
+ return !bio_has_crypt_ctx(bio) &&
+ (bio_should_skip_dm_default_key(bio) ==
+ fscrypt_inode_should_skip_dm_default_key(inode));
+
+ return fscrypt_mergeable_bio(bio, inode, next_idx);
+}
+
static inline void __submit_bio(struct f2fs_sb_info *sbi,
struct bio *bio, enum page_type type)
{
@@ -514,7 +545,6 @@ int f2fs_submit_page_bio(struct f2fs_io_info *fio)
struct bio *bio;
struct page *page = fio->encrypted_page ?
fio->encrypted_page : fio->page;
- struct inode *inode = fio->page->mapping->host;
if (!f2fs_is_valid_blkaddr(fio->sbi, fio->new_blkaddr,
fio->is_por ? META_POR : (__is_meta_io(fio) ?
@@ -527,15 +557,17 @@ int f2fs_submit_page_bio(struct f2fs_io_info *fio)
/* Allocate a new bio */
bio = __bio_alloc(fio, 1);
- if (f2fs_may_encrypt_bio(inode, fio))
- fscrypt_set_ice_dun(inode, bio, PG_DUN(inode, fio->page));
- fscrypt_set_ice_skip(bio, fio->encrypted_page ? 1 : 0);
+ f2fs_set_bio_crypt_ctx(bio, fio->page->mapping->host,
+ fio->page->index, fio, GFP_NOIO);
if (bio_add_page(bio, page, PAGE_SIZE, 0) < PAGE_SIZE) {
bio_put(bio);
return -EFAULT;
}
- fio->op_flags |= fio->encrypted_page ? REQ_NOENCRYPT : 0;
+
+ if (fio->io_wbc && !is_read_io(fio->op))
+ wbc_account_io(fio->io_wbc, page, PAGE_SIZE);
+
bio_set_op_attrs(bio, fio->op, fio->op_flags);
inc_page_count(fio->sbi, is_read_io(fio->op) ?
@@ -710,10 +742,6 @@ int f2fs_merge_page_bio(struct f2fs_io_info *fio)
struct bio *bio = *fio->bio;
struct page *page = fio->encrypted_page ?
fio->encrypted_page : fio->page;
- struct inode *inode;
- bool bio_encrypted;
- int bi_crypt_skip;
- u64 dun;
if (!f2fs_is_valid_blkaddr(fio->sbi, fio->new_blkaddr,
__is_meta_io(fio) ? META_GENERIC : DATA_GENERIC))
@@ -722,29 +750,20 @@ int f2fs_merge_page_bio(struct f2fs_io_info *fio)
trace_f2fs_submit_page_bio(page, fio);
f2fs_trace_ios(fio, 0);
- inode = fio->page->mapping->host;
- dun = PG_DUN(inode, fio->page);
- bi_crypt_skip = fio->encrypted_page ? 1 : 0;
- bio_encrypted = f2fs_may_encrypt_bio(inode, fio);
- fio->op_flags |= fio->encrypted_page ? REQ_NOENCRYPT : 0;
-
- if (bio && !page_is_mergeable(fio->sbi, bio, *fio->last_block,
- fio->new_blkaddr))
+ if (bio && (!page_is_mergeable(fio->sbi, bio, *fio->last_block,
+ fio->new_blkaddr) ||
+ !f2fs_crypt_mergeable_bio(bio, fio->page->mapping->host,
+ fio->page->index, fio)))
f2fs_submit_merged_ipu_write(fio->sbi, &bio, NULL);
- /* ICE support */
- if (bio && !fscrypt_mergeable_bio(bio, dun,
- bio_encrypted, bi_crypt_skip))
- f2fs_submit_merged_ipu_write(fio->sbi, &bio, NULL);
alloc_new:
if (!bio) {
bio = __bio_alloc(fio, BIO_MAX_PAGES);
+ f2fs_set_bio_crypt_ctx(bio, fio->page->mapping->host,
+ fio->page->index, fio,
+ GFP_NOIO);
bio_set_op_attrs(bio, fio->op, fio->op_flags);
- if (bio_encrypted)
- fscrypt_set_ice_dun(inode, bio, dun);
- fscrypt_set_ice_skip(bio, bi_crypt_skip);
-
add_bio_entry(fio->sbi, bio, page, fio->temp);
} else {
if (add_ipu_page(fio->sbi, &bio, page))
@@ -768,10 +787,6 @@ void f2fs_submit_page_write(struct f2fs_io_info *fio)
enum page_type btype = PAGE_TYPE_OF_BIO(fio->type);
struct f2fs_bio_info *io = sbi->write_io[btype] + fio->temp;
struct page *bio_page;
- struct inode *inode;
- bool bio_encrypted;
- int bi_crypt_skip;
- u64 dun;
f2fs_bug_on(sbi, is_read_io(fio->op));
@@ -792,25 +807,18 @@ void f2fs_submit_page_write(struct f2fs_io_info *fio)
verify_fio_blkaddr(fio);
bio_page = fio->encrypted_page ? fio->encrypted_page : fio->page;
- inode = fio->page->mapping->host;
- dun = PG_DUN(inode, fio->page);
- bi_crypt_skip = fio->encrypted_page ? 1 : 0;
- bio_encrypted = f2fs_may_encrypt_bio(inode, fio);
- fio->op_flags |= fio->encrypted_page ? REQ_NOENCRYPT : 0;
/* set submitted = true as a return value */
fio->submitted = true;
inc_page_count(sbi, WB_DATA_TYPE(bio_page));
- if (io->bio && !io_is_mergeable(sbi, io->bio, io, fio,
- io->last_block_in_bio, fio->new_blkaddr))
+ if (io->bio &&
+ (!io_is_mergeable(sbi, io->bio, io, fio, io->last_block_in_bio,
+ fio->new_blkaddr) ||
+ !f2fs_crypt_mergeable_bio(io->bio, fio->page->mapping->host,
+ fio->page->index, fio)))
__submit_merged_bio(io);
-
- /* ICE support */
- if (!fscrypt_mergeable_bio(io->bio, dun, bio_encrypted, bi_crypt_skip))
- __submit_merged_bio(io);
-
alloc_new:
if (io->bio == NULL) {
if (F2FS_IO_ALIGNED(sbi) &&
@@ -821,11 +829,9 @@ void f2fs_submit_page_write(struct f2fs_io_info *fio)
goto skip;
}
io->bio = __bio_alloc(fio, BIO_MAX_PAGES);
-
- if (bio_encrypted)
- fscrypt_set_ice_dun(inode, io->bio, dun);
- fscrypt_set_ice_skip(io->bio, bi_crypt_skip);
-
+ f2fs_set_bio_crypt_ctx(io->bio, fio->page->mapping->host,
+ fio->page->index, fio,
+ GFP_NOIO);
io->fio = *fio;
}
@@ -869,13 +875,14 @@ static struct bio *f2fs_grab_read_bio(struct inode *inode, block_t blkaddr,
bio = f2fs_bio_alloc(sbi, min_t(int, nr_pages, BIO_MAX_PAGES), false);
if (!bio)
return ERR_PTR(-ENOMEM);
+
+ f2fs_set_bio_crypt_ctx(bio, inode, first_idx, NULL, GFP_NOFS);
+
f2fs_target_device(sbi, blkaddr, bio);
bio->bi_end_io = f2fs_read_end_io;
- op_flag |= IS_ENCRYPTED(inode) ? REQ_NOENCRYPT : 0;
bio_set_op_attrs(bio, REQ_OP_READ, op_flag);
- if (f2fs_encrypted_file(inode) &&
- !fscrypt_using_hardware_encryption(inode))
+ if (fscrypt_inode_uses_fs_layer_crypto(inode))
post_read_steps |= 1 << STEP_DECRYPT;
if (f2fs_need_verity(inode, first_idx))
@@ -906,9 +913,6 @@ static int f2fs_submit_page_read(struct inode *inode, struct page *page,
if (IS_ERR(bio))
return PTR_ERR(bio);
- if (f2fs_may_encrypt_bio(inode, NULL))
- fscrypt_set_ice_dun(inode, bio, PG_DUN(inode, page));
-
/* wait for GCed page writeback via META_MAPPING */
f2fs_wait_on_block_writeback(inode, blkaddr);
@@ -1375,7 +1379,6 @@ int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map,
if (map->m_next_extent)
*map->m_next_extent = pgofs + map->m_len;
- /* for hardware encryption, but to avoid potential issue in future */
if (flag == F2FS_GET_BLOCK_DIO)
f2fs_wait_on_block_writeback_range(inode,
map->m_pblk, map->m_len);
@@ -1540,7 +1543,6 @@ int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map,
sync_out:
- /* for hardware encryption, but to avoid potential issue in future */
if (flag == F2FS_GET_BLOCK_DIO && map->m_flags & F2FS_MAP_MAPPED)
f2fs_wait_on_block_writeback_range(inode,
map->m_pblk, map->m_len);
@@ -1851,8 +1853,6 @@ static int f2fs_read_single_page(struct inode *inode, struct page *page,
sector_t last_block;
sector_t last_block_in_file;
sector_t block_nr;
- bool bio_encrypted;
- u64 dun;
int ret = 0;
block_in_file = (sector_t)page_index(page);
@@ -1917,20 +1917,14 @@ static int f2fs_read_single_page(struct inode *inode, struct page *page,
* This page will go to BIO. Do we need to send this
* BIO off first?
*/
- if (bio && !page_is_mergeable(F2FS_I_SB(inode), bio,
- *last_block_in_bio, block_nr)) {
+ if (bio && (!page_is_mergeable(F2FS_I_SB(inode), bio,
+ *last_block_in_bio, block_nr) ||
+ !f2fs_crypt_mergeable_bio(bio, inode, page->index, NULL))) {
submit_and_realloc:
__f2fs_submit_read_bio(F2FS_I_SB(inode), bio, DATA);
bio = NULL;
}
- dun = PG_DUN(inode, page);
- bio_encrypted = f2fs_may_encrypt_bio(inode, NULL);
- if (!fscrypt_mergeable_bio(bio, dun, bio_encrypted, 0)) {
- __submit_bio(F2FS_I_SB(inode), bio, DATA);
- bio = NULL;
- }
-
if (bio == NULL) {
bio = f2fs_grab_read_bio(inode, block_nr, nr_pages,
is_readahead ? REQ_RAHEAD : 0, page->index);
@@ -1939,8 +1933,6 @@ static int f2fs_read_single_page(struct inode *inode, struct page *page,
bio = NULL;
goto out;
}
- if (bio_encrypted)
- fscrypt_set_ice_dun(inode, bio, dun);
}
/*
@@ -2014,6 +2006,7 @@ static int f2fs_mpage_readpages(struct address_space *mapping,
zero_user_segment(page, 0, PAGE_SIZE);
unlock_page(page);
}
+
next_page:
if (pages)
put_page(page);
@@ -2068,10 +2061,11 @@ static int encrypt_one_page(struct f2fs_io_info *fio)
/* wait for GCed page writeback via META_MAPPING */
f2fs_wait_on_block_writeback(inode, fio->old_blkaddr);
-retry_encrypt:
- if (fscrypt_using_hardware_encryption(inode))
+ if (fscrypt_inode_uses_inline_crypto(inode))
return 0;
+retry_encrypt:
+
fio->encrypted_page = fscrypt_encrypt_pagecache_blocks(fio->page,
PAGE_SIZE, 0,
gfp_flags);
@@ -2245,7 +2239,7 @@ int f2fs_do_write_data_page(struct f2fs_io_info *fio)
f2fs_unlock_op(fio->sbi);
err = f2fs_inplace_write_data(fio);
if (err) {
- if (f2fs_encrypted_file(inode))
+ if (fscrypt_inode_uses_fs_layer_crypto(inode))
fscrypt_finalize_bounce_page(&fio->encrypted_page);
if (PageWriteback(page))
end_page_writeback(page);
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index 0bb27ec..2297267 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -137,6 +137,9 @@ struct f2fs_mount_info {
int alloc_mode; /* segment allocation policy */
int fsync_mode; /* fsync policy */
bool test_dummy_encryption; /* test dummy encryption */
+#ifdef CONFIG_FS_ENCRYPTION
+ bool inlinecrypt; /* inline encryption enabled */
+#endif
block_t unusable_cap; /* Amount of space allowed to be
* unusable when disabling checkpoint
*/
@@ -3603,9 +3606,7 @@ static inline void f2fs_set_encrypted_inode(struct inode *inode)
*/
static inline bool f2fs_post_read_required(struct inode *inode)
{
- return (f2fs_encrypted_file(inode)
- && !fscrypt_using_hardware_encryption(inode))
- || fsverity_active(inode);
+ return f2fs_encrypted_file(inode) || fsverity_active(inode);
}
#define F2FS_FEATURE_FUNCS(name, flagname) \
@@ -3734,7 +3735,13 @@ static inline bool f2fs_force_buffered_io(struct inode *inode,
struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
int rw = iov_iter_rw(iter);
- if (f2fs_post_read_required(inode))
+ if (IS_ENABLED(CONFIG_FS_ENCRYPTION) && f2fs_encrypted_file(inode)) {
+ if (!fscrypt_inode_uses_inline_crypto(inode) ||
+ !IS_ALIGNED(iocb->ki_pos | iov_iter_alignment(iter),
+ F2FS_BLKSIZE))
+ return true;
+ }
+ if (fsverity_active(inode))
return true;
if (f2fs_is_multi_device(sbi))
return true;
@@ -3757,16 +3764,6 @@ static inline bool f2fs_force_buffered_io(struct inode *inode,
return false;
}
-static inline bool f2fs_may_encrypt_bio(struct inode *inode,
- struct f2fs_io_info *fio)
-{
- if (fio && (fio->type != DATA || fio->encrypted_page))
- return false;
-
- return (f2fs_encrypted_file(inode) &&
- fscrypt_using_hardware_encryption(inode));
-}
-
#ifdef CONFIG_F2FS_FAULT_INJECTION
extern void f2fs_build_fault_attr(struct f2fs_sb_info *sbi, unsigned int rate,
unsigned int type);
diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
index c4ac231..ec22279 100644
--- a/fs/f2fs/file.c
+++ b/fs/f2fs/file.c
@@ -2267,6 +2267,49 @@ static int f2fs_ioc_get_encryption_pwsalt(struct file *filp, unsigned long arg)
return err;
}
+static int f2fs_ioc_get_encryption_policy_ex(struct file *filp,
+ unsigned long arg)
+{
+ if (!f2fs_sb_has_encrypt(F2FS_I_SB(file_inode(filp))))
+ return -EOPNOTSUPP;
+
+ return fscrypt_ioctl_get_policy_ex(filp, (void __user *)arg);
+}
+
+static int f2fs_ioc_add_encryption_key(struct file *filp, unsigned long arg)
+{
+ if (!f2fs_sb_has_encrypt(F2FS_I_SB(file_inode(filp))))
+ return -EOPNOTSUPP;
+
+ return fscrypt_ioctl_add_key(filp, (void __user *)arg);
+}
+
+static int f2fs_ioc_remove_encryption_key(struct file *filp, unsigned long arg)
+{
+ if (!f2fs_sb_has_encrypt(F2FS_I_SB(file_inode(filp))))
+ return -EOPNOTSUPP;
+
+ return fscrypt_ioctl_remove_key(filp, (void __user *)arg);
+}
+
+static int f2fs_ioc_remove_encryption_key_all_users(struct file *filp,
+ unsigned long arg)
+{
+ if (!f2fs_sb_has_encrypt(F2FS_I_SB(file_inode(filp))))
+ return -EOPNOTSUPP;
+
+ return fscrypt_ioctl_remove_key_all_users(filp, (void __user *)arg);
+}
+
+static int f2fs_ioc_get_encryption_key_status(struct file *filp,
+ unsigned long arg)
+{
+ if (!f2fs_sb_has_encrypt(F2FS_I_SB(file_inode(filp))))
+ return -EOPNOTSUPP;
+
+ return fscrypt_ioctl_get_key_status(filp, (void __user *)arg);
+}
+
static int f2fs_ioc_gc(struct file *filp, unsigned long arg)
{
struct inode *inode = file_inode(filp);
@@ -3265,6 +3308,16 @@ long f2fs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
return f2fs_ioc_get_encryption_policy(filp, arg);
case F2FS_IOC_GET_ENCRYPTION_PWSALT:
return f2fs_ioc_get_encryption_pwsalt(filp, arg);
+ case FS_IOC_GET_ENCRYPTION_POLICY_EX:
+ return f2fs_ioc_get_encryption_policy_ex(filp, arg);
+ case FS_IOC_ADD_ENCRYPTION_KEY:
+ return f2fs_ioc_add_encryption_key(filp, arg);
+ case FS_IOC_REMOVE_ENCRYPTION_KEY:
+ return f2fs_ioc_remove_encryption_key(filp, arg);
+ case FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS:
+ return f2fs_ioc_remove_encryption_key_all_users(filp, arg);
+ case FS_IOC_GET_ENCRYPTION_KEY_STATUS:
+ return f2fs_ioc_get_encryption_key_status(filp, arg);
case F2FS_IOC_GARBAGE_COLLECT:
return f2fs_ioc_gc(filp, arg);
case F2FS_IOC_GARBAGE_COLLECT_RANGE:
@@ -3396,6 +3449,11 @@ long f2fs_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
case F2FS_IOC_SET_ENCRYPTION_POLICY:
case F2FS_IOC_GET_ENCRYPTION_PWSALT:
case F2FS_IOC_GET_ENCRYPTION_POLICY:
+ case FS_IOC_GET_ENCRYPTION_POLICY_EX:
+ case FS_IOC_ADD_ENCRYPTION_KEY:
+ case FS_IOC_REMOVE_ENCRYPTION_KEY:
+ case FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS:
+ case FS_IOC_GET_ENCRYPTION_KEY_STATUS:
case F2FS_IOC_GARBAGE_COLLECT:
case F2FS_IOC_GARBAGE_COLLECT_RANGE:
case F2FS_IOC_WRITE_CHECKPOINT:
diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
index fa32ce92..110f380 100644
--- a/fs/f2fs/segment.c
+++ b/fs/f2fs/segment.c
@@ -1100,7 +1100,6 @@ static void __init_discard_policy(struct f2fs_sb_info *sbi,
} else if (discard_type == DPOLICY_FSTRIM) {
dpolicy->io_aware = false;
} else if (discard_type == DPOLICY_UMOUNT) {
- dpolicy->max_requests = UINT_MAX;
dpolicy->io_aware = false;
/* we need to issue all to keep CP_TRIMMED_FLAG */
dpolicy->granularity = 1;
@@ -1461,6 +1460,8 @@ static unsigned int __issue_discard_cmd_orderly(struct f2fs_sb_info *sbi,
return issued;
}
+static unsigned int __wait_all_discard_cmd(struct f2fs_sb_info *sbi,
+ struct discard_policy *dpolicy);
static int __issue_discard_cmd(struct f2fs_sb_info *sbi,
struct discard_policy *dpolicy)
@@ -1469,12 +1470,14 @@ static int __issue_discard_cmd(struct f2fs_sb_info *sbi,
struct list_head *pend_list;
struct discard_cmd *dc, *tmp;
struct blk_plug plug;
- int i, issued = 0;
+ int i, issued;
bool io_interrupted = false;
if (dpolicy->timeout != 0)
f2fs_update_time(sbi, dpolicy->timeout);
+retry:
+ issued = 0;
for (i = MAX_PLIST_NUM - 1; i >= 0; i--) {
if (dpolicy->timeout != 0 &&
f2fs_time_over(sbi, dpolicy->timeout))
@@ -1521,6 +1524,11 @@ static int __issue_discard_cmd(struct f2fs_sb_info *sbi,
break;
}
+ if (dpolicy->type == DPOLICY_UMOUNT && issued) {
+ __wait_all_discard_cmd(sbi, dpolicy);
+ goto retry;
+ }
+
if (!issued && io_interrupted)
issued = -1;
diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
index 9c2e10d..9be6d2c 100644
--- a/fs/f2fs/super.c
+++ b/fs/f2fs/super.c
@@ -137,6 +137,7 @@ enum {
Opt_alloc,
Opt_fsync,
Opt_test_dummy_encryption,
+ Opt_inlinecrypt,
Opt_checkpoint_disable,
Opt_checkpoint_disable_cap,
Opt_checkpoint_disable_cap_perc,
@@ -199,6 +200,7 @@ static match_table_t f2fs_tokens = {
{Opt_alloc, "alloc_mode=%s"},
{Opt_fsync, "fsync_mode=%s"},
{Opt_test_dummy_encryption, "test_dummy_encryption"},
+ {Opt_inlinecrypt, "inlinecrypt"},
{Opt_checkpoint_disable, "checkpoint=disable"},
{Opt_checkpoint_disable_cap, "checkpoint=disable:%u"},
{Opt_checkpoint_disable_cap_perc, "checkpoint=disable:%u%%"},
@@ -785,6 +787,13 @@ static int parse_options(struct super_block *sb, char *options)
f2fs_info(sbi, "Test dummy encryption mount option ignored");
#endif
break;
+ case Opt_inlinecrypt:
+#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT
+ F2FS_OPTION(sbi).inlinecrypt = true;
+#else
+ f2fs_info(sbi, "inline encryption not supported");
+#endif
+ break;
case Opt_checkpoint_disable_cap_perc:
if (args->from && match_int(args, &arg))
return -EINVAL;
@@ -965,6 +974,8 @@ static int f2fs_drop_inode(struct inode *inode)
return 0;
}
ret = generic_drop_inode(inode);
+ if (!ret)
+ ret = fscrypt_drop_inode(inode);
trace_f2fs_drop_inode(inode, ret);
return ret;
}
@@ -1064,27 +1075,6 @@ static void destroy_device_list(struct f2fs_sb_info *sbi)
kvfree(sbi->devs);
}
-static void f2fs_umount_end(struct super_block *sb, int flags)
-{
- /*
- * this is called at the end of umount(2). If there is an unclosed
- * namespace, f2fs won't do put_super() which triggers fsck in the
- * next boot.
- */
- if ((flags & MNT_FORCE) || atomic_read(&sb->s_active) > 1) {
- /* to write the latest kbytes_written */
- if (!(sb->s_flags & MS_RDONLY)) {
- struct f2fs_sb_info *sbi = F2FS_SB(sb);
- struct cp_control cpc = {
- .reason = CP_UMOUNT,
- };
- mutex_lock(&sbi->gc_mutex);
- f2fs_write_checkpoint(F2FS_SB(sb), &cpc);
- mutex_unlock(&sbi->gc_mutex);
- }
- }
-}
-
static void f2fs_put_super(struct super_block *sb)
{
struct f2fs_sb_info *sbi = F2FS_SB(sb);
@@ -1473,6 +1463,8 @@ static int f2fs_show_options(struct seq_file *seq, struct dentry *root)
#ifdef CONFIG_FS_ENCRYPTION
if (F2FS_OPTION(sbi).test_dummy_encryption)
seq_puts(seq, ",test_dummy_encryption");
+ if (F2FS_OPTION(sbi).inlinecrypt)
+ seq_puts(seq, ",inlinecrypt");
#endif
if (F2FS_OPTION(sbi).alloc_mode == ALLOC_MODE_DEFAULT)
@@ -1501,6 +1493,9 @@ static void default_options(struct f2fs_sb_info *sbi)
F2FS_OPTION(sbi).alloc_mode = ALLOC_MODE_DEFAULT;
F2FS_OPTION(sbi).fsync_mode = FSYNC_MODE_POSIX;
F2FS_OPTION(sbi).test_dummy_encryption = false;
+#ifdef CONFIG_FS_ENCRYPTION
+ F2FS_OPTION(sbi).inlinecrypt = false;
+#endif
F2FS_OPTION(sbi).s_resuid = make_kuid(&init_user_ns, F2FS_DEF_RESUID);
F2FS_OPTION(sbi).s_resgid = make_kgid(&init_user_ns, F2FS_DEF_RESGID);
@@ -2303,7 +2298,6 @@ static const struct super_operations f2fs_sops = {
#endif
.evict_inode = f2fs_evict_inode,
.put_super = f2fs_put_super,
- .umount_end = f2fs_umount_end,
.sync_fs = f2fs_sync_fs,
.freeze_fs = f2fs_freeze,
.unfreeze_fs = f2fs_unfreeze,
@@ -2344,19 +2338,54 @@ static bool f2fs_dummy_context(struct inode *inode)
return DUMMY_ENCRYPTION_ENABLED(F2FS_I_SB(inode));
}
-static inline bool f2fs_is_encrypted(struct inode *inode)
+static bool f2fs_has_stable_inodes(struct super_block *sb)
{
- return f2fs_encrypted_file(inode);
+ return true;
+}
+
+static void f2fs_get_ino_and_lblk_bits(struct super_block *sb,
+ int *ino_bits_ret, int *lblk_bits_ret)
+{
+ *ino_bits_ret = 8 * sizeof(nid_t);
+ *lblk_bits_ret = 8 * sizeof(block_t);
+}
+
+static bool f2fs_inline_crypt_enabled(struct super_block *sb)
+{
+ return F2FS_OPTION(F2FS_SB(sb)).inlinecrypt;
+}
+
+static int f2fs_get_num_devices(struct super_block *sb)
+{
+ struct f2fs_sb_info *sbi = F2FS_SB(sb);
+
+ if (f2fs_is_multi_device(sbi))
+ return sbi->s_ndevs;
+ return 1;
+}
+
+static void f2fs_get_devices(struct super_block *sb,
+ struct request_queue **devs)
+{
+ struct f2fs_sb_info *sbi = F2FS_SB(sb);
+ int i;
+
+ for (i = 0; i < sbi->s_ndevs; i++)
+ devs[i] = bdev_get_queue(FDEV(i).bdev);
}
static const struct fscrypt_operations f2fs_cryptops = {
- .key_prefix = "f2fs:",
- .get_context = f2fs_get_context,
- .set_context = f2fs_set_context,
- .dummy_context = f2fs_dummy_context,
- .empty_dir = f2fs_empty_dir,
- .max_namelen = F2FS_NAME_LEN,
- .is_encrypted = f2fs_is_encrypted,
+ .key_prefix = "f2fs:",
+ .get_context = f2fs_get_context,
+ .set_context = f2fs_set_context,
+ .dummy_context = f2fs_dummy_context,
+ .empty_dir = f2fs_empty_dir,
+ .max_namelen = F2FS_NAME_LEN,
+ .has_stable_inodes = f2fs_has_stable_inodes,
+ .get_ino_and_lblk_bits = f2fs_get_ino_and_lblk_bits,
+ .inline_crypt_enabled = f2fs_inline_crypt_enabled,
+ .get_num_devices = f2fs_get_num_devices,
+ .get_devices = f2fs_get_devices,
};
#endif
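
Besides exposing inline-crypt support, the has_stable_inodes and get_ino_and_lblk_bits hooks added above let the fscrypt core decide whether IV_INO_LBLK_64 policies may be used on a filesystem: inode and logical-block numbers must be stable and fit in 32 bits so both can be packed into a 64-bit IV. A rough kernel-side sketch of that check (illustrative only; the actual logic lives in the fscrypt core, not in f2fs):

#include <linux/fs.h>
#include <linux/fscrypt.h>

/* Can IV_INO_LBLK_64 policies be used on this superblock? (sketch) */
static bool sb_supports_iv_ino_lblk_64(struct super_block *sb)
{
	const struct fscrypt_operations *cop = sb->s_cop;
	int ino_bits, lblk_bits;

	if (!cop->has_stable_inodes || !cop->has_stable_inodes(sb))
		return false;
	if (!cop->get_ino_and_lblk_bits)
		return false;
	cop->get_ino_and_lblk_bits(sb, &ino_bits, &lblk_bits);
	return ino_bits <= 32 && lblk_bits <= 32;
}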
diff --git a/fs/iomap.c b/fs/iomap.c
index 03edf62..5c77dbc 100644
--- a/fs/iomap.c
+++ b/fs/iomap.c
@@ -14,6 +14,7 @@
#include <linux/module.h>
#include <linux/compiler.h>
#include <linux/fs.h>
+#include <linux/fscrypt.h>
#include <linux/iomap.h>
#include <linux/uaccess.h>
#include <linux/gfp.h>
@@ -1580,10 +1581,13 @@ static blk_qc_t
iomap_dio_zero(struct iomap_dio *dio, struct iomap *iomap, loff_t pos,
unsigned len)
{
+ struct inode *inode = file_inode(dio->iocb->ki_filp);
struct page *page = ZERO_PAGE(0);
struct bio *bio;
bio = bio_alloc(GFP_KERNEL, 1);
+ fscrypt_set_bio_crypt_ctx(bio, inode, pos >> inode->i_blkbits,
+ GFP_KERNEL);
bio_set_dev(bio, iomap->bdev);
bio->bi_iter.bi_sector = iomap_sector(iomap, pos);
bio->bi_private = dio;
@@ -1664,6 +1668,8 @@ iomap_dio_bio_actor(struct inode *inode, loff_t pos, loff_t length,
}
bio = bio_alloc(GFP_KERNEL, nr_pages);
+ fscrypt_set_bio_crypt_ctx(bio, inode, pos >> inode->i_blkbits,
+ GFP_KERNEL);
bio_set_dev(bio, iomap->bdev);
bio->bi_iter.bi_sector = iomap_sector(iomap, pos);
bio->bi_write_hint = dio->iocb->ki_hint;
diff --git a/fs/namei.c b/fs/namei.c
index af523d9..c99cb21 100644
--- a/fs/namei.c
+++ b/fs/namei.c
@@ -3010,11 +3010,6 @@ int vfs_create2(struct vfsmount *mnt, struct inode *dir, struct dentry *dentry,
if (error)
return error;
error = dir->i_op->create(dir, dentry, mode, want_excl);
- if (error)
- return error;
- error = security_inode_post_create(dir, dentry, mode);
- if (error)
- return error;
if (!error)
fsnotify_create(dir, dentry);
return error;
@@ -3839,11 +3834,6 @@ int vfs_mknod2(struct vfsmount *mnt, struct inode *dir, struct dentry *dentry, u
return error;
error = dir->i_op->mknod(dir, dentry, mode, dev);
- if (error)
- return error;
- error = security_inode_post_create(dir, dentry, mode);
- if (error)
- return error;
if (!error)
fsnotify_create(dir, dentry);
return error;
diff --git a/fs/namespace.c b/fs/namespace.c
index 7899153..3a93384 100644
--- a/fs/namespace.c
+++ b/fs/namespace.c
@@ -21,7 +21,6 @@
#include <linux/fs_struct.h> /* get_fs_root et.al. */
#include <linux/fsnotify.h> /* fsnotify_vfsmount_delete */
#include <linux/uaccess.h>
-#include <linux/file.h>
#include <linux/proc_ns.h>
#include <linux/magic.h>
#include <linux/bootmem.h>
@@ -1135,12 +1134,6 @@ static void delayed_mntput(struct work_struct *unused)
}
static DECLARE_DELAYED_WORK(delayed_mntput_work, delayed_mntput);
-void flush_delayed_mntput_wait(void)
-{
- delayed_mntput(NULL);
- flush_delayed_work(&delayed_mntput_work);
-}
-
static void mntput_no_expire(struct mount *mnt)
{
rcu_read_lock();
@@ -1657,7 +1650,6 @@ int ksys_umount(char __user *name, int flags)
struct mount *mnt;
int retval;
int lookup_flags = 0;
- bool user_request = !(current->flags & PF_KTHREAD);
if (flags & ~(MNT_FORCE | MNT_DETACH | MNT_EXPIRE | UMOUNT_NOFOLLOW))
return -EINVAL;
@@ -1683,36 +1675,12 @@ int ksys_umount(char __user *name, int flags)
if (flags & MNT_FORCE && !capable(CAP_SYS_ADMIN))
goto dput_and_out;
- /* flush delayed_fput to put mnt_count */
- if (user_request)
- flush_delayed_fput_wait();
-
retval = do_umount(mnt, flags);
dput_and_out:
/* we mustn't call path_put() as that would clear mnt_expiry_mark */
dput(path.dentry);
- if (user_request && (!retval || (flags & MNT_FORCE))) {
- /* filesystem needs to handle unclosed namespaces */
- if (mnt->mnt.mnt_sb->s_op->umount_end)
- mnt->mnt.mnt_sb->s_op->umount_end(mnt->mnt.mnt_sb,
- flags);
- }
mntput_no_expire(mnt);
- if (!user_request)
- goto out;
-
- if (!retval) {
- /*
- * If the last delayed_fput() is called during do_umount()
- * and makes mnt_count zero, we need to guarantee to register
- * delayed_mntput by waiting for delayed_fput work again.
- */
- flush_delayed_fput_wait();
-
- /* flush delayed_mntput_work to put sb->s_active */
- flush_delayed_mntput_wait();
- }
out:
return retval;
}
diff --git a/fs/sdcardfs/main.c b/fs/sdcardfs/main.c
index 4c7b7fa..cb668f7 100644
--- a/fs/sdcardfs/main.c
+++ b/fs/sdcardfs/main.c
@@ -19,6 +19,7 @@
*/
#include "sdcardfs.h"
+#include <linux/fscrypt.h>
#include <linux/module.h>
#include <linux/types.h>
#include <linux/parser.h>
@@ -375,6 +376,9 @@ static int sdcardfs_read_super(struct vfsmount *mnt, struct super_block *sb,
list_add(&sb_info->list, &sdcardfs_super_list);
mutex_unlock(&sdcardfs_super_list_lock);
+ sb_info->fscrypt_nb.notifier_call = sdcardfs_on_fscrypt_key_removed;
+ fscrypt_register_key_removal_notifier(&sb_info->fscrypt_nb);
+
if (!silent)
pr_info("sdcardfs: mounted on top of %s type %s\n",
dev_name, lower_sb->s_type->name);
@@ -445,6 +449,9 @@ void sdcardfs_kill_sb(struct super_block *sb)
if (sb->s_magic == SDCARDFS_SUPER_MAGIC && sb->s_fs_info) {
sbi = SDCARDFS_SB(sb);
+
+ fscrypt_unregister_key_removal_notifier(&sbi->fscrypt_nb);
+
mutex_lock(&sdcardfs_super_list_lock);
list_del(&sbi->list);
mutex_unlock(&sdcardfs_super_list_lock);
diff --git a/fs/sdcardfs/sdcardfs.h b/fs/sdcardfs/sdcardfs.h
index 9ccf62c..401445e 100644
--- a/fs/sdcardfs/sdcardfs.h
+++ b/fs/sdcardfs/sdcardfs.h
@@ -151,6 +151,8 @@ extern struct inode *sdcardfs_iget(struct super_block *sb,
struct inode *lower_inode, userid_t id);
extern int sdcardfs_interpose(struct dentry *dentry, struct super_block *sb,
struct path *lower_path, userid_t id);
+extern int sdcardfs_on_fscrypt_key_removed(struct notifier_block *nb,
+ unsigned long action, void *data);
/* file private data */
struct sdcardfs_file_info {
@@ -224,6 +226,7 @@ struct sdcardfs_sb_info {
struct path obbpath;
void *pkgl_id;
struct list_head list;
+ struct notifier_block fscrypt_nb;
};
/*
diff --git a/fs/sdcardfs/super.c b/fs/sdcardfs/super.c
index 1240ef2f..b2ba09a 100644
--- a/fs/sdcardfs/super.c
+++ b/fs/sdcardfs/super.c
@@ -319,6 +319,23 @@ static int sdcardfs_show_options(struct vfsmount *mnt, struct seq_file *m,
return 0;
};
+int sdcardfs_on_fscrypt_key_removed(struct notifier_block *nb,
+ unsigned long action, void *data)
+{
+ struct sdcardfs_sb_info *sbi = container_of(nb, struct sdcardfs_sb_info,
+ fscrypt_nb);
+
+ /*
+ * Evict any unused sdcardfs dentries (and hence any unused sdcardfs
+ * inodes, since sdcardfs doesn't cache unpinned inodes by themselves)
+ * so that the lower filesystem's encrypted inodes can be evicted.
+ * This is needed to make the FS_IOC_REMOVE_ENCRYPTION_KEY ioctl
+ * properly "lock" the files underneath the sdcardfs mount.
+ */
+ shrink_dcache_sb(sbi->sb);
+ return NOTIFY_OK;
+}
+
const struct super_operations sdcardfs_sops = {
.put_super = sdcardfs_put_super,
.statfs = sdcardfs_statfs,
diff --git a/fs/super.c b/fs/super.c
index b02e086..7fa6fe5 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -32,6 +32,7 @@
#include <linux/backing-dev.h>
#include <linux/rculist_bl.h>
#include <linux/cleancache.h>
+#include <linux/fscrypt.h>
#include <linux/fsnotify.h>
#include <linux/lockdep.h>
#include <linux/user_namespace.h>
@@ -288,6 +289,7 @@ static void __put_super(struct super_block *s)
WARN_ON(s->s_inode_lru.node);
WARN_ON(!list_empty(&s->s_mounts));
security_sb_free(s);
+ fscrypt_sb_free(s);
put_user_ns(s->s_user_ns);
kfree(s->s_subtype);
call_rcu(&s->rcu, destroy_super_rcu);
diff --git a/fs/ubifs/ioctl.c b/fs/ubifs/ioctl.c
index 0f9c362..71c3440 100644
--- a/fs/ubifs/ioctl.c
+++ b/fs/ubifs/ioctl.c
@@ -205,6 +205,21 @@ long ubifs_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
#endif
}
+ case FS_IOC_GET_ENCRYPTION_POLICY_EX:
+ return fscrypt_ioctl_get_policy_ex(file, (void __user *)arg);
+
+ case FS_IOC_ADD_ENCRYPTION_KEY:
+ return fscrypt_ioctl_add_key(file, (void __user *)arg);
+
+ case FS_IOC_REMOVE_ENCRYPTION_KEY:
+ return fscrypt_ioctl_remove_key(file, (void __user *)arg);
+
+ case FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS:
+ return fscrypt_ioctl_remove_key_all_users(file,
+ (void __user *)arg);
+ case FS_IOC_GET_ENCRYPTION_KEY_STATUS:
+ return fscrypt_ioctl_get_key_status(file, (void __user *)arg);
+
default:
return -ENOTTY;
}
@@ -222,6 +237,11 @@ long ubifs_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
break;
case FS_IOC_SET_ENCRYPTION_POLICY:
case FS_IOC_GET_ENCRYPTION_POLICY:
+ case FS_IOC_GET_ENCRYPTION_POLICY_EX:
+ case FS_IOC_ADD_ENCRYPTION_KEY:
+ case FS_IOC_REMOVE_ENCRYPTION_KEY:
+ case FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS:
+ case FS_IOC_GET_ENCRYPTION_KEY_STATUS:
break;
default:
return -ENOIOCTLCMD;
diff --git a/fs/ubifs/super.c b/fs/ubifs/super.c
index ebb9e84..e276b54 100644
--- a/fs/ubifs/super.c
+++ b/fs/ubifs/super.c
@@ -336,6 +336,16 @@ static int ubifs_write_inode(struct inode *inode, struct writeback_control *wbc)
return err;
}
+static int ubifs_drop_inode(struct inode *inode)
+{
+ int drop = generic_drop_inode(inode);
+
+ if (!drop)
+ drop = fscrypt_drop_inode(inode);
+
+ return drop;
+}
+
static void ubifs_evict_inode(struct inode *inode)
{
int err;
@@ -1925,6 +1935,7 @@ const struct super_operations ubifs_super_operations = {
.destroy_inode = ubifs_destroy_inode,
.put_super = ubifs_put_super,
.write_inode = ubifs_write_inode,
+ .drop_inode = ubifs_drop_inode,
.evict_inode = ubifs_evict_inode,
.statfs = ubifs_statfs,
.dirty_inode = ubifs_dirty_inode,
diff --git a/gen_headers_arm.bp b/gen_headers_arm.bp
index a8f40a7..65319b0 100644
--- a/gen_headers_arm.bp
+++ b/gen_headers_arm.bp
@@ -638,6 +638,7 @@
"linux/xilinx-v4l2-controls.h",
"linux/zorro.h",
"linux/zorro_ids.h",
+ "linux/fscrypt.h",
"media/msm_cvp_private.h",
"media/msm_cvp_utils.h",
"media/msm_media_info.h",
@@ -970,6 +971,9 @@
"media/cam_req_mgr.h",
"media/cam_sensor.h",
"media/cam_sync.h",
+ "media/cam_tfe.h",
+ "media/cam_ope.h",
+ "media/cam_isp_tfe.h",
]
genrule {
diff --git a/gen_headers_arm64.bp b/gen_headers_arm64.bp
index 0b9d2ba..3e1627b 100644
--- a/gen_headers_arm64.bp
+++ b/gen_headers_arm64.bp
@@ -632,6 +632,7 @@
"linux/xilinx-v4l2-controls.h",
"linux/zorro.h",
"linux/zorro_ids.h",
+ "linux/fscrypt.h",
"media/msm_cvp_private.h",
"media/msm_cvp_utils.h",
"media/msm_media_info.h",
@@ -964,6 +965,9 @@
"media/cam_req_mgr.h",
"media/cam_sensor.h",
"media/cam_sync.h",
+ "media/cam_tfe.h",
+ "media/cam_ope.h",
+ "media/cam_isp_tfe.h",
]
genrule {
diff --git a/include/dt-bindings/clock/mdss-10nm-pll-clk.h b/include/dt-bindings/clock/mdss-10nm-pll-clk.h
index bbc6ed9..2ab0d67 100644
--- a/include/dt-bindings/clock/mdss-10nm-pll-clk.h
+++ b/include/dt-bindings/clock/mdss-10nm-pll-clk.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
- * Copyright (c) 2017-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2017-2020, The Linux Foundation. All rights reserved.
*/
#ifndef __MDSS_10NM_PLL_CLK_H
@@ -45,10 +45,13 @@
#define SHADOW_PCLK_SRC_1_CLK 35
/* DP PLL clocks */
-#define DP_VCO_CLK 0
-#define DP_LINK_CLK_DIVSEL_TEN 1
+#define DP_VCO_CLK 0
+#define DP_PHY_PLL_LINK_CLK 1
#define DP_VCO_DIVIDED_TWO_CLK_SRC 2
#define DP_VCO_DIVIDED_FOUR_CLK_SRC 3
#define DP_VCO_DIVIDED_SIX_CLK_SRC 4
-#define DP_VCO_DIVIDED_CLK_SRC_MUX 5
+#define DP_PHY_PLL_VCO_DIV_CLK 5
+
+#define DP_LINK_CLK_DIVSEL_TEN 1
+#define DP_VCO_DIVIDED_CLK_SRC_MUX 5
#endif
diff --git a/include/dt-bindings/clock/mdss-7nm-pll-clk.h b/include/dt-bindings/clock/mdss-7nm-pll-clk.h
index 79820b4..bb146d7 100644
--- a/include/dt-bindings/clock/mdss-7nm-pll-clk.h
+++ b/include/dt-bindings/clock/mdss-7nm-pll-clk.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
- * Copyright (c) 2017-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2017-2020, The Linux Foundation. All rights reserved.
*/
#ifndef __MDSS_7NM_PLL_CLK_H
@@ -25,24 +25,36 @@
#define SHADOW_POST_VCO_DIV_0_CLK 15
#define SHADOW_PCLK_SRC_MUX_0_CLK 16
#define SHADOW_PCLK_SRC_0_CLK 17
-#define VCO_CLK_1 18
-#define PLL_OUT_DIV_1_CLK 19
-#define BITCLK_SRC_1_CLK 20
-#define BYTECLK_SRC_1_CLK 21
-#define POST_BIT_DIV_1_CLK 22
-#define POST_VCO_DIV_1_CLK 23
-#define BYTECLK_MUX_1_CLK 24
-#define PCLK_SRC_MUX_1_CLK 25
-#define PCLK_SRC_1_CLK 26
-#define PCLK_MUX_1_CLK 27
-#define SHADOW_VCO_CLK_1 28
-#define SHADOW_PLL_OUT_DIV_1_CLK 29
-#define SHADOW_BITCLK_SRC_1_CLK 30
-#define SHADOW_BYTECLK_SRC_1_CLK 31
-#define SHADOW_POST_BIT_DIV_1_CLK 32
-#define SHADOW_POST_VCO_DIV_1_CLK 33
-#define SHADOW_PCLK_SRC_MUX_1_CLK 34
-#define SHADOW_PCLK_SRC_1_CLK 35
+/* CPHY clocks for DSI-0 PLL */
+#define CPHY_BYTECLK_SRC_0_CLK 18
+#define POST_VCO_DIV3_5_0_CLK 19
+#define CPHY_PCLK_SRC_MUX_0_CLK 20
+#define CPHY_PCLK_SRC_0_CLK 21
+
+#define VCO_CLK_1 22
+#define PLL_OUT_DIV_1_CLK 23
+#define BITCLK_SRC_1_CLK 24
+#define BYTECLK_SRC_1_CLK 25
+#define POST_BIT_DIV_1_CLK 26
+#define POST_VCO_DIV_1_CLK 27
+#define BYTECLK_MUX_1_CLK 28
+#define PCLK_SRC_MUX_1_CLK 29
+#define PCLK_SRC_1_CLK 30
+#define PCLK_MUX_1_CLK 31
+#define SHADOW_VCO_CLK_1 32
+#define SHADOW_PLL_OUT_DIV_1_CLK 33
+#define SHADOW_BITCLK_SRC_1_CLK 34
+#define SHADOW_BYTECLK_SRC_1_CLK 35
+#define SHADOW_POST_BIT_DIV_1_CLK 36
+#define SHADOW_POST_VCO_DIV_1_CLK 37
+#define SHADOW_PCLK_SRC_MUX_1_CLK 38
+#define SHADOW_PCLK_SRC_1_CLK 39
+/* CPHY clocks for DSI-1 PLL */
+#define CPHY_BYTECLK_SRC_1_CLK 40
+#define POST_VCO_DIV3_5_1_CLK 41
+#define CPHY_PCLK_SRC_MUX_1_CLK 42
+#define CPHY_PCLK_SRC_1_CLK 43
+
/* DP PLL clocks */
#define DP_VCO_CLK 0
diff --git a/include/dt-bindings/clock/qcom,rpmcc.h b/include/dt-bindings/clock/qcom,rpmcc.h
index bdef14d..f0e0a66 100644
--- a/include/dt-bindings/clock/qcom,rpmcc.h
+++ b/include/dt-bindings/clock/qcom,rpmcc.h
@@ -127,88 +127,92 @@
#define RPM_SMD_QPIC_A_CLK 75
#define RPM_SMD_CE1_CLK 76
#define RPM_SMD_CE1_A_CLK 77
-#define RPM_SMD_BIMC_GPU_CLK 78
-#define RPM_SMD_BIMC_GPU_A_CLK 79
-#define RPM_SMD_LN_BB_CLK 80
-#define RPM_SMD_LN_BB_CLK_A 81
-#define RPM_SMD_LN_BB_CLK_PIN 82
-#define RPM_SMD_LN_BB_CLK_A_PIN 83
-#define RPM_SMD_RF_CLK3 84
-#define RPM_SMD_RF_CLK3_A 85
-#define RPM_SMD_RF_CLK3_PIN 86
-#define RPM_SMD_RF_CLK3_A_PIN 87
-#define RPM_SMD_LN_BB_CLK1 88
-#define RPM_SMD_LN_BB_CLK1_A 89
-#define RPM_SMD_LN_BB_CLK2 90
-#define RPM_SMD_LN_BB_CLK2_A 91
-#define RPM_SMD_LN_BB_CLK3 92
-#define RPM_SMD_LN_BB_CLK3_A 93
-#define RPM_SMD_MMAXI_CLK 94
-#define RPM_SMD_MMAXI_A_CLK 95
-#define RPM_SMD_AGGR1_NOC_CLK 96
-#define RPM_SMD_AGGR1_NOC_A_CLK 97
-#define RPM_SMD_AGGR2_NOC_CLK 98
-#define RPM_SMD_AGGR2_NOC_A_CLK 99
-#define PNOC_MSMBUS_CLK 100
-#define PNOC_MSMBUS_A_CLK 101
-#define PNOC_KEEPALIVE_A_CLK 102
-#define SNOC_MSMBUS_CLK 103
-#define SNOC_MSMBUS_A_CLK 104
-#define BIMC_MSMBUS_CLK 105
-#define BIMC_MSMBUS_A_CLK 106
-#define PNOC_USB_CLK 107
-#define PNOC_USB_A_CLK 108
-#define SNOC_USB_CLK 109
-#define SNOC_USB_A_CLK 110
-#define BIMC_USB_CLK 111
-#define BIMC_USB_A_CLK 112
-#define SNOC_WCNSS_A_CLK 113
-#define BIMC_WCNSS_A_CLK 114
-#define MCD_CE1_CLK 115
-#define QCEDEV_CE1_CLK 116
-#define QCRYPTO_CE1_CLK 117
-#define QSEECOM_CE1_CLK 118
-#define SCM_CE1_CLK 119
-#define CXO_SMD_OTG_CLK 120
-#define CXO_SMD_LPM_CLK 121
-#define CXO_SMD_PIL_PRONTO_CLK 122
-#define CXO_SMD_PIL_MSS_CLK 123
-#define CXO_SMD_WLAN_CLK 124
-#define CXO_SMD_PIL_LPASS_CLK 125
-#define CXO_SMD_PIL_CDSP_CLK 126
-#define CNOC_MSMBUS_CLK 127
-#define CNOC_MSMBUS_A_CLK 128
-#define CNOC_KEEPALIVE_A_CLK 129
-#define SNOC_KEEPALIVE_A_CLK 130
-#define CPP_MMNRT_MSMBUS_CLK 131
-#define CPP_MMNRT_MSMBUS_A_CLK 132
-#define JPEG_MMNRT_MSMBUS_CLK 133
-#define JPEG_MMNRT_MSMBUS_A_CLK 134
-#define VENUS_MMNRT_MSMBUS_CLK 135
-#define VENUS_MMNRT_MSMBUS_A_CLK 136
-#define ARM9_MMNRT_MSMBUS_CLK 137
-#define ARM9_MMNRT_MSMBUS_A_CLK 138
-#define MDP_MMRT_MSMBUS_CLK 139
-#define MDP_MMRT_MSMBUS_A_CLK 140
-#define VFE_MMRT_MSMBUS_CLK 141
-#define VFE_MMRT_MSMBUS_A_CLK 142
-#define QUP0_MSMBUS_SNOC_PERIPH_CLK 143
-#define QUP0_MSMBUS_SNOC_PERIPH_A_CLK 144
-#define QUP1_MSMBUS_SNOC_PERIPH_CLK 145
-#define QUP1_MSMBUS_SNOC_PERIPH_A_CLK 146
-#define QUP2_MSMBUS_SNOC_PERIPH_CLK 147
-#define QUP2_MSMBUS_SNOC_PERIPH_A_CLK 148
-#define DAP_MSMBUS_SNOC_PERIPH_CLK 149
-#define DAP_MSMBUS_SNOC_PERIPH_A_CLK 150
-#define SDC1_MSMBUS_SNOC_PERIPH_CLK 151
-#define SDC1_MSMBUS_SNOC_PERIPH_A_CLK 152
-#define SDC2_MSMBUS_SNOC_PERIPH_CLK 153
-#define SDC2_MSMBUS_SNOC_PERIPH_A_CLK 154
-#define CRYPTO_MSMBUS_SNOC_PERIPH_CLK 155
-#define CRYPTO_MSMBUS_SNOC_PERIPH_A_CLK 156
-#define SDC1_SLV_MSMBUS_SNOC_PERIPH_CLK 157
-#define SDC1_SLV_MSMBUS_SNOC_PERIPH_A_CLK 158
-#define SDC2_SLV_MSMBUS_SNOC_PERIPH_CLK 159
-#define SDC2_SLV_MSMBUS_SNOC_PERIPH_A_CLK 160
+#define RPM_SMD_HWKM_CLK 78
+#define RPM_SMD_HWKM_A_CLK 79
+#define RPM_SMD_PKA_CLK 80
+#define RPM_SMD_PKA_A_CLK 81
+#define RPM_SMD_BIMC_GPU_CLK 82
+#define RPM_SMD_BIMC_GPU_A_CLK 83
+#define RPM_SMD_LN_BB_CLK 84
+#define RPM_SMD_LN_BB_CLK_A 85
+#define RPM_SMD_LN_BB_CLK_PIN 86
+#define RPM_SMD_LN_BB_CLK_A_PIN 87
+#define RPM_SMD_RF_CLK3 88
+#define RPM_SMD_RF_CLK3_A 89
+#define RPM_SMD_RF_CLK3_PIN 90
+#define RPM_SMD_RF_CLK3_A_PIN 91
+#define RPM_SMD_LN_BB_CLK1 92
+#define RPM_SMD_LN_BB_CLK1_A 93
+#define RPM_SMD_LN_BB_CLK2 94
+#define RPM_SMD_LN_BB_CLK2_A 95
+#define RPM_SMD_LN_BB_CLK3 96
+#define RPM_SMD_LN_BB_CLK3_A 97
+#define RPM_SMD_MMAXI_CLK 98
+#define RPM_SMD_MMAXI_A_CLK 99
+#define RPM_SMD_AGGR1_NOC_CLK 100
+#define RPM_SMD_AGGR1_NOC_A_CLK 101
+#define RPM_SMD_AGGR2_NOC_CLK 102
+#define RPM_SMD_AGGR2_NOC_A_CLK 103
+#define PNOC_MSMBUS_CLK 104
+#define PNOC_MSMBUS_A_CLK 105
+#define PNOC_KEEPALIVE_A_CLK 106
+#define SNOC_MSMBUS_CLK 107
+#define SNOC_MSMBUS_A_CLK 108
+#define BIMC_MSMBUS_CLK 109
+#define BIMC_MSMBUS_A_CLK 110
+#define PNOC_USB_CLK 111
+#define PNOC_USB_A_CLK 112
+#define SNOC_USB_CLK 113
+#define SNOC_USB_A_CLK 114
+#define BIMC_USB_CLK 115
+#define BIMC_USB_A_CLK 116
+#define SNOC_WCNSS_A_CLK 117
+#define BIMC_WCNSS_A_CLK 118
+#define MCD_CE1_CLK 119
+#define QCEDEV_CE1_CLK 120
+#define QCRYPTO_CE1_CLK 121
+#define QSEECOM_CE1_CLK 122
+#define SCM_CE1_CLK 123
+#define CXO_SMD_OTG_CLK 124
+#define CXO_SMD_LPM_CLK 125
+#define CXO_SMD_PIL_PRONTO_CLK 126
+#define CXO_SMD_PIL_MSS_CLK 127
+#define CXO_SMD_WLAN_CLK 128
+#define CXO_SMD_PIL_LPASS_CLK 129
+#define CXO_SMD_PIL_CDSP_CLK 130
+#define CNOC_MSMBUS_CLK 131
+#define CNOC_MSMBUS_A_CLK 132
+#define CNOC_KEEPALIVE_A_CLK 133
+#define SNOC_KEEPALIVE_A_CLK 134
+#define CPP_MMNRT_MSMBUS_CLK 135
+#define CPP_MMNRT_MSMBUS_A_CLK 136
+#define JPEG_MMNRT_MSMBUS_CLK 137
+#define JPEG_MMNRT_MSMBUS_A_CLK 138
+#define VENUS_MMNRT_MSMBUS_CLK 139
+#define VENUS_MMNRT_MSMBUS_A_CLK 140
+#define ARM9_MMNRT_MSMBUS_CLK 141
+#define ARM9_MMNRT_MSMBUS_A_CLK 142
+#define MDP_MMRT_MSMBUS_CLK 143
+#define MDP_MMRT_MSMBUS_A_CLK 144
+#define VFE_MMRT_MSMBUS_CLK 145
+#define VFE_MMRT_MSMBUS_A_CLK 146
+#define QUP0_MSMBUS_SNOC_PERIPH_CLK 147
+#define QUP0_MSMBUS_SNOC_PERIPH_A_CLK 148
+#define QUP1_MSMBUS_SNOC_PERIPH_CLK 149
+#define QUP1_MSMBUS_SNOC_PERIPH_A_CLK 150
+#define QUP2_MSMBUS_SNOC_PERIPH_CLK 151
+#define QUP2_MSMBUS_SNOC_PERIPH_A_CLK 152
+#define DAP_MSMBUS_SNOC_PERIPH_CLK 153
+#define DAP_MSMBUS_SNOC_PERIPH_A_CLK 154
+#define SDC1_MSMBUS_SNOC_PERIPH_CLK 155
+#define SDC1_MSMBUS_SNOC_PERIPH_A_CLK 156
+#define SDC2_MSMBUS_SNOC_PERIPH_CLK 157
+#define SDC2_MSMBUS_SNOC_PERIPH_A_CLK 158
+#define CRYPTO_MSMBUS_SNOC_PERIPH_CLK 159
+#define CRYPTO_MSMBUS_SNOC_PERIPH_A_CLK 160
+#define SDC1_SLV_MSMBUS_SNOC_PERIPH_CLK 161
+#define SDC1_SLV_MSMBUS_SNOC_PERIPH_A_CLK 162
+#define SDC2_SLV_MSMBUS_SNOC_PERIPH_CLK 163
+#define SDC2_SLV_MSMBUS_SNOC_PERIPH_A_CLK 164
#endif
diff --git a/include/dt-bindings/clock/qcom,rpmh.h b/include/dt-bindings/clock/qcom,rpmh.h
index d6c1dff..31e63a7 100644
--- a/include/dt-bindings/clock/qcom,rpmh.h
+++ b/include/dt-bindings/clock/qcom,rpmh.h
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright (c) 2018-2019, The Linux Foundation. All rights reserved. */
+/* Copyright (c) 2018-2020, The Linux Foundation. All rights reserved. */
#ifndef _DT_BINDINGS_CLK_MSM_RPMH_H
#define _DT_BINDINGS_CLK_MSM_RPMH_H
@@ -25,5 +25,7 @@
#define RPMH_RF_CLKD4_A 17
#define RPMH_RF_CLK4 18
#define RPMH_RF_CLK4_A 19
+#define RPMH_QLINK_CLK 20
+#define RPMH_QLINK_CLK_A 21
#endif
diff --git a/include/dt-bindings/msm/msm-bus-ids.h b/include/dt-bindings/msm/msm-bus-ids.h
index 835fb0c..4aff0c9 100644
--- a/include/dt-bindings/msm/msm-bus-ids.h
+++ b/include/dt-bindings/msm/msm-bus-ids.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
- * Copyright (c) 2014-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2014-2020, The Linux Foundation. All rights reserved.
*/
#ifndef __MSM_BUS_IDS_H
@@ -699,6 +699,8 @@
#define MSM_BUS_SLAVE_ANOC_SNOC 834
#define MSM_BUS_SLAVE_GPU_CDSP_BIMC 835
#define MSM_BUS_SLAVE_AHB2PHY_2 836
+#define MSM_BUS_SLAVE_HWKM 837
+#define MSM_BUS_SLAVE_PKA_WRAPPER 838
#define MSM_BUS_SLAVE_EBI_CH0_DISPLAY 20512
#define MSM_BUS_SLAVE_LLCC_DISPLAY 20513
@@ -1175,4 +1177,6 @@
#define ICBID_SLAVE_MAPSS 277
#define ICBID_SLAVE_MDSP_MPU_CFG 278
#define ICBID_SLAVE_CAMERA_RT_THROTTLE_CFG 279
+#define ICBID_SLAVE_HWKM 280
+#define ICBID_SLAVE_PKA_WRAPPER 281
#endif
diff --git a/include/dt-bindings/msm/msm-camera.h b/include/dt-bindings/msm/msm-camera.h
index 07817a7..84c8e4c 100644
--- a/include/dt-bindings/msm/msm-camera.h
+++ b/include/dt-bindings/msm/msm-camera.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
- * Copyright (c) 2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2019-2020, The Linux Foundation. All rights reserved.
*/
#ifndef __MSM_CAMERA_H
@@ -62,10 +62,14 @@
#define CAM_CPAS_TRAFFIC_MERGE_SUM 0
#define CAM_CPAS_TRAFFIC_MERGE_SUM_INTERLEAVE 1
+#define CAM_CPAS_FEATURE_TYPE_DISABLE 0
+#define CAM_CPAS_FEATURE_TYPE_ENABLE 1
-/* Feature support bit positions in feature fuse register*/
-#define CAM_CPAS_QCFA_BINNING_ENABLE 0
-#define CAM_CPAS_SECURE_CAMERA_ENABLE 1
-#define CAM_CPAS_FUSE_FEATURE_MAX 2
+/* Fuse Feature support ids */
+#define CAM_CPAS_QCFA_BINNING_ENABLE 0
+#define CAM_CPAS_SECURE_CAMERA_ENABLE 1
+#define CAM_CPAS_ISP_FUSE_ID 2
+#define CAM_CPAS_ISP_PIX_FUSE_ID 3
+#define CAM_CPAS_FUSE_FEATURE_MAX 4
#endif
diff --git a/include/linux/bio-crypt-ctx.h b/include/linux/bio-crypt-ctx.h
new file mode 100644
index 0000000..d10c5ad
--- /dev/null
+++ b/include/linux/bio-crypt-ctx.h
@@ -0,0 +1,231 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2019 Google LLC
+ */
+#ifndef __LINUX_BIO_CRYPT_CTX_H
+#define __LINUX_BIO_CRYPT_CTX_H
+
+#include <linux/string.h>
+
+enum blk_crypto_mode_num {
+ BLK_ENCRYPTION_MODE_INVALID,
+ BLK_ENCRYPTION_MODE_AES_256_XTS,
+ BLK_ENCRYPTION_MODE_AES_128_CBC_ESSIV,
+ BLK_ENCRYPTION_MODE_ADIANTUM,
+ BLK_ENCRYPTION_MODE_MAX,
+};
+
+#ifdef CONFIG_BLOCK
+#include <linux/blk_types.h>
+
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+
+#define BLK_CRYPTO_MAX_KEY_SIZE 64
+#define BLK_CRYPTO_MAX_WRAPPED_KEY_SIZE 128
+
+/**
+ * struct blk_crypto_key - an inline encryption key
+ * @crypto_mode: encryption algorithm this key is for
+ * @data_unit_size: the data unit size for all encryption/decryptions with this
+ * key. This is the size in bytes of each individual plaintext and
+ * ciphertext. This is always a power of 2. It might be e.g. the
+ * filesystem block size or the disk sector size.
+ * @data_unit_size_bits: log2 of data_unit_size
+ * @size: size of this key in bytes (determined by @crypto_mode)
+ * @hash: hash of this key, for keyslot manager use only
+ * @is_hw_wrapped: @raw points to a wrapped key to be used by inline
+ *	encryption hardware that accepts wrapped keys.
+ * @raw: the raw bytes of this key. Only the first @size bytes are used.
+ *
+ * A blk_crypto_key is immutable once created, and many bios can reference it at
+ * the same time. It must not be freed until all bios using it have completed.
+ */
+struct blk_crypto_key {
+ enum blk_crypto_mode_num crypto_mode;
+ unsigned int data_unit_size;
+ unsigned int data_unit_size_bits;
+ unsigned int size;
+ unsigned int hash;
+ bool is_hw_wrapped;
+ u8 raw[BLK_CRYPTO_MAX_WRAPPED_KEY_SIZE];
+};
+
+#define BLK_CRYPTO_MAX_IV_SIZE 32
+#define BLK_CRYPTO_DUN_ARRAY_SIZE (BLK_CRYPTO_MAX_IV_SIZE/sizeof(u64))
+
+/**
+ * struct bio_crypt_ctx - an inline encryption context
+ * @bc_key: the key, algorithm, and data unit size to use
+ * @bc_keyslot: the keyslot that has been assigned for this key in @bc_ksm,
+ * or -1 if no keyslot has been assigned yet.
+ * @bc_dun: the data unit number (starting IV) to use
+ * @bc_ksm: the keyslot manager into which the key has been programmed with
+ * @bc_keyslot, or NULL if this key hasn't yet been programmed.
+ *
+ * A bio_crypt_ctx specifies that the contents of the bio will be encrypted (for
+ * write requests) or decrypted (for read requests) inline by the storage device
+ * or controller, or by the crypto API fallback.
+ */
+struct bio_crypt_ctx {
+ const struct blk_crypto_key *bc_key;
+ int bc_keyslot;
+
+ /* Data unit number */
+ u64 bc_dun[BLK_CRYPTO_DUN_ARRAY_SIZE];
+
+ /*
+ * The keyslot manager where the key has been programmed
+ * with keyslot.
+ */
+ struct keyslot_manager *bc_ksm;
+};
+
+int bio_crypt_ctx_init(void);
+
+struct bio_crypt_ctx *bio_crypt_alloc_ctx(gfp_t gfp_mask);
+
+void bio_crypt_free_ctx(struct bio *bio);
+
+static inline bool bio_has_crypt_ctx(struct bio *bio)
+{
+ return bio->bi_crypt_context;
+}
+
+void bio_crypt_clone(struct bio *dst, struct bio *src, gfp_t gfp_mask);
+
+static inline void bio_crypt_set_ctx(struct bio *bio,
+ const struct blk_crypto_key *key,
+ u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE],
+ gfp_t gfp_mask)
+{
+ struct bio_crypt_ctx *bc = bio_crypt_alloc_ctx(gfp_mask);
+
+ bc->bc_key = key;
+ memcpy(bc->bc_dun, dun, sizeof(bc->bc_dun));
+ bc->bc_ksm = NULL;
+ bc->bc_keyslot = -1;
+
+ bio->bi_crypt_context = bc;
+}
+
+void bio_crypt_ctx_release_keyslot(struct bio_crypt_ctx *bc);
+
+int bio_crypt_ctx_acquire_keyslot(struct bio_crypt_ctx *bc,
+ struct keyslot_manager *ksm);
+
+struct request;
+bool bio_crypt_should_process(struct request *rq);
+
+static inline bool bio_crypt_dun_is_contiguous(const struct bio_crypt_ctx *bc,
+ unsigned int bytes,
+ u64 next_dun[BLK_CRYPTO_DUN_ARRAY_SIZE])
+{
+ int i = 0;
+ unsigned int inc = bytes >> bc->bc_key->data_unit_size_bits;
+
+ while (i < BLK_CRYPTO_DUN_ARRAY_SIZE) {
+ if (bc->bc_dun[i] + inc != next_dun[i])
+ return false;
+ inc = ((bc->bc_dun[i] + inc) < inc);
+ i++;
+ }
+
+ return true;
+}
+
+
+static inline void bio_crypt_dun_increment(u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE],
+ unsigned int inc)
+{
+ int i = 0;
+
+ while (inc && i < BLK_CRYPTO_DUN_ARRAY_SIZE) {
+ dun[i] += inc;
+ inc = (dun[i] < inc);
+ i++;
+ }
+}
+
+static inline void bio_crypt_advance(struct bio *bio, unsigned int bytes)
+{
+ struct bio_crypt_ctx *bc = bio->bi_crypt_context;
+
+ if (!bc)
+ return;
+
+ bio_crypt_dun_increment(bc->bc_dun,
+ bytes >> bc->bc_key->data_unit_size_bits);
+}
+
+bool bio_crypt_ctx_compatible(struct bio *b_1, struct bio *b_2);
+
+bool bio_crypt_ctx_mergeable(struct bio *b_1, unsigned int b1_bytes,
+ struct bio *b_2);
+
+#else /* CONFIG_BLK_INLINE_ENCRYPTION */
+static inline int bio_crypt_ctx_init(void)
+{
+ return 0;
+}
+
+static inline bool bio_has_crypt_ctx(struct bio *bio)
+{
+ return false;
+}
+
+static inline void bio_crypt_clone(struct bio *dst, struct bio *src,
+ gfp_t gfp_mask) { }
+
+static inline void bio_crypt_free_ctx(struct bio *bio) { }
+
+static inline void bio_crypt_advance(struct bio *bio, unsigned int bytes) { }
+
+static inline bool bio_crypt_ctx_compatible(struct bio *b_1, struct bio *b_2)
+{
+ return true;
+}
+
+static inline bool bio_crypt_ctx_mergeable(struct bio *b_1,
+ unsigned int b1_bytes,
+ struct bio *b_2)
+{
+ return true;
+}
+
+#endif /* CONFIG_BLK_INLINE_ENCRYPTION */
+
+#if IS_ENABLED(CONFIG_DM_DEFAULT_KEY)
+static inline void bio_set_skip_dm_default_key(struct bio *bio)
+{
+ bio->bi_skip_dm_default_key = true;
+}
+
+static inline bool bio_should_skip_dm_default_key(const struct bio *bio)
+{
+ return bio->bi_skip_dm_default_key;
+}
+
+static inline void bio_clone_skip_dm_default_key(struct bio *dst,
+ const struct bio *src)
+{
+ dst->bi_skip_dm_default_key = src->bi_skip_dm_default_key;
+}
+#else /* CONFIG_DM_DEFAULT_KEY */
+static inline void bio_set_skip_dm_default_key(struct bio *bio)
+{
+}
+
+static inline bool bio_should_skip_dm_default_key(const struct bio *bio)
+{
+ return false;
+}
+
+static inline void bio_clone_skip_dm_default_key(struct bio *dst,
+ const struct bio *src)
+{
+}
+#endif /* !CONFIG_DM_DEFAULT_KEY */
+
+#endif /* CONFIG_BLOCK */
+
+#endif /* __LINUX_BIO_CRYPT_CTX_H */
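As a quick illustration of how the new bio-crypt-ctx API above is meant to be driven (this sketch is not part of the merge), a filesystem-level submitter could attach an encryption context to a bio before handing it to the block layer. The blk_crypto_key pointer and the derivation of the DUN from a logical block number are assumptions made for the example; real callers follow their own on-disk IV policy, and bio_crypt_set_ctx() is only available with CONFIG_BLK_INLINE_ENCRYPTION=y.

/*
 * Minimal sketch, assuming "key" was set up earlier with blk_crypto_init_key()
 * and that the DUN is simply the file's logical block number.
 */
#include <linux/bio.h>
#include <linux/bio-crypt-ctx.h>

static void example_attach_crypt_ctx(struct bio *bio,
				     const struct blk_crypto_key *key,
				     u64 lblk)
{
	u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE] = { lblk };

	/* Allocates a bio_crypt_ctx and points bio->bi_crypt_context at it. */
	bio_crypt_set_ctx(bio, key, dun, GFP_NOIO);
}

The block layer then consumes this context through blk_crypto_submit_bio(), declared in the blk-crypto.h hunk further below.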
diff --git a/include/linux/bio.h b/include/linux/bio.h
index efa15cf..b7efb85 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -22,6 +22,7 @@
#include <linux/mempool.h>
#include <linux/ioprio.h>
#include <linux/bug.h>
+#include <linux/bio-crypt-ctx.h>
#ifdef CONFIG_BLOCK
@@ -73,9 +74,6 @@
#define bio_sectors(bio) bvec_iter_sectors((bio)->bi_iter)
#define bio_end_sector(bio) bvec_iter_end_sector((bio)->bi_iter)
-#define bio_dun(bio) ((bio)->bi_iter.bi_dun)
-#define bio_duns(bio) (bio_sectors(bio) >> 3) /* 4KB unit */
-#define bio_end_dun(bio) (bio_dun(bio) + bio_duns(bio))
/*
* Return the data direction, READ or WRITE.
@@ -173,11 +171,6 @@ static inline void bio_advance_iter(struct bio *bio, struct bvec_iter *iter,
{
iter->bi_sector += bytes >> 9;
-#ifdef CONFIG_PFK
- if (iter->bi_dun)
- iter->bi_dun += bytes >> 12;
-#endif
-
if (bio_no_advance_iter(bio)) {
iter->bi_size -= bytes;
iter->bi_done += bytes;
diff --git a/include/linux/blk-crypto.h b/include/linux/blk-crypto.h
new file mode 100644
index 0000000..2d871a7
--- /dev/null
+++ b/include/linux/blk-crypto.h
@@ -0,0 +1,65 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2019 Google LLC
+ */
+
+#ifndef __LINUX_BLK_CRYPTO_H
+#define __LINUX_BLK_CRYPTO_H
+
+#include <linux/bio.h>
+
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+
+int blk_crypto_submit_bio(struct bio **bio_ptr);
+
+bool blk_crypto_endio(struct bio *bio);
+
+int blk_crypto_init_key(struct blk_crypto_key *blk_key,
+ const u8 *raw_key, unsigned int raw_key_size,
+ bool is_hw_wrapped,
+ enum blk_crypto_mode_num crypto_mode,
+ unsigned int data_unit_size);
+
+int blk_crypto_evict_key(struct request_queue *q,
+ const struct blk_crypto_key *key);
+
+#else /* CONFIG_BLK_INLINE_ENCRYPTION */
+
+static inline int blk_crypto_submit_bio(struct bio **bio_ptr)
+{
+ return 0;
+}
+
+static inline bool blk_crypto_endio(struct bio *bio)
+{
+ return true;
+}
+
+#endif /* CONFIG_BLK_INLINE_ENCRYPTION */
+
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK
+
+int blk_crypto_start_using_mode(enum blk_crypto_mode_num mode_num,
+ unsigned int data_unit_size,
+ struct request_queue *q);
+
+int blk_crypto_fallback_init(void);
+
+#else /* CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK */
+
+static inline int
+blk_crypto_start_using_mode(enum blk_crypto_mode_num mode_num,
+ unsigned int data_unit_size,
+ struct request_queue *q)
+{
+ return 0;
+}
+
+static inline int blk_crypto_fallback_init(void)
+{
+ return 0;
+}
+
+#endif /* CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK */
+
+#endif /* __LINUX_BLK_CRYPTO_H */
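For orientation (again, not part of the merge), the blk-crypto entry points above compose as follows for an upper layer that owns a raw key. The 64-byte key length, the AES-256-XTS mode and the 4096-byte data unit size are assumptions for the sketch, and error handling is abbreviated.

/*
 * Sketch: initialize a standard (not hardware-wrapped) key and opt the queue
 * into the chosen mode.  blk_crypto_start_using_mode() compiles to a stub
 * returning 0 when CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK is not set.
 */
#include <linux/blk-crypto.h>

static int example_prepare_key(struct blk_crypto_key *blk_key,
			       const u8 *raw_key, struct request_queue *q)
{
	int err;

	err = blk_crypto_init_key(blk_key, raw_key, 64, false,
				  BLK_ENCRYPTION_MODE_AES_256_XTS, 4096);
	if (err)
		return err;

	return blk_crypto_start_using_mode(BLK_ENCRYPTION_MODE_AES_256_XTS,
					   4096, q);
}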
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index f2040ae..d93633d 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -18,6 +18,7 @@ struct block_device;
struct io_context;
struct cgroup_subsys_state;
typedef void (bio_end_io_t) (struct bio *);
+struct bio_crypt_ctx;
/*
* Block error status values. See block/blk-core:blk_errors for the details.
@@ -182,18 +183,19 @@ struct bio {
struct blkcg_gq *bi_blkg;
struct bio_issue bi_issue;
#endif
+
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+ struct bio_crypt_ctx *bi_crypt_context;
+#if IS_ENABLED(CONFIG_DM_DEFAULT_KEY)
+ bool bi_skip_dm_default_key;
+#endif
+#endif
+
union {
#if defined(CONFIG_BLK_DEV_INTEGRITY)
struct bio_integrity_payload *bi_integrity; /* data integrity */
#endif
};
-#ifdef CONFIG_PFK
- /* Encryption key to use (NULL if none) */
- const struct blk_encryption_key *bi_crypt_key;
-#endif
-#ifdef CONFIG_DM_DEFAULT_KEY
- int bi_crypt_skip;
-#endif
unsigned short bi_vcnt; /* how many bio_vec's */
@@ -208,9 +210,7 @@ struct bio {
struct bio_vec *bi_io_vec; /* the actual vec list */
struct bio_set *bi_pool;
-#ifdef CONFIG_PFK
- struct inode *bi_dio_inode;
-#endif
+
/*
* We can inline a number of vecs at the end of the bio, to avoid
* double allocations for a small number of bio_vecs. This member
@@ -340,11 +340,6 @@ enum req_flag_bits {
/* for driver use */
__REQ_DRV,
__REQ_SWAP, /* swapping request. */
- /* Android specific flags */
- __REQ_NOENCRYPT, /*
- * ok to not encrypt (already encrypted at fs
- * level)
- */
__REQ_NR_BITS, /* stops here */
};
@@ -363,10 +358,11 @@ enum req_flag_bits {
#define REQ_RAHEAD (1ULL << __REQ_RAHEAD)
#define REQ_BACKGROUND (1ULL << __REQ_BACKGROUND)
#define REQ_NOWAIT (1ULL << __REQ_NOWAIT)
+
#define REQ_NOUNMAP (1ULL << __REQ_NOUNMAP)
+
#define REQ_DRV (1ULL << __REQ_DRV)
#define REQ_SWAP (1ULL << __REQ_SWAP)
-#define REQ_NOENCRYPT (1ULL << __REQ_NOENCRYPT)
#define REQ_FAILFAST_MASK \
(REQ_FAILFAST_DEV | REQ_FAILFAST_TRANSPORT | REQ_FAILFAST_DRIVER)
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 3a8a3902..d01246c 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -43,6 +43,7 @@ struct pr_ops;
struct rq_qos;
struct blk_queue_stats;
struct blk_stat_callback;
+struct keyslot_manager;
#define BLKDEV_MIN_RQ 4
#define BLKDEV_MAX_RQ 128 /* Default maximum */
@@ -161,7 +162,6 @@ struct request {
unsigned int __data_len; /* total data len */
int tag;
sector_t __sector; /* sector cursor */
- u64 __dun; /* dun for UFS */
struct bio *bio;
struct bio *biotail;
@@ -575,6 +575,10 @@ struct request_queue {
* queue_lock internally, e.g. scsi_request_fn().
*/
unsigned int request_fn_active;
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+ /* Inline crypto capabilities */
+ struct keyslot_manager *ksm;
+#endif
unsigned int rq_timeout;
int poll_nsec;
@@ -705,7 +709,6 @@ struct request_queue {
#define QUEUE_FLAG_REGISTERED 26 /* queue has been registered to a disk */
#define QUEUE_FLAG_SCSI_PASSTHROUGH 27 /* queue supports SCSI commands */
#define QUEUE_FLAG_QUIESCED 28 /* queue has been quiesced */
-#define QUEUE_FLAG_INLINECRYPT 29 /* inline encryption support */
#define QUEUE_FLAG_DEFAULT ((1 << QUEUE_FLAG_IO_STAT) | \
(1 << QUEUE_FLAG_SAME_COMP) | \
@@ -738,8 +741,6 @@ bool blk_queue_flag_test_and_clear(unsigned int flag, struct request_queue *q);
#define blk_queue_dax(q) test_bit(QUEUE_FLAG_DAX, &(q)->queue_flags)
#define blk_queue_scsi_passthrough(q) \
test_bit(QUEUE_FLAG_SCSI_PASSTHROUGH, &(q)->queue_flags)
-#define blk_queue_inlinecrypt(q) \
- test_bit(QUEUE_FLAG_INLINECRYPT, &(q)->queue_flags)
#define blk_noretry_request(rq) \
((rq)->cmd_flags & (REQ_FAILFAST_DEV|REQ_FAILFAST_TRANSPORT| \
@@ -886,24 +887,6 @@ static inline unsigned int blk_queue_depth(struct request_queue *q)
return q->nr_requests;
}
-static inline void queue_flag_set_unlocked(unsigned int flag,
- struct request_queue *q)
-{
- if (test_bit(QUEUE_FLAG_INIT_DONE, &q->queue_flags) &&
- kref_read(&q->kobj.kref))
- lockdep_assert_held(q->queue_lock);
- __set_bit(flag, &q->queue_flags);
-}
-
-static inline void queue_flag_clear_unlocked(unsigned int flag,
- struct request_queue *q)
-{
- if (test_bit(QUEUE_FLAG_INIT_DONE, &q->queue_flags) &&
- kref_read(&q->kobj.kref))
- lockdep_assert_held(q->queue_lock);
- __clear_bit(flag, &q->queue_flags);
-}
-
/*
* q->prep_rq_fn return values
*/
@@ -1069,11 +1052,6 @@ static inline sector_t blk_rq_pos(const struct request *rq)
return rq->__sector;
}
-static inline sector_t blk_rq_dun(const struct request *rq)
-{
- return rq->__dun;
-}
-
static inline unsigned int blk_rq_bytes(const struct request *rq)
{
return rq->__data_len;
diff --git a/include/linux/bluetooth-power.h b/include/linux/bluetooth-power.h
index 8bcba91..c0984b3 100644
--- a/include/linux/bluetooth-power.h
+++ b/include/linux/bluetooth-power.h
@@ -93,7 +93,7 @@ int get_chipset_version(void);
#define BT_CMD_SLIM_TEST 0xbfac
#define BT_CMD_PWR_CTRL 0xbfad
#define BT_CMD_CHIPSET_VERS 0xbfae
-#define BT_CMD_GETVAL_RESET_GPIO 0xbfaf
+#define BT_CMD_GETVAL_RESET_GPIO 0xbfb5
#define BT_CMD_GETVAL_SW_CTRL_GPIO 0xbfb0
#define BT_CMD_GETVAL_VDD_AON_LDO 0xbfb1
#define BT_CMD_GETVAL_VDD_DIG_LDO 0xbfb2
diff --git a/include/linux/bvec.h b/include/linux/bvec.h
index 543bb5f..fe7a22d 100644
--- a/include/linux/bvec.h
+++ b/include/linux/bvec.h
@@ -44,7 +44,6 @@ struct bvec_iter {
unsigned int bi_bvec_done; /* number of bytes completed in
current bvec */
- u64 bi_dun; /* DUN setting for bio */
};
/*
diff --git a/include/linux/crypto-qti-common.h b/include/linux/crypto-qti-common.h
new file mode 100644
index 0000000..ef72618
--- /dev/null
+++ b/include/linux/crypto-qti-common.h
@@ -0,0 +1,88 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2020, The Linux Foundation. All rights reserved.
+ */
+
+#ifndef _CRYPTO_QTI_COMMON_H
+#define _CRYPTO_QTI_COMMON_H
+
+#include <linux/bio-crypt-ctx.h>
+#include <linux/errno.h>
+#include <linux/types.h>
+#include <linux/device.h>
+#include <linux/delay.h>
+
+#define RAW_SECRET_SIZE 32
+#define QTI_ICE_MAX_BIST_CHECK_COUNT 100
+#define QTI_ICE_TYPE_NAME_LEN 8
+
+struct crypto_vops_qti_entry {
+ void __iomem *icemmio_base;
+ uint32_t ice_hw_version;
+ uint8_t ice_dev_type[QTI_ICE_TYPE_NAME_LEN];
+ uint32_t flags;
+};
+
+#if IS_ENABLED(CONFIG_QTI_CRYPTO_COMMON)
+// crypto-qti-common.c
+int crypto_qti_init_crypto(struct device *dev, void __iomem *mmio_base,
+ void **priv_data);
+int crypto_qti_enable(void *priv_data);
+void crypto_qti_disable(void *priv_data);
+int crypto_qti_resume(void *priv_data);
+int crypto_qti_debug(void *priv_data);
+int crypto_qti_keyslot_program(void *priv_data,
+ const struct blk_crypto_key *key,
+ unsigned int slot, u8 data_unit_mask,
+ int capid);
+int crypto_qti_keyslot_evict(void *priv_data, unsigned int slot);
+int crypto_qti_derive_raw_secret(const u8 *wrapped_key,
+ unsigned int wrapped_key_size, u8 *secret,
+ unsigned int secret_size);
+
+#else
+static inline int crypto_qti_init_crypto(struct device *dev,
+ void __iomem *mmio_base,
+ void **priv_data)
+{
+ return 0;
+}
+static inline int crypto_qti_enable(void *priv_data)
+{
+	return 0;
+}
+static inline void crypto_qti_disable(void *priv_data)
+{
+ return 0;
+}
+static inline int crypto_qti_resume(void *priv_data)
+{
+ return 0;
+}
+static inline int crypto_qti_debug(void *priv_data)
+{
+ return 0;
+}
+static inline int crypto_qti_keyslot_program(void *priv_data,
+ const struct blk_crypto_key *key,
+ unsigned int slot,
+ u8 data_unit_mask,
+ int capid)
+{
+ return 0;
+}
+static inline int crypto_qti_keyslot_evict(void *priv_data, unsigned int slot)
+{
+ return 0;
+}
+static inline int crypto_qti_derive_raw_secret(const u8 *wrapped_key,
+ unsigned int wrapped_key_size,
+ u8 *secret,
+ unsigned int secret_size)
+{
+ return 0;
+}
+
+#endif /* CONFIG_QTI_CRYPTO_COMMON */
+
+#endif /* _CRYPTO_QTI_COMMON_H */
diff --git a/include/linux/debugfs.h b/include/linux/debugfs.h
index 3b0ba54..3bc1034c 100644
--- a/include/linux/debugfs.h
+++ b/include/linux/debugfs.h
@@ -54,6 +54,8 @@ static const struct file_operations __fops = { \
.llseek = no_llseek, \
}
+typedef struct vfsmount *(*debugfs_automount_t)(struct dentry *, void *);
+
#if defined(CONFIG_DEBUG_FS)
struct dentry *debugfs_lookup(const char *name, struct dentry *parent);
@@ -75,7 +77,6 @@ struct dentry *debugfs_create_dir(const char *name, struct dentry *parent);
struct dentry *debugfs_create_symlink(const char *name, struct dentry *parent,
const char *dest);
-typedef struct vfsmount *(*debugfs_automount_t)(struct dentry *, void *);
struct dentry *debugfs_create_automount(const char *name,
struct dentry *parent,
debugfs_automount_t f,
@@ -204,7 +205,7 @@ static inline struct dentry *debugfs_create_symlink(const char *name,
static inline struct dentry *debugfs_create_automount(const char *name,
struct dentry *parent,
- struct vfsmount *(*f)(void *),
+ debugfs_automount_t f,
void *data)
{
return ERR_PTR(-ENODEV);
diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
index 2f3d54e..b35970f 100644
--- a/include/linux/device-mapper.h
+++ b/include/linux/device-mapper.h
@@ -315,6 +315,12 @@ struct dm_target {
* on max_io_len boundary.
*/
bool split_discard_bios:1;
+
+ /*
+ * Set if inline crypto capabilities from this target's underlying
+ * device(s) can be exposed via the device-mapper device.
+ */
+ bool may_passthrough_inline_crypto:1;
};
/* Each target can link one of these into the table */
diff --git a/include/linux/diagchar.h b/include/linux/diagchar.h
index fb222f2..dcabab1 100644
--- a/include/linux/diagchar.h
+++ b/include/linux/diagchar.h
@@ -142,10 +142,10 @@
* a new RANGE of SSIDs to the msg_mask_tbl.
*/
#define MSG_MASK_TBL_CNT 26
-#define APPS_EVENT_LAST_ID 0xCC1
+#define APPS_EVENT_LAST_ID 0xCC2
#define MSG_SSID_0 0
-#define MSG_SSID_0_LAST 132
+#define MSG_SSID_0_LAST 134
#define MSG_SSID_1 500
#define MSG_SSID_1_LAST 506
#define MSG_SSID_2 1000
@@ -357,7 +357,9 @@ static const uint32_t msg_bld_masks_0[] = {
MSG_LVL_HIGH,
MSG_LVL_HIGH,
MSG_LVL_LOW | MSG_LVL_MED | MSG_LVL_HIGH | MSG_LVL_ERROR,
- MSG_LVL_HIGH
+ MSG_LVL_HIGH,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW
};
static const uint32_t msg_bld_masks_1[] = {
@@ -919,7 +921,7 @@ static const uint32_t msg_bld_masks_25[] = {
/* LOG CODES */
static const uint32_t log_code_last_tbl[] = {
0x0, /* EQUIP ID 0 */
- 0x1CD6, /* EQUIP ID 1 */
+ 0x1CDD, /* EQUIP ID 1 */
0x0, /* EQUIP ID 2 */
0x0, /* EQUIP ID 3 */
0x4910, /* EQUIP ID 4 */
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index f6e7438..851a46c 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -445,6 +445,7 @@ struct dma_buf {
struct list_head refs;
dma_buf_destructor dtor;
void *dtor_data;
+ atomic_t dent_count;
};
/**
diff --git a/include/linux/fs.h b/include/linux/fs.h
index a33dc31..66963a1 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -1396,6 +1396,7 @@ struct super_block {
const struct xattr_handler **s_xattr;
#ifdef CONFIG_FS_ENCRYPTION
const struct fscrypt_operations *s_cop;
+ struct key *s_master_keys; /* master crypto keys in use */
#endif
#ifdef CONFIG_FS_VERITY
const struct fsverity_operations *s_vop;
@@ -1896,7 +1897,6 @@ struct super_operations {
void *(*clone_mnt_data) (void *);
void (*copy_mnt_data) (void *, void *);
void (*umount_begin) (struct super_block *);
- void (*umount_end)(struct super_block *sb, int flags);
int (*show_options)(struct seq_file *, struct dentry *);
int (*show_options2)(struct vfsmount *,struct seq_file *, struct dentry *);
@@ -3122,8 +3122,6 @@ static inline void inode_dio_end(struct inode *inode)
wake_up_bit(&inode->i_state, __I_DIO_WAKEUP);
}
-struct inode *dio_bio_get_inode(struct bio *bio);
-
extern void inode_set_flags(struct inode *inode, unsigned int flags,
unsigned int mask);
diff --git a/include/linux/fscrypt.h b/include/linux/fscrypt.h
index 53193af..a298256 100644
--- a/include/linux/fscrypt.h
+++ b/include/linux/fscrypt.h
@@ -16,15 +16,10 @@
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/slab.h>
+#include <uapi/linux/fscrypt.h>
#define FS_CRYPTO_BLOCK_SIZE 16
-struct fscrypt_ctx;
-
-/* iv sector for security/pfe/pfk_fscrypt.c and f2fs */
-#define PG_DUN(i, p) \
- (((((u64)(i)->i_ino) & 0xffffffff) << 32) | ((p)->index & 0xffffffff))
-
struct fscrypt_info;
struct fscrypt_str {
@@ -47,7 +42,7 @@ struct fscrypt_name {
#define fname_len(p) ((p)->disk_name.len)
/* Maximum value for the third parameter of fscrypt_operations.set_context(). */
-#define FSCRYPT_SET_CONTEXT_MAX_SIZE 28
+#define FSCRYPT_SET_CONTEXT_MAX_SIZE 40
#ifdef CONFIG_FS_ENCRYPTION
/*
@@ -66,19 +61,13 @@ struct fscrypt_operations {
bool (*dummy_context)(struct inode *);
bool (*empty_dir)(struct inode *);
unsigned int max_namelen;
- bool (*is_encrypted)(struct inode *inode);
-};
-
-/* Decryption work */
-struct fscrypt_ctx {
- union {
- struct {
- struct bio *bio;
- struct work_struct work;
- };
- struct list_head free_list; /* Free list */
- };
- u8 flags; /* Flags */
+ bool (*has_stable_inodes)(struct super_block *sb);
+ void (*get_ino_and_lblk_bits)(struct super_block *sb,
+ int *ino_bits_ret, int *lblk_bits_ret);
+ bool (*inline_crypt_enabled)(struct super_block *sb);
+ int (*get_num_devices)(struct super_block *sb);
+ void (*get_devices)(struct super_block *sb,
+ struct request_queue **devs);
};
static inline bool fscrypt_has_encryption_key(const struct inode *inode)
@@ -107,8 +96,6 @@ static inline void fscrypt_handle_d_move(struct dentry *dentry)
/* crypto.c */
extern void fscrypt_enqueue_decrypt_work(struct work_struct *);
-extern struct fscrypt_ctx *fscrypt_get_ctx(gfp_t);
-extern void fscrypt_release_ctx(struct fscrypt_ctx *);
extern struct page *fscrypt_encrypt_pagecache_blocks(struct page *page,
unsigned int len,
@@ -140,13 +127,25 @@ extern void fscrypt_free_bounce_page(struct page *bounce_page);
/* policy.c */
extern int fscrypt_ioctl_set_policy(struct file *, const void __user *);
extern int fscrypt_ioctl_get_policy(struct file *, void __user *);
+extern int fscrypt_ioctl_get_policy_ex(struct file *, void __user *);
extern int fscrypt_has_permitted_context(struct inode *, struct inode *);
extern int fscrypt_inherit_context(struct inode *, struct inode *,
void *, bool);
-/* keyinfo.c */
+/* keyring.c */
+extern void fscrypt_sb_free(struct super_block *sb);
+extern int fscrypt_ioctl_add_key(struct file *filp, void __user *arg);
+extern int fscrypt_ioctl_remove_key(struct file *filp, void __user *arg);
+extern int fscrypt_ioctl_remove_key_all_users(struct file *filp,
+ void __user *arg);
+extern int fscrypt_ioctl_get_key_status(struct file *filp, void __user *arg);
+extern int fscrypt_register_key_removal_notifier(struct notifier_block *nb);
+extern int fscrypt_unregister_key_removal_notifier(struct notifier_block *nb);
+
+/* keysetup.c */
extern int fscrypt_get_encryption_info(struct inode *);
extern void fscrypt_put_encryption_info(struct inode *);
extern void fscrypt_free_inode(struct inode *);
+extern int fscrypt_drop_inode(struct inode *inode);
/* fname.c */
extern int fscrypt_setup_filename(struct inode *, const struct qstr *,
@@ -239,8 +238,6 @@ static inline bool fscrypt_match_name(const struct fscrypt_name *fname,
/* bio.c */
extern void fscrypt_decrypt_bio(struct bio *);
-extern void fscrypt_enqueue_decrypt_bio(struct fscrypt_ctx *ctx,
- struct bio *bio);
extern int fscrypt_zeroout_range(const struct inode *, pgoff_t, sector_t,
unsigned int);
@@ -285,16 +282,6 @@ static inline void fscrypt_enqueue_decrypt_work(struct work_struct *work)
{
}
-static inline struct fscrypt_ctx *fscrypt_get_ctx(gfp_t gfp_flags)
-{
- return ERR_PTR(-EOPNOTSUPP);
-}
-
-static inline void fscrypt_release_ctx(struct fscrypt_ctx *ctx)
-{
- return;
-}
-
static inline struct page *fscrypt_encrypt_pagecache_blocks(struct page *page,
unsigned int len,
unsigned int offs,
@@ -354,6 +341,12 @@ static inline int fscrypt_ioctl_get_policy(struct file *filp, void __user *arg)
return -EOPNOTSUPP;
}
+static inline int fscrypt_ioctl_get_policy_ex(struct file *filp,
+ void __user *arg)
+{
+ return -EOPNOTSUPP;
+}
+
static inline int fscrypt_has_permitted_context(struct inode *parent,
struct inode *child)
{
@@ -367,7 +360,46 @@ static inline int fscrypt_inherit_context(struct inode *parent,
return -EOPNOTSUPP;
}
-/* keyinfo.c */
+/* keyring.c */
+static inline void fscrypt_sb_free(struct super_block *sb)
+{
+}
+
+static inline int fscrypt_ioctl_add_key(struct file *filp, void __user *arg)
+{
+ return -EOPNOTSUPP;
+}
+
+static inline int fscrypt_ioctl_remove_key(struct file *filp, void __user *arg)
+{
+ return -EOPNOTSUPP;
+}
+
+static inline int fscrypt_ioctl_remove_key_all_users(struct file *filp,
+ void __user *arg)
+{
+ return -EOPNOTSUPP;
+}
+
+static inline int fscrypt_ioctl_get_key_status(struct file *filp,
+ void __user *arg)
+{
+ return -EOPNOTSUPP;
+}
+
+static inline int fscrypt_register_key_removal_notifier(
+ struct notifier_block *nb)
+{
+ return 0;
+}
+
+static inline int fscrypt_unregister_key_removal_notifier(
+ struct notifier_block *nb)
+{
+ return 0;
+}
+
+/* keysetup.c */
static inline int fscrypt_get_encryption_info(struct inode *inode)
{
return -EOPNOTSUPP;
@@ -382,6 +414,11 @@ static inline void fscrypt_free_inode(struct inode *inode)
{
}
+static inline int fscrypt_drop_inode(struct inode *inode)
+{
+ return 0;
+}
+
/* fname.c */
static inline int fscrypt_setup_filename(struct inode *dir,
const struct qstr *iname,
@@ -436,11 +473,6 @@ static inline void fscrypt_decrypt_bio(struct bio *bio)
{
}
-static inline void fscrypt_enqueue_decrypt_bio(struct fscrypt_ctx *ctx,
- struct bio *bio)
-{
-}
-
static inline int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk,
sector_t pblk, unsigned int len)
{
@@ -504,6 +536,74 @@ static inline const char *fscrypt_get_symlink(struct inode *inode,
}
#endif /* !CONFIG_FS_ENCRYPTION */
+/* inline_crypt.c */
+#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT
+extern bool fscrypt_inode_uses_inline_crypto(const struct inode *inode);
+
+extern bool fscrypt_inode_uses_fs_layer_crypto(const struct inode *inode);
+
+extern void fscrypt_set_bio_crypt_ctx(struct bio *bio,
+ const struct inode *inode,
+ u64 first_lblk, gfp_t gfp_mask);
+
+extern void fscrypt_set_bio_crypt_ctx_bh(struct bio *bio,
+ const struct buffer_head *first_bh,
+ gfp_t gfp_mask);
+
+extern bool fscrypt_mergeable_bio(struct bio *bio, const struct inode *inode,
+ u64 next_lblk);
+
+extern bool fscrypt_mergeable_bio_bh(struct bio *bio,
+ const struct buffer_head *next_bh);
+
+#else /* CONFIG_FS_ENCRYPTION_INLINE_CRYPT */
+static inline bool fscrypt_inode_uses_inline_crypto(const struct inode *inode)
+{
+ return false;
+}
+
+static inline bool fscrypt_inode_uses_fs_layer_crypto(const struct inode *inode)
+{
+ return IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode);
+}
+
+static inline void fscrypt_set_bio_crypt_ctx(struct bio *bio,
+ const struct inode *inode,
+ u64 first_lblk, gfp_t gfp_mask) { }
+
+static inline void fscrypt_set_bio_crypt_ctx_bh(
+ struct bio *bio,
+ const struct buffer_head *first_bh,
+ gfp_t gfp_mask) { }
+
+static inline bool fscrypt_mergeable_bio(struct bio *bio,
+ const struct inode *inode,
+ u64 next_lblk)
+{
+ return true;
+}
+
+static inline bool fscrypt_mergeable_bio_bh(struct bio *bio,
+ const struct buffer_head *next_bh)
+{
+ return true;
+}
+#endif /* !CONFIG_FS_ENCRYPTION_INLINE_CRYPT */
+
+#if IS_ENABLED(CONFIG_FS_ENCRYPTION) && IS_ENABLED(CONFIG_DM_DEFAULT_KEY)
+static inline bool
+fscrypt_inode_should_skip_dm_default_key(const struct inode *inode)
+{
+ return IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode);
+}
+#else
+static inline bool
+fscrypt_inode_should_skip_dm_default_key(const struct inode *inode)
+{
+ return false;
+}
+#endif
+
/**
* fscrypt_require_key - require an inode's encryption key
* @inode: the inode we need the key for
@@ -712,33 +812,6 @@ static inline int fscrypt_encrypt_symlink(struct inode *inode,
return 0;
}
-/* fscrypt_ice.c */
-#ifdef CONFIG_PFK
-extern int fscrypt_using_hardware_encryption(const struct inode *inode);
-extern void fscrypt_set_ice_dun(const struct inode *inode,
- struct bio *bio, u64 dun);
-extern void fscrypt_set_ice_skip(struct bio *bio, int bi_crypt_skip);
-extern bool fscrypt_mergeable_bio(struct bio *bio, u64 dun, bool bio_encrypted,
- int bi_crypt_skip);
-#else
-static inline int fscrypt_using_hardware_encryption(const struct inode *inode)
-{
- return 0;
-}
-
-static inline void fscrypt_set_ice_dun(const struct inode *inode,
- struct bio *bio, u64 dun){}
-
-static inline void fscrypt_set_ice_skip(struct bio *bio, int bi_crypt_skip)
-{}
-
-static inline bool fscrypt_mergeable_bio(struct bio *bio,
- u64 dun, bool bio_encrypted, int bi_crypt_skip)
-{
- return true;
-}
-#endif
-
/* If *pagep is a bounce page, free it and set *pagep to the pagecache page */
static inline void fscrypt_finalize_bounce_page(struct page **pagep)
{
@@ -749,5 +822,4 @@ static inline void fscrypt_finalize_bounce_page(struct page **pagep)
fscrypt_free_bounce_page(page);
}
}
-
#endif /* _LINUX_FSCRYPT_H */
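To see how the new inline_crypt.c helpers above are intended to slot into a filesystem's I/O path (illustrative only, not code from this merge), a bio-building loop might look roughly like the following. The bio_alloc()/submit_bio() usage and the example_* naming are assumptions of the sketch.

/*
 * Sketch: reuse the current bio while the crypto contexts merge, otherwise
 * submit it and start a new one carrying the inode's context.
 */
#include <linux/bio.h>
#include <linux/fscrypt.h>

static struct bio *example_get_bio(struct bio *bio, struct inode *inode,
				   u64 lblk, gfp_t gfp)
{
	if (bio && !fscrypt_mergeable_bio(bio, inode, lblk)) {
		submit_bio(bio);
		bio = NULL;
	}
	if (!bio) {
		bio = bio_alloc(gfp, BIO_MAX_PAGES);
		fscrypt_set_bio_crypt_ctx(bio, inode, lblk, gfp);
	}
	return bio;
}

In the !CONFIG_FS_ENCRYPTION_INLINE_CRYPT stubs above these two calls reduce to a no-op and an unconditional true, so the same loop also serves configurations without inline crypto.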
diff --git a/include/linux/ipa.h b/include/linux/ipa.h
index dcab89e..9ad4cb0 100644
--- a/include/linux/ipa.h
+++ b/include/linux/ipa.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
- * Copyright (c) 2012-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2012-2020, The Linux Foundation. All rights reserved.
*/
#ifndef _IPA_H_
@@ -350,18 +350,28 @@ struct ipa_ep_cfg_holb {
* struct ipa_ep_cfg_deaggr - deaggregation configuration in IPA end-point
* @deaggr_hdr_len: Deaggregation Header length in bytes. Valid only for Input
* Pipes, which are configured for 'Generic' deaggregation.
+ * @syspipe_err_detection - If set to 1, enables error detection for
+ *	de-aggregation. Valid only for Input Pipes, which are configured
+ * for 'Generic' deaggregation.
+ * Note: if this bit is set, de-aggregated frames must be contiguous
+ * in memory.
* @packet_offset_valid: - 0: PACKET_OFFSET is not used, 1: PACKET_OFFSET is
* used.
* @packet_offset_location: Location of packet offset field, which specifies
* the offset to the packet from the start of the packet offset field.
+ * @ignore_min_pkt_err - Ignore packets smaller than the header. This is
+ *	intended for use in RNDIS de-aggregated pipes, to silently ignore a
+ *	redundant 1-byte trailer in the MSFT implementation.
* @max_packet_len: DEAGGR Max Packet Length in Bytes. A Packet with higher
* size wil be treated as an error. 0 - Packet Length is not Bound,
* IPA should not check for a Max Packet Length.
*/
struct ipa_ep_cfg_deaggr {
u32 deaggr_hdr_len;
+ bool syspipe_err_detection;
bool packet_offset_valid;
u32 packet_offset_location;
+ bool ignore_min_pkt_err;
u32 max_packet_len;
};
diff --git a/include/linux/key.h b/include/linux/key.h
index e58ee10f6..86cbff8 100644
--- a/include/linux/key.h
+++ b/include/linux/key.h
@@ -303,6 +303,9 @@ extern key_ref_t key_create_or_update(key_ref_t keyring,
key_perm_t perm,
unsigned long flags);
+extern key_ref_t lookup_user_key(key_serial_t id, unsigned long flags,
+ key_perm_t perm);
+
extern int key_update(key_ref_t key,
const void *payload,
size_t plen);
diff --git a/include/linux/keyslot-manager.h b/include/linux/keyslot-manager.h
new file mode 100644
index 0000000..6d32a03
--- /dev/null
+++ b/include/linux/keyslot-manager.h
@@ -0,0 +1,84 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2019 Google LLC
+ */
+
+#ifndef __LINUX_KEYSLOT_MANAGER_H
+#define __LINUX_KEYSLOT_MANAGER_H
+
+#include <linux/bio.h>
+
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+
+struct keyslot_manager;
+
+/**
+ * struct keyslot_mgmt_ll_ops - functions to manage keyslots in hardware
+ * @keyslot_program: Program the specified key into the specified slot in the
+ * inline encryption hardware.
+ * @keyslot_evict: Evict key from the specified keyslot in the hardware.
+ * The key is provided so that e.g. dm layers can evict
+ * keys from the devices that they map over.
+ * Returns 0 on success, -errno otherwise.
+ * @derive_raw_secret: (Optional) Derive a software secret from a
+ * hardware-wrapped key. Returns 0 on success, -EOPNOTSUPP
+ * if unsupported on the hardware, or another -errno code.
+ *
+ * This structure should be provided by storage device drivers when they set up
+ * a keyslot manager - this structure holds the function ptrs that the keyslot
+ * manager will use to manipulate keyslots in the hardware.
+ */
+struct keyslot_mgmt_ll_ops {
+ int (*keyslot_program)(struct keyslot_manager *ksm,
+ const struct blk_crypto_key *key,
+ unsigned int slot);
+ int (*keyslot_evict)(struct keyslot_manager *ksm,
+ const struct blk_crypto_key *key,
+ unsigned int slot);
+ int (*derive_raw_secret)(struct keyslot_manager *ksm,
+ const u8 *wrapped_key,
+ unsigned int wrapped_key_size,
+ u8 *secret, unsigned int secret_size);
+};
+
+struct keyslot_manager *keyslot_manager_create(unsigned int num_slots,
+ const struct keyslot_mgmt_ll_ops *ksm_ops,
+ const unsigned int crypto_mode_supported[BLK_ENCRYPTION_MODE_MAX],
+ void *ll_priv_data);
+
+int keyslot_manager_get_slot_for_key(struct keyslot_manager *ksm,
+ const struct blk_crypto_key *key);
+
+void keyslot_manager_get_slot(struct keyslot_manager *ksm, unsigned int slot);
+
+void keyslot_manager_put_slot(struct keyslot_manager *ksm, unsigned int slot);
+
+bool keyslot_manager_crypto_mode_supported(struct keyslot_manager *ksm,
+ enum blk_crypto_mode_num crypto_mode,
+ unsigned int data_unit_size);
+
+int keyslot_manager_evict_key(struct keyslot_manager *ksm,
+ const struct blk_crypto_key *key);
+
+void keyslot_manager_reprogram_all_keys(struct keyslot_manager *ksm);
+
+void *keyslot_manager_private(struct keyslot_manager *ksm);
+
+void keyslot_manager_destroy(struct keyslot_manager *ksm);
+
+struct keyslot_manager *keyslot_manager_create_passthrough(
+ const struct keyslot_mgmt_ll_ops *ksm_ops,
+ const unsigned int crypto_mode_supported[BLK_ENCRYPTION_MODE_MAX],
+ void *ll_priv_data);
+
+void keyslot_manager_intersect_modes(struct keyslot_manager *parent,
+ const struct keyslot_manager *child);
+
+int keyslot_manager_derive_raw_secret(struct keyslot_manager *ksm,
+ const u8 *wrapped_key,
+ unsigned int wrapped_key_size,
+ u8 *secret, unsigned int secret_size);
+
+#endif /* CONFIG_BLK_INLINE_ENCRYPTION */
+
+#endif /* __LINUX_KEYSLOT_MANAGER_H */
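On the driver side (sketch only, not part of the merge), a storage driver would typically fill in keyslot_mgmt_ll_ops and hang the resulting manager off its request queue. Everything named example_* is a placeholder, the 32-slot count is arbitrary, and the per-mode array is assumed here to carry a bitmask of supported data-unit sizes.

#include <linux/blkdev.h>
#include <linux/keyslot-manager.h>

/* Placeholders standing in for the driver's real hardware programming. */
struct example_hw;
int example_hw_program(struct example_hw *hw, unsigned int slot,
		       const u8 *raw, unsigned int size,
		       unsigned int data_unit_size);
int example_hw_evict(struct example_hw *hw, unsigned int slot);

static int example_keyslot_program(struct keyslot_manager *ksm,
				   const struct blk_crypto_key *key,
				   unsigned int slot)
{
	struct example_hw *hw = keyslot_manager_private(ksm);

	return example_hw_program(hw, slot, key->raw, key->size,
				  key->data_unit_size);
}

static int example_keyslot_evict(struct keyslot_manager *ksm,
				 const struct blk_crypto_key *key,
				 unsigned int slot)
{
	return example_hw_evict(keyslot_manager_private(ksm), slot);
}

static const struct keyslot_mgmt_ll_ops example_ksm_ops = {
	.keyslot_program	= example_keyslot_program,
	.keyslot_evict		= example_keyslot_evict,
};

/* Assumes CONFIG_BLK_INLINE_ENCRYPTION=y, so struct request_queue has ->ksm. */
static int example_setup_ksm(struct request_queue *q, struct example_hw *hw)
{
	unsigned int modes[BLK_ENCRYPTION_MODE_MAX] = {
		/* assumed encoding: bitmask of supported data unit sizes */
		[BLK_ENCRYPTION_MODE_AES_256_XTS] = 512 | 4096,
	};

	q->ksm = keyslot_manager_create(32, &example_ksm_ops, modes, hw);
	return q->ksm ? 0 : -ENOMEM;
}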
diff --git a/include/linux/leds-qti-flash.h b/include/linux/leds-qti-flash.h
index ac87c46..56d6921 100644
--- a/include/linux/leds-qti-flash.h
+++ b/include/linux/leds-qti-flash.h
@@ -8,7 +8,10 @@
#include <linux/leds.h>
-#define QUERY_MAX_AVAIL_CURRENT BIT(0)
+#define ENABLE_REGULATOR BIT(0)
+#define DISABLE_REGULATOR BIT(1)
+#define QUERY_MAX_AVAIL_CURRENT BIT(2)
+#define QUERY_MAX_CURRENT BIT(3)
int qpnp_flash_register_led_prepare(struct device *dev, void *data);
diff --git a/include/linux/lsm_hooks.h b/include/linux/lsm_hooks.h
index 8f29eb0..0605f86 100644
--- a/include/linux/lsm_hooks.h
+++ b/include/linux/lsm_hooks.h
@@ -1516,8 +1516,6 @@ union security_list_options {
size_t *len);
int (*inode_create)(struct inode *dir, struct dentry *dentry,
umode_t mode);
- int (*inode_post_create)(struct inode *dir, struct dentry *dentry,
- umode_t mode);
int (*inode_link)(struct dentry *old_dentry, struct inode *dir,
struct dentry *new_dentry);
int (*inode_unlink)(struct inode *dir, struct dentry *dentry);
@@ -1840,7 +1838,6 @@ struct security_hook_heads {
struct hlist_head inode_free_security;
struct hlist_head inode_init_security;
struct hlist_head inode_create;
- struct hlist_head inode_post_create;
struct hlist_head inode_link;
struct hlist_head inode_unlink;
struct hlist_head inode_symlink;
diff --git a/include/linux/mhi.h b/include/linux/mhi.h
index 1683035..547beaf 100644
--- a/include/linux/mhi.h
+++ b/include/linux/mhi.h
@@ -38,6 +38,7 @@ enum MHI_CB {
MHI_CB_EE_MISSION_MODE,
MHI_CB_SYS_ERROR,
MHI_CB_FATAL_ERROR,
+ MHI_CB_FW_FALLBACK_IMG,
};
/**
@@ -282,6 +283,7 @@ struct mhi_controller {
/* fw images */
const char *fw_image;
+ const char *fw_image_fallback;
const char *edl_image;
/* mhi host manages downloading entire fbc images */
@@ -402,6 +404,7 @@ struct mhi_controller {
bool initiate_mhi_reset;
void *priv_data;
void *log_buf;
+ void *cntrl_log_buf;
struct dentry *dentry;
struct dentry *parent;
@@ -855,7 +858,7 @@ char *mhi_get_restart_reason(const char *name);
#ifdef CONFIG_MHI_DEBUG
#define MHI_VERB(fmt, ...) do { \
- if (mhi_cntrl->klog_lvl <= MHI_MSG_VERBOSE) \
+ if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_VERBOSE) \
pr_dbg("[D][%s] " fmt, __func__, ##__VA_ARGS__);\
} while (0)
@@ -865,8 +868,18 @@ char *mhi_get_restart_reason(const char *name);
#endif
+#define MHI_CNTRL_LOG(fmt, ...) do { \
+ if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_INFO) \
+ pr_info("[I][%s] " fmt, __func__, ##__VA_ARGS__);\
+} while (0)
+
+#define MHI_CNTRL_ERR(fmt, ...) do { \
+ if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_ERROR) \
+ pr_err("[E][%s] " fmt, __func__, ##__VA_ARGS__); \
+} while (0)
+
#define MHI_LOG(fmt, ...) do { \
- if (mhi_cntrl->klog_lvl <= MHI_MSG_INFO) \
+ if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_INFO) \
pr_info("[I][%s] " fmt, __func__, ##__VA_ARGS__);\
} while (0)
@@ -906,6 +919,20 @@ char *mhi_get_restart_reason(const char *name);
#endif
+#define MHI_CNTRL_LOG(fmt, ...) do { \
+ if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_INFO) \
+ pr_err("[I][%s] " fmt, __func__, ##__VA_ARGS__);\
+ ipc_log_string(mhi_cntrl->cntrl_log_buf, "[I][%s] " fmt, \
+ __func__, ##__VA_ARGS__); \
+} while (0)
+
+#define MHI_CNTRL_ERR(fmt, ...) do { \
+ if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_ERROR) \
+ pr_err("[E][%s] " fmt, __func__, ##__VA_ARGS__); \
+ ipc_log_string(mhi_cntrl->cntrl_log_buf, "[E][%s] " fmt, \
+ __func__, ##__VA_ARGS__); \
+} while (0)
+
#define MHI_LOG(fmt, ...) do { \
if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_INFO) \
pr_err("[I][%s] " fmt, __func__, ##__VA_ARGS__);\
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 8652399..2b39bdab 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2331,6 +2331,7 @@ extern void set_dma_reserve(unsigned long new_dma_reserve);
extern void memmap_init_zone(unsigned long, int, unsigned long, unsigned long,
enum memmap_context, struct vmem_altmap *);
extern void setup_per_zone_wmarks(void);
+extern void update_kswapd_threads(void);
extern int __meminit init_per_zone_wmark_min(void);
extern void mem_init(void);
extern void __init mmap_init(void);
@@ -2351,6 +2352,7 @@ extern void zone_pcp_update(struct zone *zone);
extern void zone_pcp_reset(struct zone *zone);
/* page_alloc.c */
+extern int kswapd_threads;
extern int min_free_kbytes;
extern int watermark_boost_factor;
extern int watermark_scale_factor;
diff --git a/include/linux/mmc/core.h b/include/linux/mmc/core.h
index 5e5256d..5754c8d 100644
--- a/include/linux/mmc/core.h
+++ b/include/linux/mmc/core.h
@@ -164,7 +164,6 @@ struct mmc_request {
*/
void (*recovery_notifier)(struct mmc_request *);
struct mmc_host *host;
- struct request *req;
/* Allow other commands during this ongoing data transfer or busy wait */
bool cap_cmd_during_tfr;
diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
index cd5410d..6ab6bfb 100644
--- a/include/linux/mmc/host.h
+++ b/include/linux/mmc/host.h
@@ -260,6 +260,13 @@ struct mmc_cqe_ops {
* will have zero data bytes transferred.
*/
void (*cqe_recovery_finish)(struct mmc_host *host);
+ /*
+ * Update the request queue with keyslot manager details. This keyslot
+	 * manager will be used by block crypto to configure the crypto engine
+ * for data encryption.
+ */
+ void (*cqe_crypto_update_queue)(struct mmc_host *host,
+ struct request_queue *queue);
};
struct mmc_async_req {
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 0be318e..664e49e 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -36,6 +36,8 @@
*/
#define PAGE_ALLOC_COSTLY_ORDER 3
+#define MAX_KSWAPD_THREADS 16
+
enum migratetype {
MIGRATE_UNMOVABLE,
MIGRATE_MOVABLE,
@@ -676,8 +678,10 @@ typedef struct pglist_data {
int node_id;
wait_queue_head_t kswapd_wait;
wait_queue_head_t pfmemalloc_wait;
- struct task_struct *kswapd; /* Protected by
- mem_hotplug_begin/end() */
+ /*
+ * Protected by mem_hotplug_begin/end()
+ */
+ struct task_struct *kswapd[MAX_KSWAPD_THREADS];
int kswapd_order;
enum zone_type kswapd_classzone_idx;
@@ -904,6 +908,8 @@ static inline int is_highmem(struct zone *zone)
/* These two functions are used to setup the per zone pages min values */
struct ctl_table;
+int kswapd_threads_sysctl_handler(struct ctl_table *, int,
+ void __user *, size_t *, loff_t *);
int min_free_kbytes_sysctl_handler(struct ctl_table *, int,
void __user *, size_t *, loff_t *);
int watermark_boost_factor_sysctl_handler(struct ctl_table *, int,
diff --git a/include/linux/msm_kgsl.h b/include/linux/msm_kgsl.h
index a527a0a..c84db40 100644
--- a/include/linux/msm_kgsl.h
+++ b/include/linux/msm_kgsl.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
- * Copyright (c) 2018, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2018, 2020 The Linux Foundation. All rights reserved.
*/
#ifndef _MSM_KGSL_H
#define _MSM_KGSL_H
@@ -11,6 +11,7 @@
void *kgsl_pwr_limits_add(u32 id);
void kgsl_pwr_limits_del(void *limit);
int kgsl_pwr_limits_set_freq(void *limit, unsigned int freq);
+int kgsl_pwr_limits_set_gpu_fmax(void *limit, unsigned int freq);
void kgsl_pwr_limits_set_default(void *limit);
unsigned int kgsl_pwr_limits_get_freq(u32 id);
diff --git a/include/linux/of_iommu.h b/include/linux/of_iommu.h
index f3d40dd..ad07079 100644
--- a/include/linux/of_iommu.h
+++ b/include/linux/of_iommu.h
@@ -15,6 +15,9 @@ extern int of_get_dma_window(struct device_node *dn, const char *prefix,
extern const struct iommu_ops *of_iommu_configure(struct device *dev,
struct device_node *master_np);
+extern int of_iommu_fill_fwspec(struct device *dev, struct of_phandle_args
+ *iommu_spec);
+
#else
static inline int of_get_dma_window(struct device_node *dn, const char *prefix,
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 358b70f..436ad99 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -537,6 +537,8 @@ static inline int wait_on_page_locked_killable(struct page *page)
return wait_on_page_bit_killable(compound_head(page), PG_locked);
}
+extern void put_and_wait_on_page_locked(struct page *page);
+
/*
* Wait for a page to complete writeback
*/
diff --git a/include/linux/power_supply.h b/include/linux/power_supply.h
index e78b73b..f6ebcb5 100644
--- a/include/linux/power_supply.h
+++ b/include/linux/power_supply.h
@@ -362,6 +362,7 @@ enum power_supply_property {
POWER_SUPPLY_PROP_IRQ_STATUS,
POWER_SUPPLY_PROP_PARALLEL_OUTPUT_MODE,
POWER_SUPPLY_PROP_FG_TYPE,
+ POWER_SUPPLY_PROP_CHARGER_STATUS,
/* Local extensions of type int64_t */
POWER_SUPPLY_PROP_CHARGE_COUNTER_EXT,
/* Properties of type `const char *' */
diff --git a/include/linux/security.h b/include/linux/security.h
index fee252f..9fa7661 100644
--- a/include/linux/security.h
+++ b/include/linux/security.h
@@ -31,7 +31,6 @@
#include <linux/string.h>
#include <linux/mm.h>
#include <linux/fs.h>
-#include <linux/bio.h>
struct linux_binprm;
struct cred;
@@ -284,8 +283,6 @@ int security_old_inode_init_security(struct inode *inode, struct inode *dir,
const struct qstr *qstr, const char **name,
void **value, size_t *len);
int security_inode_create(struct inode *dir, struct dentry *dentry, umode_t mode);
-int security_inode_post_create(struct inode *dir, struct dentry *dentry,
- umode_t mode);
int security_inode_link(struct dentry *old_dentry, struct inode *dir,
struct dentry *new_dentry);
int security_inode_unlink(struct inode *dir, struct dentry *dentry);
@@ -674,13 +671,6 @@ static inline int security_inode_create(struct inode *dir,
return 0;
}
-static inline int security_inode_post_create(struct inode *dir,
- struct dentry *dentry,
- umode_t mode)
-{
- return 0;
-}
-
static inline int security_inode_link(struct dentry *old_dentry,
struct inode *dir,
struct dentry *new_dentry)
diff --git a/include/linux/soc/qcom/smd-rpm.h b/include/linux/soc/qcom/smd-rpm.h
index fb24896..500bc7e 100644
--- a/include/linux/soc/qcom/smd-rpm.h
+++ b/include/linux/soc/qcom/smd-rpm.h
@@ -34,6 +34,8 @@ struct qcom_smd_rpm;
#define QCOM_SMD_RPM_AGGR_CLK 0x72676761
#define QCOM_SMD_RPM_QUP_CLK 0x00707571
#define QCOM_SMD_RPM_MMXI_CLK 0x69786D6D
+#define QCOM_SMD_RPM_HWKM_CLK 0x6D6B7768
+#define QCOM_SMD_RPM_PKA_CLK 0x616B70
int qcom_rpm_smd_write(struct qcom_smd_rpm *rpm,
int state,
diff --git a/include/linux/usb/gadget.h b/include/linux/usb/gadget.h
index 945c867..6c70f75 100644
--- a/include/linux/usb/gadget.h
+++ b/include/linux/usb/gadget.h
@@ -84,6 +84,9 @@ enum gsi_ep_op {
* @db_reg_phs_addr_lsb: IPA channel doorbell register's physical address LSB
* @mapped_db_reg_phs_addr_lsb: doorbell LSB IOVA address mapped with IOMMU
* @db_reg_phs_addr_msb: IPA channel doorbell register's physical address MSB
+ * @sgt_trb_xfer_ring: USB TRB ring related sgtable entries
+ * @sgt_data_buff: Data buffer related sgtable entries
+ * @dev: pointer to the DMA-capable dwc device
*/
struct usb_gsi_request {
void *buf_base_addr;
@@ -95,6 +98,7 @@ struct usb_gsi_request {
u32 db_reg_phs_addr_msb;
struct sg_table sgt_trb_xfer_ring;
struct sg_table sgt_data_buff;
+ struct device *dev;
};
/*
diff --git a/include/linux/usb/phy.h b/include/linux/usb/phy.h
index 2a44134..b863726c 100644
--- a/include/linux/usb/phy.h
+++ b/include/linux/usb/phy.h
@@ -26,6 +26,9 @@
#define PHY_HSFS_MODE BIT(8)
#define PHY_LS_MODE BIT(9)
#define PHY_USB_DP_CONCURRENT_MODE BIT(10)
+#define EUD_SPOOF_DISCONNECT BIT(11)
+#define EUD_SPOOF_CONNECT BIT(12)
+#define PHY_SUS_OVERRIDE BIT(13)
enum usb_phy_interface {
USBPHY_INTERFACE_MODE_UNKNOWN,
diff --git a/include/linux/usb/usb_qdss.h b/include/linux/usb/usb_qdss.h
index ae5cfa3..645d6f6 100644
--- a/include/linux/usb/usb_qdss.h
+++ b/include/linux/usb/usb_qdss.h
@@ -20,7 +20,6 @@ struct qdss_request {
struct scatterlist *sg;
unsigned int num_sgs;
unsigned int num_mapped_sgs;
- struct completion write_done;
};
struct usb_qdss_ch {
@@ -41,17 +40,22 @@ enum qdss_state {
USB_QDSS_CTRL_WRITE_DONE,
};
+struct qdss_req {
+ struct usb_request *usb_req;
+ struct completion write_done;
+ struct qdss_request *qdss_req;
+ struct list_head list;
+};
+
#if IS_ENABLED(CONFIG_USB_F_QDSS)
struct usb_qdss_ch *usb_qdss_open(const char *name, void *priv,
void (*notify)(void *priv, unsigned int event,
struct qdss_request *d_req, struct usb_qdss_ch *ch));
void usb_qdss_close(struct usb_qdss_ch *ch);
-int usb_qdss_alloc_req(struct usb_qdss_ch *ch, int n_write, int n_read);
+int usb_qdss_alloc_req(struct usb_qdss_ch *ch, int n_write);
void usb_qdss_free_req(struct usb_qdss_ch *ch);
-int usb_qdss_read(struct usb_qdss_ch *ch, struct qdss_request *d_req);
int usb_qdss_write(struct usb_qdss_ch *ch, struct qdss_request *d_req);
int usb_qdss_ctrl_write(struct usb_qdss_ch *ch, struct qdss_request *d_req);
-int usb_qdss_ctrl_read(struct usb_qdss_ch *ch, struct qdss_request *d_req);
#else
static inline struct usb_qdss_ch *usb_qdss_open(const char *name, void *priv,
void (*n)(void *, unsigned int event,
@@ -60,11 +64,6 @@ static inline struct usb_qdss_ch *usb_qdss_open(const char *name, void *priv,
return ERR_PTR(-ENODEV);
}
-static inline int usb_qdss_read(struct usb_qdss_ch *c, struct qdss_request *d)
-{
- return -ENODEV;
-}
-
static inline int usb_qdss_write(struct usb_qdss_ch *c, struct qdss_request *d)
{
return -ENODEV;
@@ -76,11 +75,6 @@ static inline int usb_qdss_ctrl_write(struct usb_qdss_ch *c,
return -ENODEV;
}
-static inline int usb_qdss_ctrl_read(struct usb_qdss_ch *c,
- struct qdss_request *d)
-{
- return -ENODEV;
-}
static inline int usb_qdss_alloc_req(struct usb_qdss_ch *c, int n_wr, int n_rd)
{
return -ENODEV;
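With the read path removed, a QDSS client only allocates write requests and is notified of completion through the callback passed to usb_qdss_open(). A hedged sketch of the remaining call sequence is below; the channel name, request count, and the callback body are illustrative assumptions, while the prototypes are the ones declared above.

#include <linux/err.h>
#include <linux/printk.h>
#include <linux/usb/usb_qdss.h>

/* Illustrative notify callback; real handling of each event is omitted. */
static void example_qdss_notify(void *priv, unsigned int event,
				struct qdss_request *d_req,
				struct usb_qdss_ch *ch)
{
	pr_debug("qdss event %u\n", event);
}

static int example_qdss_send(struct qdss_request *d_req)
{
	struct usb_qdss_ch *ch;
	int ret;

	ch = usb_qdss_open("qdss", NULL, example_qdss_notify);
	if (IS_ERR_OR_NULL(ch))
		return ch ? PTR_ERR(ch) : -ENODEV;

	/* Only write requests are allocated now; the read count is gone. */
	ret = usb_qdss_alloc_req(ch, 16);
	if (ret)
		goto close;

	ret = usb_qdss_write(ch, d_req);

	usb_qdss_free_req(ch);
close:
	usb_qdss_close(ch);
	return ret;
}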
diff --git a/include/net/cnss2.h b/include/net/cnss2.h
index 78143ea..eb37edc 100644
--- a/include/net/cnss2.h
+++ b/include/net/cnss2.h
@@ -67,6 +67,25 @@ struct cnss_wlan_runtime_ops {
int (*runtime_resume)(struct pci_dev *pdev);
};
+enum cnss_driver_status {
+ CNSS_UNINITIALIZED,
+ CNSS_INITIALIZED,
+ CNSS_LOAD_UNLOAD,
+ CNSS_RECOVERY,
+ CNSS_FW_DOWN,
+ CNSS_HANG_EVENT,
+};
+
+struct cnss_hang_event {
+ void *hang_event_data;
+ u16 hang_event_data_len;
+};
+
+struct cnss_uevent_data {
+ enum cnss_driver_status status;
+ void *data;
+};
+
struct cnss_wlan_driver {
char *name;
int (*probe)(struct pci_dev *pdev, const struct pci_device_id *id);
@@ -83,18 +102,12 @@ struct cnss_wlan_driver {
int (*resume_noirq)(struct pci_dev *pdev);
void (*modem_status)(struct pci_dev *pdev, int state);
void (*update_status)(struct pci_dev *pdev, uint32_t status);
+ int (*update_event)(struct pci_dev *pdev,
+ struct cnss_uevent_data *uevent);
struct cnss_wlan_runtime_ops *runtime_ops;
const struct pci_device_id *id_table;
};
-enum cnss_driver_status {
- CNSS_UNINITIALIZED,
- CNSS_INITIALIZED,
- CNSS_LOAD_UNLOAD,
- CNSS_RECOVERY,
- CNSS_FW_DOWN,
-};
-
struct cnss_ce_tgt_pipe_cfg {
u32 pipe_num;
u32 pipe_dir;
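update_status() keeps its plain integer status, while hang data and richer driver state now arrive through the new update_event() callback and struct cnss_uevent_data. A minimal sketch of how a WLAN driver might wire this up is shown below; the handler body and the driver name are assumptions, the types come from this header, and the other cnss_wlan_driver callbacks are omitted for brevity.

#include <linux/pci.h>
#include <linux/printk.h>
#include <net/cnss2.h>

/* Illustrative handler; the reaction to each status is an assumption. */
static int example_update_event(struct pci_dev *pdev,
				struct cnss_uevent_data *uevent)
{
	struct cnss_hang_event *hang;

	if (!uevent)
		return -EINVAL;

	switch (uevent->status) {
	case CNSS_HANG_EVENT:
		hang = uevent->data;
		if (hang && hang->hang_event_data)
			pr_info("hang data, %d bytes\n",
				hang->hang_event_data_len);
		break;
	case CNSS_FW_DOWN:
		/* quiesce pending work before the firmware goes away */
		break;
	default:
		break;
	}
	return 0;
}

static struct cnss_wlan_driver example_driver = {
	.name         = "example_wlan",
	.update_event = example_update_event,
	/* probe, remove, and the other callbacks are omitted here */
};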
diff --git a/include/scsi/scsi_host.h b/include/scsi/scsi_host.h
index 20c5d80..1417526 100644
--- a/include/scsi/scsi_host.h
+++ b/include/scsi/scsi_host.h
@@ -651,9 +651,6 @@ struct Scsi_Host {
/* The controller does not support WRITE SAME */
unsigned no_write_same:1;
- /* Inline encryption support? */
- unsigned inlinecrypt_support:1;
-
unsigned use_blk_mq:1;
unsigned use_cmd_list:1;
diff --git a/include/soc/qcom/icnss.h b/include/soc/qcom/icnss.h
index 6d9d766..a5aeb97 100644
--- a/include/soc/qcom/icnss.h
+++ b/include/soc/qcom/icnss.h
@@ -18,12 +18,16 @@
enum icnss_uevent {
ICNSS_UEVENT_FW_CRASHED,
ICNSS_UEVENT_FW_DOWN,
+ ICNSS_UEVENT_HANG_DATA,
+};
+
+struct icnss_uevent_hang_data {
+ void *hang_event_data;
+ uint16_t hang_event_data_len;
};
struct icnss_uevent_fw_down_data {
bool crashed;
- void *hang_event_data;
- uint16_t hang_event_data_len;
};
struct icnss_uevent_data {
diff --git a/include/soc/qcom/icnss2.h b/include/soc/qcom/icnss2.h
index fca498f..64128de 100644
--- a/include/soc/qcom/icnss2.h
+++ b/include/soc/qcom/icnss2.h
@@ -164,4 +164,7 @@ extern int icnss_get_user_msi_assignment(struct device *dev, char *user_name,
extern int icnss_get_msi_irq(struct device *dev, unsigned int vector);
extern void icnss_get_msi_address(struct device *dev, u32 *msi_addr_low,
u32 *msi_addr_high);
+extern int icnss_qmi_send(struct device *dev, int type, void *cmd,
+ int cmd_len, void *cb_ctx,
+ int (*cb)(void *ctx, void *event, int event_len));
#endif /* _ICNSS_WLAN_H_ */
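icnss_qmi_send() lets the WLAN driver tunnel a QMI message through the platform driver and have the response delivered to a callback. A hedged usage sketch follows; the command type, payload, and callback semantics are assumptions, and only the prototype above is taken from the header.

#include <linux/device.h>
#include <linux/printk.h>
#include <linux/types.h>
#include <soc/qcom/icnss2.h>

/* Assumed callback contract: invoked with the response buffer. */
static int example_qmi_resp_cb(void *ctx, void *event, int event_len)
{
	pr_debug("qmi response, %d bytes (ctx %p)\n", event_len, ctx);
	return 0;
}

static int example_send_qmi_cmd(struct device *dev)
{
	u8 cmd[8] = { 0 };	/* illustrative payload */

	/* Command type 1 is a placeholder, not a real wlfw constant. */
	return icnss_qmi_send(dev, 1, cmd, sizeof(cmd),
			      NULL /* cb_ctx */, example_qmi_resp_cb);
}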
diff --git a/include/soc/qcom/socinfo.h b/include/soc/qcom/socinfo.h
index a808e7d..baa41cd 100644
--- a/include/soc/qcom/socinfo.h
+++ b/include/soc/qcom/socinfo.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
- * Copyright (c) 2009-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2009-2020, The Linux Foundation. All rights reserved.
*/
#ifndef _ARCH_ARM_MACH_MSM_SOCINFO_H_
@@ -74,6 +74,8 @@
of_flat_dt_is_compatible(of_get_flat_dt_root(), "qcom,sdxprairie")
#define early_machine_is_sdmmagpie() \
of_flat_dt_is_compatible(of_get_flat_dt_root(), "qcom,sdmmagpie")
+#define early_machine_is_sdm660() \
+ of_flat_dt_is_compatible(of_get_flat_dt_root(), "qcom,sdm660")
#else
#define of_board_is_sim() 0
#define of_board_is_rumi() 0
@@ -105,6 +107,7 @@
#define early_machine_is_qcs405() 0
#define early_machine_is_sdxprairie() 0
#define early_machine_is_sdmmagpie() 0
+#define early_machine_is_sdm660() 0
#endif
#define PLATFORM_SUBTYPE_MDM 1
@@ -125,6 +128,7 @@ enum msm_cpu {
MSM_CPU_8916,
MSM_CPU_8084,
MSM_CPU_8996,
+ MSM_CPU_SDM660,
MSM_CPU_SM8150,
MSM_CPU_SA8150,
MSM_CPU_KONA,
diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
index 6eb8e9f..463117c 100644
--- a/include/uapi/linux/fs.h
+++ b/include/uapi/linux/fs.h
@@ -13,6 +13,9 @@
#include <linux/limits.h>
#include <linux/ioctl.h>
#include <linux/types.h>
+#ifndef __KERNEL__
+#include <linux/fscrypt.h>
+#endif
/*
* It's silly to have NR_OPEN bigger than NR_FILE, but you can change
@@ -259,58 +262,6 @@ struct fsxattr {
#define FS_IOC_SETFSLABEL _IOW(0x94, 50, char[FSLABEL_MAX])
/*
- * File system encryption support
- */
-/* Policy provided via an ioctl on the topmost directory */
-#define FS_KEY_DESCRIPTOR_SIZE 8
-
-#define FS_POLICY_FLAGS_PAD_4 0x00
-#define FS_POLICY_FLAGS_PAD_8 0x01
-#define FS_POLICY_FLAGS_PAD_16 0x02
-#define FS_POLICY_FLAGS_PAD_32 0x03
-#define FS_POLICY_FLAGS_PAD_MASK 0x03
-#define FS_POLICY_FLAG_DIRECT_KEY 0x04 /* use master key directly */
-#define FS_POLICY_FLAGS_VALID 0x07
-
-/* Encryption algorithms */
-#define FS_ENCRYPTION_MODE_INVALID 0
-#define FS_ENCRYPTION_MODE_AES_256_XTS 1
-#define FS_ENCRYPTION_MODE_AES_256_GCM 2
-#define FS_ENCRYPTION_MODE_AES_256_CBC 3
-#define FS_ENCRYPTION_MODE_AES_256_CTS 4
-#define FS_ENCRYPTION_MODE_AES_128_CBC 5
-#define FS_ENCRYPTION_MODE_AES_128_CTS 6
-#define FS_ENCRYPTION_MODE_SPECK128_256_XTS 7 /* Removed, do not use. */
-#define FS_ENCRYPTION_MODE_SPECK128_256_CTS 8 /* Removed, do not use. */
-#define FS_ENCRYPTION_MODE_ADIANTUM 9
-#define FS_ENCRYPTION_MODE_PRIVATE 127
-
-struct fscrypt_policy {
- __u8 version;
- __u8 contents_encryption_mode;
- __u8 filenames_encryption_mode;
- __u8 flags;
- __u8 master_key_descriptor[FS_KEY_DESCRIPTOR_SIZE];
-};
-
-#define FS_IOC_SET_ENCRYPTION_POLICY _IOR('f', 19, struct fscrypt_policy)
-#define FS_IOC_GET_ENCRYPTION_PWSALT _IOW('f', 20, __u8[16])
-#define FS_IOC_GET_ENCRYPTION_POLICY _IOW('f', 21, struct fscrypt_policy)
-
-/* Parameters for passing an encryption key into the kernel keyring */
-#define FS_KEY_DESC_PREFIX "fscrypt:"
-#define FS_KEY_DESC_PREFIX_SIZE 8
-
-/* Structure that userspace passes to the kernel keyring */
-#define FS_MAX_KEY_SIZE 64
-
-struct fscrypt_key {
- __u32 mode;
- __u8 raw[FS_MAX_KEY_SIZE];
- __u32 size;
-};
-
-/*
* Inode flags (FS_IOC_GETFLAGS / FS_IOC_SETFLAGS)
*
* Note: for historical reasons, these flags were originally used and
diff --git a/include/uapi/linux/fscrypt.h b/include/uapi/linux/fscrypt.h
new file mode 100644
index 0000000..12ac8cc
--- /dev/null
+++ b/include/uapi/linux/fscrypt.h
@@ -0,0 +1,197 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/*
+ * fscrypt user API
+ *
+ * These ioctls can be used on filesystems that support fscrypt. See the
+ * "User API" section of Documentation/filesystems/fscrypt.rst.
+ */
+#ifndef _UAPI_LINUX_FSCRYPT_H
+#define _UAPI_LINUX_FSCRYPT_H
+
+#include <linux/types.h>
+
+/* Encryption policy flags */
+#define FSCRYPT_POLICY_FLAGS_PAD_4 0x00
+#define FSCRYPT_POLICY_FLAGS_PAD_8 0x01
+#define FSCRYPT_POLICY_FLAGS_PAD_16 0x02
+#define FSCRYPT_POLICY_FLAGS_PAD_32 0x03
+#define FSCRYPT_POLICY_FLAGS_PAD_MASK 0x03
+#define FSCRYPT_POLICY_FLAG_DIRECT_KEY 0x04
+#define FSCRYPT_POLICY_FLAG_IV_INO_LBLK_64 0x08
+#define FSCRYPT_POLICY_FLAGS_VALID 0x0F
+
+/* Encryption algorithms */
+#define FSCRYPT_MODE_AES_256_XTS 1
+#define FSCRYPT_MODE_AES_256_CTS 4
+#define FSCRYPT_MODE_AES_128_CBC 5
+#define FSCRYPT_MODE_AES_128_CTS 6
+#define FSCRYPT_MODE_ADIANTUM 9
+#define FSCRYPT_MODE_PRIVATE 127
+#define __FSCRYPT_MODE_MAX 127
+/*
+ * Legacy policy version; ad-hoc KDF and no key verification.
+ * For new encrypted directories, use fscrypt_policy_v2 instead.
+ *
+ * Careful: the .version field for this is actually 0, not 1.
+ */
+#define FSCRYPT_POLICY_V1 0
+#define FSCRYPT_KEY_DESCRIPTOR_SIZE 8
+struct fscrypt_policy_v1 {
+ __u8 version;
+ __u8 contents_encryption_mode;
+ __u8 filenames_encryption_mode;
+ __u8 flags;
+ __u8 master_key_descriptor[FSCRYPT_KEY_DESCRIPTOR_SIZE];
+};
+#define fscrypt_policy fscrypt_policy_v1
+
+/*
+ * Process-subscribed "logon" key description prefix and payload format.
+ * Deprecated; prefer FS_IOC_ADD_ENCRYPTION_KEY instead.
+ */
+#define FSCRYPT_KEY_DESC_PREFIX "fscrypt:"
+#define FSCRYPT_KEY_DESC_PREFIX_SIZE 8
+#define FSCRYPT_MAX_KEY_SIZE 64
+struct fscrypt_key {
+ __u32 mode;
+ __u8 raw[FSCRYPT_MAX_KEY_SIZE];
+ __u32 size;
+};
+
+/*
+ * New policy version with HKDF and key verification (recommended).
+ */
+#define FSCRYPT_POLICY_V2 2
+#define FSCRYPT_KEY_IDENTIFIER_SIZE 16
+struct fscrypt_policy_v2 {
+ __u8 version;
+ __u8 contents_encryption_mode;
+ __u8 filenames_encryption_mode;
+ __u8 flags;
+ __u8 __reserved[4];
+ __u8 master_key_identifier[FSCRYPT_KEY_IDENTIFIER_SIZE];
+};
+
+/* Struct passed to FS_IOC_GET_ENCRYPTION_POLICY_EX */
+struct fscrypt_get_policy_ex_arg {
+ __u64 policy_size; /* input/output */
+ union {
+ __u8 version;
+ struct fscrypt_policy_v1 v1;
+ struct fscrypt_policy_v2 v2;
+ } policy; /* output */
+};
+
+/*
+ * v1 policy keys are specified by an arbitrary 8-byte key "descriptor",
+ * matching fscrypt_policy_v1::master_key_descriptor.
+ */
+#define FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR 1
+
+/*
+ * v2 policy keys are specified by a 16-byte key "identifier" which the kernel
+ * calculates as a cryptographic hash of the key itself,
+ * matching fscrypt_policy_v2::master_key_identifier.
+ */
+#define FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER 2
+
+/*
+ * Specifies a key, either for v1 or v2 policies. This doesn't contain the
+ * actual key itself; this is just the "name" of the key.
+ */
+struct fscrypt_key_specifier {
+ __u32 type; /* one of FSCRYPT_KEY_SPEC_TYPE_* */
+ __u32 __reserved;
+ union {
+ __u8 __reserved[32]; /* reserve some extra space */
+ __u8 descriptor[FSCRYPT_KEY_DESCRIPTOR_SIZE];
+ __u8 identifier[FSCRYPT_KEY_IDENTIFIER_SIZE];
+ } u;
+};
+
+/*
+ * Payload of Linux keyring key of type "fscrypt-provisioning", referenced by
+ * fscrypt_add_key_arg::key_id as an alternative to fscrypt_add_key_arg::raw.
+ */
+struct fscrypt_provisioning_key_payload {
+ __u32 type;
+ __u32 __reserved;
+ __u8 raw[];
+};
+
+/* Struct passed to FS_IOC_ADD_ENCRYPTION_KEY */
+struct fscrypt_add_key_arg {
+ struct fscrypt_key_specifier key_spec;
+ __u32 raw_size;
+ __u32 key_id;
+ __u32 __reserved[7];
+ /* N.B.: "temporary" flag, not reserved upstream */
+#define __FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED 0x00000001
+ __u32 __flags;
+ __u8 raw[];
+};
+
+/* Struct passed to FS_IOC_REMOVE_ENCRYPTION_KEY */
+struct fscrypt_remove_key_arg {
+ struct fscrypt_key_specifier key_spec;
+#define FSCRYPT_KEY_REMOVAL_STATUS_FLAG_FILES_BUSY 0x00000001
+#define FSCRYPT_KEY_REMOVAL_STATUS_FLAG_OTHER_USERS 0x00000002
+ __u32 removal_status_flags; /* output */
+ __u32 __reserved[5];
+};
+
+/* Struct passed to FS_IOC_GET_ENCRYPTION_KEY_STATUS */
+struct fscrypt_get_key_status_arg {
+ /* input */
+ struct fscrypt_key_specifier key_spec;
+ __u32 __reserved[6];
+
+ /* output */
+#define FSCRYPT_KEY_STATUS_ABSENT 1
+#define FSCRYPT_KEY_STATUS_PRESENT 2
+#define FSCRYPT_KEY_STATUS_INCOMPLETELY_REMOVED 3
+ __u32 status;
+#define FSCRYPT_KEY_STATUS_FLAG_ADDED_BY_SELF 0x00000001
+ __u32 status_flags;
+ __u32 user_count;
+ __u32 __out_reserved[13];
+};
+
+#define FS_IOC_SET_ENCRYPTION_POLICY _IOR('f', 19, struct fscrypt_policy)
+#define FS_IOC_GET_ENCRYPTION_PWSALT _IOW('f', 20, __u8[16])
+#define FS_IOC_GET_ENCRYPTION_POLICY _IOW('f', 21, struct fscrypt_policy)
+#define FS_IOC_GET_ENCRYPTION_POLICY_EX _IOWR('f', 22, __u8[9]) /* size + version */
+#define FS_IOC_ADD_ENCRYPTION_KEY _IOWR('f', 23, struct fscrypt_add_key_arg)
+#define FS_IOC_REMOVE_ENCRYPTION_KEY _IOWR('f', 24, struct fscrypt_remove_key_arg)
+#define FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS _IOWR('f', 25, struct fscrypt_remove_key_arg)
+#define FS_IOC_GET_ENCRYPTION_KEY_STATUS _IOWR('f', 26, struct fscrypt_get_key_status_arg)
+
+/**********************************************************************/
+
+/* old names; don't add anything new here! */
+#ifndef __KERNEL__
+#define FS_KEY_DESCRIPTOR_SIZE FSCRYPT_KEY_DESCRIPTOR_SIZE
+#define FS_POLICY_FLAGS_PAD_4 FSCRYPT_POLICY_FLAGS_PAD_4
+#define FS_POLICY_FLAGS_PAD_8 FSCRYPT_POLICY_FLAGS_PAD_8
+#define FS_POLICY_FLAGS_PAD_16 FSCRYPT_POLICY_FLAGS_PAD_16
+#define FS_POLICY_FLAGS_PAD_32 FSCRYPT_POLICY_FLAGS_PAD_32
+#define FS_POLICY_FLAGS_PAD_MASK FSCRYPT_POLICY_FLAGS_PAD_MASK
+#define FS_POLICY_FLAG_DIRECT_KEY FSCRYPT_POLICY_FLAG_DIRECT_KEY
+#define FS_POLICY_FLAGS_VALID FSCRYPT_POLICY_FLAGS_VALID
+#define FS_ENCRYPTION_MODE_INVALID 0 /* never used */
+#define FS_ENCRYPTION_MODE_AES_256_XTS FSCRYPT_MODE_AES_256_XTS
+#define FS_ENCRYPTION_MODE_AES_256_GCM 2 /* never used */
+#define FS_ENCRYPTION_MODE_AES_256_CBC 3 /* never used */
+#define FS_ENCRYPTION_MODE_AES_256_CTS FSCRYPT_MODE_AES_256_CTS
+#define FS_ENCRYPTION_MODE_AES_128_CBC FSCRYPT_MODE_AES_128_CBC
+#define FS_ENCRYPTION_MODE_AES_128_CTS FSCRYPT_MODE_AES_128_CTS
+#define FS_ENCRYPTION_MODE_SPECK128_256_XTS 7 /* removed */
+#define FS_ENCRYPTION_MODE_SPECK128_256_CTS 8 /* removed */
+#define FS_ENCRYPTION_MODE_ADIANTUM FSCRYPT_MODE_ADIANTUM
+#define FS_ENCRYPTION_MODE_PRIVATE FSCRYPT_MODE_PRIVATE
+#define FS_KEY_DESC_PREFIX FSCRYPT_KEY_DESC_PREFIX
+#define FS_KEY_DESC_PREFIX_SIZE FSCRYPT_KEY_DESC_PREFIX_SIZE
+#define FS_MAX_KEY_SIZE FSCRYPT_MAX_KEY_SIZE
+#endif /* !__KERNEL__ */
+
+#endif /* _UAPI_LINUX_FSCRYPT_H */
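From userspace these definitions are reached either directly through <linux/fscrypt.h> or indirectly through <linux/fs.h>. A hedged sketch of setting up a v2-encrypted directory follows: the master key is added on any open file descriptor on the filesystem, the kernel fills in the key identifier, and that identifier is then placed into a v2 policy applied to the directory. The key bytes and error handling are placeholders; note that FS_IOC_SET_ENCRYPTION_POLICY is nominally defined with the v1 struct but the kernel dispatches on the leading version byte, so a v2 struct can be passed as well.

/* Userspace sketch (not kernel code); mnt_fd and dir_fd are assumed to be
 * open descriptors on the filesystem root and on the target directory.
 */
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/fscrypt.h>

static int setup_encrypted_dir(int mnt_fd, int dir_fd)
{
	struct fscrypt_add_key_arg *arg;
	struct fscrypt_policy_v2 policy = {
		.version = FSCRYPT_POLICY_V2,
		.contents_encryption_mode = FSCRYPT_MODE_AES_256_XTS,
		.filenames_encryption_mode = FSCRYPT_MODE_AES_256_CTS,
		.flags = FSCRYPT_POLICY_FLAGS_PAD_32,
	};

	arg = calloc(1, sizeof(*arg) + FSCRYPT_MAX_KEY_SIZE);
	if (!arg)
		return -1;
	arg->key_spec.type = FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER;
	arg->raw_size = FSCRYPT_MAX_KEY_SIZE;
	/* placeholder key material; real keys must be random */
	memset(arg->raw, 0xab, FSCRYPT_MAX_KEY_SIZE);

	/* The kernel fills in key_spec.u.identifier on success. */
	if (ioctl(mnt_fd, FS_IOC_ADD_ENCRYPTION_KEY, arg) != 0)
		goto fail;

	memcpy(policy.master_key_identifier, arg->key_spec.u.identifier,
	       FSCRYPT_KEY_IDENTIFIER_SIZE);

	if (ioctl(dir_fd, FS_IOC_SET_ENCRYPTION_POLICY, &policy) != 0)
		goto fail;

	free(arg);
	return 0;
fail:
	free(arg);
	return -1;
}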
diff --git a/include/uapi/linux/msm_kgsl.h b/include/uapi/linux/msm_kgsl.h
index 7a98eed..9b9cd6e 100644
--- a/include/uapi/linux/msm_kgsl.h
+++ b/include/uapi/linux/msm_kgsl.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
/*
- * Copyright (c) 2018-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
*/
#ifndef _UAPI_MSM_KGSL_H
@@ -462,7 +462,10 @@ struct kgsl_context_property_fault {
#define KGSL_PERFCOUNTER_GROUP_CP_PWR 0x21
#define KGSL_PERFCOUNTER_GROUP_GPMU_PWR 0x22
#define KGSL_PERFCOUNTER_GROUP_ALWAYSON_PWR 0x23
-#define KGSL_PERFCOUNTER_GROUP_MAX 0x24
+#define KGSL_PERFCOUNTER_GROUP_GLC 0x24
+#define KGSL_PERFCOUNTER_GROUP_FCHE 0x25
+#define KGSL_PERFCOUNTER_GROUP_MHUB 0x26
+#define KGSL_PERFCOUNTER_GROUP_MAX 0x27
#define KGSL_PERFCOUNTER_NOT_USED 0xFFFFFFFF
#define KGSL_PERFCOUNTER_BROKEN 0xFFFFFFFE
diff --git a/include/uapi/linux/msm_npu.h b/include/uapi/linux/msm_npu.h
index bd68c53..d55f475 100644
--- a/include/uapi/linux/msm_npu.h
+++ b/include/uapi/linux/msm_npu.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0-only WITH Linux-syscall-note */
/*
- * Copyright (c) 2018-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
*/
#ifndef _UAPI_MSM_NPU_H_
@@ -87,6 +87,7 @@
#define MSM_NPU_PROP_ID_CLK_GATING_MODE (MSM_NPU_FW_PROP_ID_START + 2)
#define MSM_NPU_PROP_ID_HW_VERSION (MSM_NPU_FW_PROP_ID_START + 3)
#define MSM_NPU_PROP_ID_FW_VERSION (MSM_NPU_FW_PROP_ID_START + 4)
+#define MSM_NPU_PROP_ID_FW_GETCAPS (MSM_NPU_FW_PROP_ID_START + 5)
/* features supported by driver */
#define MSM_NPU_FEATURE_MULTI_EXECUTE 0x1
diff --git a/include/uapi/linux/v4l2-controls.h b/include/uapi/linux/v4l2-controls.h
index 0d16050..f605007 100644
--- a/include/uapi/linux/v4l2-controls.h
+++ b/include/uapi/linux/v4l2-controls.h
@@ -543,6 +543,7 @@ enum v4l2_mpeg_video_h264_hierarchical_coding_type {
};
#define V4L2_CID_MPEG_VIDEO_H264_HIERARCHICAL_CODING_LAYER (V4L2_CID_MPEG_BASE+381)
#define V4L2_CID_MPEG_VIDEO_H264_HIERARCHICAL_CODING_LAYER_QP (V4L2_CID_MPEG_BASE+382)
+#define V4L2_CID_MPEG_VIDEO_H264_CHROMA_QP_INDEX_OFFSET (V4L2_CID_MPEG_BASE+384)
#define V4L2_CID_MPEG_VIDEO_MPEG4_I_FRAME_QP (V4L2_CID_MPEG_BASE+400)
#define V4L2_CID_MPEG_VIDEO_MPEG4_P_FRAME_QP (V4L2_CID_MPEG_BASE+401)
#define V4L2_CID_MPEG_VIDEO_MPEG4_B_FRAME_QP (V4L2_CID_MPEG_BASE+402)
@@ -1002,9 +1003,6 @@ enum v4l2_mpeg_vidc_video_roi_type {
V4L2_CID_MPEG_VIDC_VIDEO_ROI_TYPE_2BYTE = 2,
};
-#define V4L2_CID_MPEG_VIDC_VENC_CHROMA_QP_OFFSET \
- (V4L2_CID_MPEG_MSM_VIDC_BASE + 132)
-
/* Camera class control IDs */
#define V4L2_CID_CAMERA_CLASS_BASE (V4L2_CTRL_CLASS_CAMERA | 0x900)
diff --git a/include/uapi/sound/compress_offload.h b/include/uapi/sound/compress_offload.h
index 493c676..8d00752 100644
--- a/include/uapi/sound/compress_offload.h
+++ b/include/uapi/sound/compress_offload.h
@@ -173,6 +173,7 @@ enum sndrv_compress_encoder {
SNDRV_COMPRESS_ENABLE_ADJUST_SESSION_CLOCK = 10,
SNDRV_COMPRESS_ADJUST_SESSION_CLOCK = 11,
SNDRV_COMPRESS_LATENCY_MODE = 12,
+ SNDRV_COMPRESS_IN_TTP_OFFSET = 13,
};
#define SNDRV_COMPRESS_MIN_BLK_SIZE SNDRV_COMPRESS_MIN_BLK_SIZE
@@ -186,6 +187,7 @@ enum sndrv_compress_encoder {
SNDRV_COMPRESS_ENABLE_ADJUST_SESSION_CLOCK
#define SNDRV_COMPRESS_ADJUST_SESSION_CLOCK SNDRV_COMPRESS_ADJUST_SESSION_CLOCK
#define SNDRV_COMPRESS_LATENCY_MODE SNDRV_COMPRESS_LATENCY_MODE
+#define SNDRV_COMPRESS_IN_TTP_OFFSET SNDRV_COMPRESS_IN_TTP_OFFSET
/**
* struct snd_compr_metadata - compressed stream metadata
diff --git a/kernel/sched/boost.c b/kernel/sched/boost.c
index 8a00649..bfdf8c2 100644
--- a/kernel/sched/boost.c
+++ b/kernel/sched/boost.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
- * Copyright (c) 2012-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2012-2020, The Linux Foundation. All rights reserved.
*/
#include "sched.h"
@@ -78,12 +78,10 @@ static void sched_full_throttle_boost_exit(void)
static void sched_conservative_boost_enter(void)
{
update_cgroup_boost_settings();
- sched_task_filter_util = sysctl_sched_min_task_util_for_boost;
}
static void sched_conservative_boost_exit(void)
{
- sched_task_filter_util = sysctl_sched_min_task_util_for_colocation;
restore_cgroup_boost_settings();
}
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index db6ad21..08c4eb0 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5043,7 +5043,7 @@ bool is_sched_lib_based_app(pid_t pid)
char *libname, *lib_list;
struct vm_area_struct *vma;
char path_buf[LIB_PATH_LENGTH];
- char tmp_lib_name[LIB_PATH_LENGTH];
+ char *tmp_lib_name;
bool found = false;
struct task_struct *p;
struct mm_struct *mm;
@@ -5051,11 +5051,16 @@ bool is_sched_lib_based_app(pid_t pid)
if (strnlen(sched_lib_name, LIB_PATH_LENGTH) == 0)
return false;
+ tmp_lib_name = kmalloc(LIB_PATH_LENGTH, GFP_KERNEL);
+ if (!tmp_lib_name)
+ return false;
+
rcu_read_lock();
p = find_process_by_pid(pid);
if (!p) {
rcu_read_unlock();
+ kfree(tmp_lib_name);
return false;
}
@@ -5093,6 +5098,7 @@ bool is_sched_lib_based_app(pid_t pid)
mmput(mm);
put_task_struct:
put_task_struct(p);
+ kfree(tmp_lib_name);
return found;
}
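The change above moves a LIB_PATH_LENGTH-sized buffer off the kernel stack and onto the heap, which means every early return after the allocation must now pair with a kfree(). A generic sketch of that pattern, with a placeholder buffer size and lookup condition that are not taken from the scheduler code:

#include <linux/slab.h>
#include <linux/string.h>

#define EXAMPLE_BUF_LEN 512	/* placeholder size */

static bool example_lookup(const char *name)
{
	char *buf;
	bool found;

	buf = kmalloc(EXAMPLE_BUF_LEN, GFP_KERNEL);
	if (!buf)
		return false;

	if (!name) {		/* early exit still frees the buffer */
		kfree(buf);
		return false;
	}

	strscpy(buf, name, EXAMPLE_BUF_LEN);
	found = strstr(buf, "libexample") != NULL;	/* placeholder check */

	kfree(buf);
	return found;
}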
diff --git a/kernel/sched/core_ctl.c b/kernel/sched/core_ctl.c
index 09a7aa5..dc4d2eb 100644
--- a/kernel/sched/core_ctl.c
+++ b/kernel/sched/core_ctl.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
- * Copyright (c) 2014-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2014-2020, The Linux Foundation. All rights reserved.
*/
#define pr_fmt(fmt) "core_ctl: " fmt
@@ -77,7 +77,6 @@ ATOMIC_NOTIFIER_HEAD(core_ctl_notifier);
static unsigned int last_nr_big;
static unsigned int get_active_cpu_count(const struct cluster_data *cluster);
-static void cpuset_next(struct cluster_data *cluster);
/* ========================= sysfs interface =========================== */
@@ -89,8 +88,7 @@ static ssize_t store_min_cpus(struct cluster_data *state,
if (sscanf(buf, "%u\n", &val) != 1)
return -EINVAL;
- state->min_cpus = min(val, state->max_cpus);
- cpuset_next(state);
+ state->min_cpus = min(val, state->num_cpus);
wake_up_core_ctl_thread(state);
return count;
@@ -111,8 +109,6 @@ static ssize_t store_max_cpus(struct cluster_data *state,
val = min(val, state->num_cpus);
state->max_cpus = val;
- state->min_cpus = min(state->min_cpus, state->max_cpus);
- cpuset_next(state);
wake_up_core_ctl_thread(state);
return count;
@@ -990,8 +986,6 @@ static void move_cpu_lru(struct cpu_data *cpu_data)
spin_unlock_irqrestore(&state_lock, flags);
}
-static void cpuset_next(struct cluster_data *cluster) { }
-
static bool should_we_isolate(int cpu, struct cluster_data *cluster)
{
return true;
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 692d3fc..f832fab 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -175,7 +175,6 @@ unsigned int sched_capacity_margin_down[NR_CPUS] = {
unsigned int sysctl_sched_min_task_util_for_boost = 51;
/* 0.68ms default for 20ms window size scaled to 1024 */
unsigned int sysctl_sched_min_task_util_for_colocation = 35;
-unsigned int sched_task_filter_util = 35;
__read_mostly unsigned int sysctl_sched_prefer_spread;
#endif
unsigned int sched_small_task_threshold = 102;
@@ -3939,17 +3938,6 @@ struct find_best_target_env {
bool strict_max;
};
-static inline bool prefer_spread_on_idle(int cpu)
-{
- if (likely(!sysctl_sched_prefer_spread))
- return false;
-
- if (is_min_capacity_cpu(cpu))
- return sysctl_sched_prefer_spread >= 1;
-
- return sysctl_sched_prefer_spread > 1;
-}
-
static inline void adjust_cpus_for_packing(struct task_struct *p,
int *target_cpu, int *best_idle_cpu,
int shallowest_idle_cstate,
@@ -10483,8 +10471,7 @@ static int load_balance(int this_cpu, struct rq *this_rq,
env.prefer_spread = (prefer_spread_on_idle(this_cpu) &&
!((sd->flags & SD_ASYM_CPUCAPACITY) &&
- !cpumask_test_cpu(this_cpu,
- &asym_cap_sibling_cpus)));
+ !is_asym_cap_cpu(this_cpu)));
cpumask_and(cpus, sched_domain_span(sd), cpu_active_mask);
@@ -11613,7 +11600,8 @@ static bool silver_has_big_tasks(void)
for_each_possible_cpu(cpu) {
if (!is_min_capacity_cpu(cpu))
break;
- if (cpu_rq(cpu)->walt_stats.nr_big_tasks)
+
+ if (walt_big_tasks(cpu))
return true;
}
@@ -11690,7 +11678,7 @@ static int idle_balance(struct rq *this_rq, struct rq_flags *rf)
if (prefer_spread && !force_lb &&
(sd->flags & SD_ASYM_CPUCAPACITY) &&
- !(cpumask_test_cpu(this_cpu, &asym_cap_sibling_cpus)))
+ !is_asym_cap_cpu(this_cpu))
avg_idle = this_rq->avg_idle;
if (avg_idle < curr_cost + sd->max_newidle_lb_cost) {
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index e6e5d09..60786da 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -927,9 +927,10 @@ static void dump_throttled_rt_tasks(struct rt_rq *rt_rq)
rt_rq, cpu_of(rq_of_rt_rq(rt_rq)));
pos += snprintf(pos, end - pos,
- "rt_period_timer: expires=%lld now=%llu period=%llu\n",
+ "rt_period_timer: expires=%lld now=%llu runtime=%llu period=%llu\n",
hrtimer_get_expires_ns(&rt_b->rt_period_timer),
- ktime_get_ns(), sched_rt_period(rt_rq));
+ ktime_get_ns(), sched_rt_runtime(rt_rq),
+ sched_rt_period(rt_rq));
if (bitmap_empty(array->bitmap, MAX_RT_PRIO))
goto out;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 979ed34..f800274 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2687,6 +2687,11 @@ extern void add_new_task_to_grp(struct task_struct *new);
#define RESTRAINED_BOOST_DISABLE -3
#define MAX_NUM_BOOST_TYPE (RESTRAINED_BOOST+1)
+static inline bool is_asym_cap_cpu(int cpu)
+{
+ return cpumask_test_cpu(cpu, &asym_cap_sibling_cpus);
+}
+
static inline int asym_cap_siblings(int cpu1, int cpu2)
{
return (cpumask_test_cpu(cpu1, &asym_cap_sibling_cpus) &&
@@ -2800,7 +2805,6 @@ static inline int same_freq_domain(int src_cpu, int dst_cpu)
#define CPU_RESERVED 1
extern enum sched_boost_policy boost_policy;
-extern unsigned int sched_task_filter_util;
static inline enum sched_boost_policy sched_boost_policy(void)
{
return boost_policy;
@@ -2926,7 +2930,7 @@ static inline enum sched_boost_policy task_boost_policy(struct task_struct *p)
* under conservative boost.
*/
if (sched_boost() == CONSERVATIVE_BOOST &&
- task_util(p) <= sched_task_filter_util)
+ task_util(p) <= sysctl_sched_min_task_util_for_boost)
policy = SCHED_BOOST_NONE;
}
@@ -2997,6 +3001,8 @@ static inline struct sched_cluster *rq_cluster(struct rq *rq)
return NULL;
}
+static inline bool is_asym_cap_cpu(int cpu) { return false; }
+
static inline int asym_cap_siblings(int cpu1, int cpu2) { return 0; }
static inline bool asym_cap_sibling_group_has_capacity(int dst_cpu, int margin)
diff --git a/kernel/sched/walt.c b/kernel/sched/walt.c
index f42c052..3ef27fc 100644
--- a/kernel/sched/walt.c
+++ b/kernel/sched/walt.c
@@ -1862,7 +1862,7 @@ static void update_history(struct rq *rq, struct task_struct *p,
p->ravg.pred_demand = pred_demand;
p->ravg.pred_demand_scaled = pred_demand_scaled;
- if (demand_scaled > sched_task_filter_util)
+ if (demand_scaled > sysctl_sched_min_task_util_for_colocation)
p->unfilter = sysctl_sched_task_unfilter_period;
else
if (p->unfilter)
diff --git a/kernel/sched/walt.h b/kernel/sched/walt.h
index 4089158..c4e3bc5 100644
--- a/kernel/sched/walt.h
+++ b/kernel/sched/walt.h
@@ -445,8 +445,24 @@ static int in_sched_bug;
} \
})
+static inline bool prefer_spread_on_idle(int cpu)
+{
+ if (likely(!sysctl_sched_prefer_spread))
+ return false;
+
+ if (is_min_capacity_cpu(cpu))
+ return sysctl_sched_prefer_spread >= 1;
+
+ return sysctl_sched_prefer_spread > 1;
+}
+
#else /* CONFIG_SCHED_WALT */
+static inline bool prefer_spread_on_idle(int cpu)
+{
+ return false;
+}
+
static inline void walt_sched_init_rq(struct rq *rq) { }
static inline void walt_rotate_work_init(void) { }
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 23f49da..8f828de 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -140,6 +140,7 @@ static int ten_thousand = 10000;
#ifdef CONFIG_PERF_EVENTS
static int six_hundred_forty_kb = 640 * 1024;
#endif
+static int max_kswapd_threads = MAX_KSWAPD_THREADS;
static int two_hundred_fifty_five = 255;
static int __maybe_unused two_hundred_million = 200000000;
@@ -1774,6 +1775,15 @@ static struct ctl_table vm_table[] = {
.extra1 = &zero,
},
{
+ .procname = "kswapd_threads",
+ .data = &kswapd_threads,
+ .maxlen = sizeof(kswapd_threads),
+ .mode = 0644,
+ .proc_handler = kswapd_threads_sysctl_handler,
+ .extra1 = &one,
+ .extra2 = &max_kswapd_threads,
+ },
+ {
.procname = "watermark_scale_factor",
.data = &watermark_scale_factor,
.maxlen = sizeof(watermark_scale_factor),
diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
index 7a9baca..75b31bc1 100644
--- a/kernel/trace/Kconfig
+++ b/kernel/trace/Kconfig
@@ -134,7 +134,6 @@
config TRACING
bool
- select DEBUG_FS
select RING_BUFFER
select STACKTRACE if STACKTRACE_SUPPORT
select TRACEPOINTS
diff --git a/mm/filemap.c b/mm/filemap.c
index 494a638..1542132 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1029,7 +1029,14 @@ static int wake_page_function(wait_queue_entry_t *wait, unsigned mode, int sync,
if (wait_page->bit_nr != key->bit_nr)
return 0;
- /* Stop walking if it's locked */
+ /*
+ * Stop walking if it's locked.
+ * Is this safe if put_and_wait_on_page_locked() is in use?
+ * Yes: the waker must hold a reference to this page, and if PG_locked
+ * has now already been set by another task, that task must also hold
+ * a reference to the *same usage* of this page; so there is no need
+ * to walk on to wake even the put_and_wait_on_page_locked() callers.
+ */
if (test_bit(key->bit_nr, &key->page->flags))
return -1;
@@ -1097,25 +1104,44 @@ static void wake_up_page(struct page *page, int bit)
wake_up_page_bit(page, bit);
}
+/*
+ * A choice of three behaviors for wait_on_page_bit_common():
+ */
+enum behavior {
+ EXCLUSIVE, /* Hold ref to page and take the bit when woken, like
+ * __lock_page() waiting on then setting PG_locked.
+ */
+ SHARED, /* Hold ref to page and check the bit when woken, like
+ * wait_on_page_writeback() waiting on PG_writeback.
+ */
+ DROP, /* Drop ref to page before wait, no check when woken,
+ * like put_and_wait_on_page_locked() on PG_locked.
+ */
+};
+
static inline int wait_on_page_bit_common(wait_queue_head_t *q,
- struct page *page, int bit_nr, int state, bool lock)
+ struct page *page, int bit_nr, int state, enum behavior behavior)
{
struct wait_page_queue wait_page;
wait_queue_entry_t *wait = &wait_page.wait;
+ bool bit_is_set;
bool thrashing = false;
+ bool delayacct = false;
unsigned long pflags;
int ret = 0;
if (bit_nr == PG_locked &&
!PageUptodate(page) && PageWorkingset(page)) {
- if (!PageSwapBacked(page))
+ if (!PageSwapBacked(page)) {
delayacct_thrashing_start();
+ delayacct = true;
+ }
psi_memstall_enter(&pflags);
thrashing = true;
}
init_wait(wait);
- wait->flags = lock ? WQ_FLAG_EXCLUSIVE : 0;
+ wait->flags = behavior == EXCLUSIVE ? WQ_FLAG_EXCLUSIVE : 0;
wait->func = wake_page_function;
wait_page.page = page;
wait_page.bit_nr = bit_nr;
@@ -1132,14 +1158,17 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q,
spin_unlock_irq(&q->lock);
- if (likely(test_bit(bit_nr, &page->flags))) {
- io_schedule();
- }
+ bit_is_set = test_bit(bit_nr, &page->flags);
+ if (behavior == DROP)
+ put_page(page);
- if (lock) {
+ if (likely(bit_is_set))
+ io_schedule();
+
+ if (behavior == EXCLUSIVE) {
if (!test_and_set_bit_lock(bit_nr, &page->flags))
break;
- } else {
+ } else if (behavior == SHARED) {
if (!test_bit(bit_nr, &page->flags))
break;
}
@@ -1148,12 +1177,23 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q,
ret = -EINTR;
break;
}
+
+ if (behavior == DROP) {
+ /*
+ * We can no longer safely access page->flags:
+ * even if CONFIG_MEMORY_HOTREMOVE is not enabled,
+ * there is a risk of waiting forever on a page reused
+ * for something that keeps it locked indefinitely.
+ * But best check for -EINTR above before breaking.
+ */
+ break;
+ }
}
finish_wait(q, wait);
if (thrashing) {
- if (!PageSwapBacked(page))
+ if (delayacct)
delayacct_thrashing_end();
psi_memstall_leave(&pflags);
}
@@ -1172,18 +1212,37 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q,
void wait_on_page_bit(struct page *page, int bit_nr)
{
wait_queue_head_t *q = page_waitqueue(page);
- wait_on_page_bit_common(q, page, bit_nr, TASK_UNINTERRUPTIBLE, false);
+ wait_on_page_bit_common(q, page, bit_nr, TASK_UNINTERRUPTIBLE, SHARED);
}
EXPORT_SYMBOL(wait_on_page_bit);
int wait_on_page_bit_killable(struct page *page, int bit_nr)
{
wait_queue_head_t *q = page_waitqueue(page);
- return wait_on_page_bit_common(q, page, bit_nr, TASK_KILLABLE, false);
+ return wait_on_page_bit_common(q, page, bit_nr, TASK_KILLABLE, SHARED);
}
EXPORT_SYMBOL(wait_on_page_bit_killable);
/**
+ * put_and_wait_on_page_locked - Drop a reference and wait for it to be unlocked
+ * @page: The page to wait for.
+ *
+ * The caller should hold a reference on @page. They expect the page to
+ * become unlocked relatively soon, but do not wish to hold up migration
+ * (for example) by holding the reference while waiting for the page to
+ * come unlocked. After this function returns, the caller should not
+ * dereference @page.
+ */
+void put_and_wait_on_page_locked(struct page *page)
+{
+ wait_queue_head_t *q;
+
+ page = compound_head(page);
+ q = page_waitqueue(page);
+ wait_on_page_bit_common(q, page, PG_locked, TASK_UNINTERRUPTIBLE, DROP);
+}
+
+/**
* add_page_wait_queue - Add an arbitrary waiter to a page's wait queue
* @page: Page defining the wait queue of interest
* @waiter: Waiter to add to the queue
@@ -1312,7 +1371,8 @@ void __lock_page(struct page *__page)
{
struct page *page = compound_head(__page);
wait_queue_head_t *q = page_waitqueue(page);
- wait_on_page_bit_common(q, page, PG_locked, TASK_UNINTERRUPTIBLE, true);
+ wait_on_page_bit_common(q, page, PG_locked, TASK_UNINTERRUPTIBLE,
+ EXCLUSIVE);
}
EXPORT_SYMBOL(__lock_page);
@@ -1320,7 +1380,8 @@ int __lock_page_killable(struct page *__page)
{
struct page *page = compound_head(__page);
wait_queue_head_t *q = page_waitqueue(page);
- return wait_on_page_bit_common(q, page, PG_locked, TASK_KILLABLE, true);
+ return wait_on_page_bit_common(q, page, PG_locked, TASK_KILLABLE,
+ EXCLUSIVE);
}
EXPORT_SYMBOL_GPL(__lock_page_killable);
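The DROP behavior exists for callers such as migration waiters that hold a page reference only to keep the page from vanishing while they sleep. The calling pattern, visible in the migrate.c and huge_memory.c hunks below, is: take a reference with get_page_unless_zero(), drop any locks, then hand the reference to put_and_wait_on_page_locked() and never touch the page again. A condensed sketch, with illustrative locking and the declaration assumed to live alongside the other page-wait helpers in linux/pagemap.h:

#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/spinlock.h>

static void example_wait_for_migration(struct page *page, spinlock_t *ptl)
{
	if (!get_page_unless_zero(page)) {
		/* Page already gone; the caller will simply fault again. */
		spin_unlock(ptl);
		return;
	}
	spin_unlock(ptl);

	/* Drops our reference before sleeping, so migration can complete. */
	put_and_wait_on_page_locked(page);

	/* Do not dereference 'page' after this point. */
}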
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2f1a179..479a070 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1536,8 +1536,7 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf, pmd_t pmd)
if (!get_page_unless_zero(page))
goto out_unlock;
spin_unlock(vmf->ptl);
- wait_on_page_locked(page);
- put_page(page);
+ put_and_wait_on_page_locked(page);
goto out;
}
@@ -1573,8 +1572,7 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf, pmd_t pmd)
if (!get_page_unless_zero(page))
goto out_unlock;
spin_unlock(vmf->ptl);
- wait_on_page_locked(page);
- put_page(page);
+ put_and_wait_on_page_locked(page);
goto out;
}
diff --git a/mm/migrate.c b/mm/migrate.c
index 57de458..7c148b7 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -325,16 +325,13 @@ void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
/*
* Once radix-tree replacement of page migration started, page_count
- * *must* be zero. And, we don't want to call wait_on_page_locked()
- * against a page without get_page().
- * So, we use get_page_unless_zero(), here. Even failed, page fault
- * will occur again.
+ * is zero; but we must not call put_and_wait_on_page_locked() without
+ * a ref. Use get_page_unless_zero(), and just fault again if it fails.
*/
if (!get_page_unless_zero(page))
goto out;
pte_unmap_unlock(ptep, ptl);
- wait_on_page_locked(page);
- put_page(page);
+ put_and_wait_on_page_locked(page);
return;
out:
pte_unmap_unlock(ptep, ptl);
@@ -368,8 +365,7 @@ void pmd_migration_entry_wait(struct mm_struct *mm, pmd_t *pmd)
if (!get_page_unless_zero(page))
goto unlock;
spin_unlock(ptl);
- wait_on_page_locked(page);
- put_page(page);
+ put_and_wait_on_page_locked(page);
return;
unlock:
spin_unlock(ptl);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 7c70c40..7de87d2 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4646,7 +4646,8 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
if (current->flags & PF_MEMALLOC)
goto nopage;
- if (fatal_signal_pending(current) && !(gfp_mask & __GFP_NOFAIL))
+ if (fatal_signal_pending(current) && !(gfp_mask & __GFP_NOFAIL) &&
+ (gfp_mask & __GFP_FS))
goto nopage;
/* Try direct reclaim and then allocating */
@@ -7492,7 +7493,7 @@ unsigned long free_reserved_area(void *start, void *end, int poison, char *s)
#ifdef CONFIG_HAVE_MEMBLOCK
memblock_dbg("memblock_free: [%#016llx-%#016llx] %pS\n",
- __pa(start), __pa(end), (void *)_RET_IP_);
+ (u64)__pa(start), (u64)__pa(end), (void *)_RET_IP_);
#endif
return pages;
@@ -7871,6 +7872,21 @@ int watermark_boost_factor_sysctl_handler(struct ctl_table *table, int write,
return 0;
}
+int kswapd_threads_sysctl_handler(struct ctl_table *table, int write,
+ void __user *buffer, size_t *length, loff_t *ppos)
+{
+ int rc;
+
+ rc = proc_dointvec_minmax(table, write, buffer, length, ppos);
+ if (rc)
+ return rc;
+
+ if (write)
+ update_kswapd_threads();
+
+ return 0;
+}
+
int watermark_scale_factor_sysctl_handler(struct ctl_table *table, int write,
void __user *buffer, size_t *length, loff_t *ppos)
{
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 91071a4..98d6a33 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -135,6 +135,13 @@ struct scan_control {
struct vm_area_struct *target_vma;
};
+/*
+ * Number of active kswapd threads
+ */
+#define DEF_KSWAPD_THREADS_PER_NODE 1
+int kswapd_threads = DEF_KSWAPD_THREADS_PER_NODE;
+int kswapd_threads_current = DEF_KSWAPD_THREADS_PER_NODE;
+
#ifdef ARCH_HAS_PREFETCH
#define prefetch_prev_lru_page(_page, _base, _field) \
do { \
@@ -1468,14 +1475,8 @@ static unsigned long shrink_page_list(struct list_head *page_list,
count_memcg_page_event(page, PGLAZYFREED);
} else if (!mapping || !__remove_mapping(mapping, page, true))
goto keep_locked;
- /*
- * At this point, we have no other references and there is
- * no way to pick any more up (removed from LRU, removed
- * from pagecache). Can use non-atomic bitops now (and
- * we obviously don't have to worry about waking up a process
- * waiting on the page lock, because there are no references.
- */
- __ClearPageLocked(page);
+
+ unlock_page(page);
free_it:
nr_reclaimed++;
@@ -4081,21 +4082,83 @@ unsigned long shrink_all_memory(unsigned long nr_to_reclaim)
restore their cpu bindings. */
static int kswapd_cpu_online(unsigned int cpu)
{
- int nid;
+ int nid, hid;
+ int nr_threads = kswapd_threads_current;
for_each_node_state(nid, N_MEMORY) {
pg_data_t *pgdat = NODE_DATA(nid);
const struct cpumask *mask;
mask = cpumask_of_node(pgdat->node_id);
-
- if (cpumask_any_and(cpu_online_mask, mask) < nr_cpu_ids)
- /* One of our CPUs online: restore mask */
- set_cpus_allowed_ptr(pgdat->kswapd, mask);
+ if (cpumask_any_and(cpu_online_mask, mask) < nr_cpu_ids) {
+ for (hid = 0; hid < nr_threads; hid++) {
+ /* One of our CPUs online: restore mask */
+ set_cpus_allowed_ptr(pgdat->kswapd[hid], mask);
+ }
+ }
}
return 0;
}
+static void update_kswapd_threads_node(int nid)
+{
+ pg_data_t *pgdat;
+ int drop, increase;
+ int last_idx, start_idx, hid;
+ int nr_threads = kswapd_threads_current;
+
+ pgdat = NODE_DATA(nid);
+ last_idx = nr_threads - 1;
+ if (kswapd_threads < nr_threads) {
+ drop = nr_threads - kswapd_threads;
+ for (hid = last_idx; hid > (last_idx - drop); hid--) {
+ if (pgdat->kswapd[hid]) {
+ kthread_stop(pgdat->kswapd[hid]);
+ pgdat->kswapd[hid] = NULL;
+ }
+ }
+ } else {
+ increase = kswapd_threads - nr_threads;
+ start_idx = last_idx + 1;
+ for (hid = start_idx; hid < (start_idx + increase); hid++) {
+ pgdat->kswapd[hid] = kthread_run(kswapd, pgdat,
+ "kswapd%d:%d", nid, hid);
+ if (IS_ERR(pgdat->kswapd[hid])) {
+ pr_err("Failed to start kswapd%d on node %d\n",
+ hid, nid);
+ pgdat->kswapd[hid] = NULL;
+ /*
+ * We are out of resources. Do not start any
+ * more threads.
+ */
+ break;
+ }
+ }
+ }
+}
+
+void update_kswapd_threads(void)
+{
+ int nid;
+
+ if (kswapd_threads_current == kswapd_threads)
+ return;
+
+ /*
+ * Hold the memory hotplug lock to avoid racing with memory
+ * hotplug-initiated updates.
+ */
+ mem_hotplug_begin();
+ for_each_node_state(nid, N_MEMORY)
+ update_kswapd_threads_node(nid);
+
+ pr_info("kswapd_thread count changed, old:%d new:%d\n",
+ kswapd_threads_current, kswapd_threads);
+ kswapd_threads_current = kswapd_threads;
+ mem_hotplug_done();
+}
+
+
/*
* This kswapd start function will be called by init and node-hot-add.
* On node-hot-add, kswapd will moved to proper cpus if cpus are hot-added.
@@ -4104,18 +4167,25 @@ int kswapd_run(int nid)
{
pg_data_t *pgdat = NODE_DATA(nid);
int ret = 0;
+ int hid, nr_threads;
- if (pgdat->kswapd)
+ if (pgdat->kswapd[0])
return 0;
- pgdat->kswapd = kthread_run(kswapd, pgdat, "kswapd%d", nid);
- if (IS_ERR(pgdat->kswapd)) {
- /* failure at boot is fatal */
- BUG_ON(system_state < SYSTEM_RUNNING);
- pr_err("Failed to start kswapd on node %d\n", nid);
- ret = PTR_ERR(pgdat->kswapd);
- pgdat->kswapd = NULL;
+ nr_threads = kswapd_threads;
+ for (hid = 0; hid < nr_threads; hid++) {
+ pgdat->kswapd[hid] = kthread_run(kswapd, pgdat, "kswapd%d:%d",
+ nid, hid);
+ if (IS_ERR(pgdat->kswapd[hid])) {
+ /* failure at boot is fatal */
+ BUG_ON(system_state < SYSTEM_RUNNING);
+ pr_err("Failed to start kswapd%d on node %d\n",
+ hid, nid);
+ ret = PTR_ERR(pgdat->kswapd[hid]);
+ pgdat->kswapd[hid] = NULL;
+ }
}
+ kswapd_threads_current = nr_threads;
return ret;
}
@@ -4125,11 +4195,16 @@ int kswapd_run(int nid)
*/
void kswapd_stop(int nid)
{
- struct task_struct *kswapd = NODE_DATA(nid)->kswapd;
+ struct task_struct *kswapd;
+ int hid;
+ int nr_threads = kswapd_threads_current;
- if (kswapd) {
- kthread_stop(kswapd);
- NODE_DATA(nid)->kswapd = NULL;
+ for (hid = 0; hid < nr_threads; hid++) {
+ kswapd = NODE_DATA(nid)->kswapd[hid];
+ if (kswapd) {
+ kthread_stop(kswapd);
+ NODE_DATA(nid)->kswapd[hid] = NULL;
+ }
}
}
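The thread count is controlled through the vm.kswapd_threads sysctl added in the kernel/sysctl.c hunk above; a write invokes kswapd_threads_sysctl_handler(), which calls update_kswapd_threads() to start or stop the per-node kswapd%d:%d threads, bounded by MAX_KSWAPD_THREADS. A small userspace sketch of raising the count at runtime; the value 4 and the error handling are illustrative.

#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/sys/vm/kswapd_threads", "w");

	if (!f) {
		perror("kswapd_threads");
		return 1;
	}
	fprintf(f, "4\n");
	fclose(f);
	return 0;
}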
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 4f6da79..ddbfa79 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -2510,7 +2510,7 @@ static inline bool tcp_need_reset(int state)
{
return (1 << state) &
(TCPF_ESTABLISHED | TCPF_CLOSE_WAIT | TCPF_FIN_WAIT1 |
- TCPF_FIN_WAIT2 | TCPF_SYN_RECV);
+ TCPF_FIN_WAIT2 | TCPF_SYN_RECV | TCPF_SYN_SENT);
}
static void tcp_rtx_queue_purge(struct sock *sk)
@@ -2572,8 +2572,7 @@ int tcp_disconnect(struct sock *sk, int flags)
*/
tcp_send_active_reset(sk, gfp_any());
sk->sk_err = ECONNRESET;
- } else if (old_state == TCP_SYN_SENT)
- sk->sk_err = ECONNRESET;
+ }
tcp_clear_xmit_timers(sk);
__skb_queue_purge(&sk->sk_receive_queue);
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 1cc20ed..b0e8970 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -2029,6 +2029,9 @@ static bool tcp_can_coalesce_send_queue_head(struct sock *sk, int len)
struct sk_buff *skb, *next;
skb = tcp_send_head(sk);
+ if (!skb)
+ return false;
+
tcp_for_write_queue_from_safe(skb, next, sk) {
if (len <= skb->len)
break;
diff --git a/security/Kconfig b/security/Kconfig
index 3d68322..e483bbc 100644
--- a/security/Kconfig
+++ b/security/Kconfig
@@ -6,10 +6,6 @@
source security/keys/Kconfig
-if ARCH_QCOM
-source security/pfe/Kconfig
-endif
-
config SECURITY_DMESG_RESTRICT
bool "Restrict unprivileged access to the kernel syslog"
default n
diff --git a/security/Makefile b/security/Makefile
index 47bffaa..4d2d378 100644
--- a/security/Makefile
+++ b/security/Makefile
@@ -10,7 +10,6 @@
subdir-$(CONFIG_SECURITY_APPARMOR) += apparmor
subdir-$(CONFIG_SECURITY_YAMA) += yama
subdir-$(CONFIG_SECURITY_LOADPIN) += loadpin
-subdir-$(CONFIG_ARCH_QCOM) += pfe
# always enable default capabilities
obj-y += commoncap.o
@@ -27,7 +26,6 @@
obj-$(CONFIG_SECURITY_YAMA) += yama/
obj-$(CONFIG_SECURITY_LOADPIN) += loadpin/
obj-$(CONFIG_CGROUP_DEVICE) += device_cgroup.o
-obj-$(CONFIG_ARCH_QCOM) += pfe/
# Object integrity file lists
subdir-$(CONFIG_INTEGRITY) += integrity
diff --git a/security/pfe/Kconfig b/security/pfe/Kconfig
deleted file mode 100644
index 47c8a03..0000000
--- a/security/pfe/Kconfig
+++ /dev/null
@@ -1,42 +0,0 @@
-# SPDX-License-Identifier: GPL-2.0-only
-menu "Qualcomm Technologies, Inc Per File Encryption security device drivers"
- depends on ARCH_QCOM
-
-config PFT
- bool "Per-File-Tagger driver"
- depends on SECURITY
- default n
- help
- This driver is used for tagging enterprise files.
- It is part of the Per-File-Encryption (PFE) feature.
- The driver is tagging files when created by
- registered application.
- Tagged files are encrypted using the dm-req-crypt driver.
-
-config PFK
- bool "Per-File-Key driver"
- depends on SECURITY
- depends on SECURITY_SELINUX
- default n
- help
- This driver is used for storing eCryptfs information
- in file node.
- This is part of eCryptfs hardware enhanced solution
- provided by Qualcomm Technologies, Inc.
- Information is used when file is encrypted later using
- ICE or dm crypto engine
-
-config PFK_WRAPPED_KEY_SUPPORTED
- bool "Per-File-Key driver with wrapped key support"
- depends on SECURITY
- depends on SECURITY_SELINUX
- depends on QSEECOM
- depends on PFK
- default n
- help
- Adds wrapped key support in PFK driver. Instead of setting
- the key directly in ICE, it unwraps the key and sets the key
- in ICE.
- It ensures the key is protected within a secure environment
- and only the wrapped key is present in the kernel.
-endmenu
diff --git a/security/pfe/Makefile b/security/pfe/Makefile
deleted file mode 100644
index 5758772..0000000
--- a/security/pfe/Makefile
+++ /dev/null
@@ -1,7 +0,0 @@
-# SPDX-License-Identifier: GPL-2.0
-ccflags-y += -Isecurity/selinux -Isecurity/selinux/include
-ccflags-y += -Ifs/crypto
-ccflags-y += -Idrivers/misc
-
-obj-$(CONFIG_PFT) += pft.o
-obj-$(CONFIG_PFK) += pfk.o pfk_kc.o pfk_ice.o pfk_ext4.o pfk_f2fs.o
diff --git a/security/pfe/pfk.c b/security/pfe/pfk.c
deleted file mode 100644
index a46c39d..0000000
--- a/security/pfe/pfk.c
+++ /dev/null
@@ -1,554 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * Copyright (c) 2015-2019, The Linux Foundation. All rights reserved.
- */
-
-/*
- * Per-File-Key (PFK).
- *
- * This driver is responsible for overall management of various
- * Per File Encryption variants that work on top of or as part of different
- * file systems.
- *
- * The driver has the following purpose :
- * 1) Define priorities between PFE's if more than one is enabled
- * 2) Extract key information from inode
- * 3) Load and manage various keys in ICE HW engine
- * 4) It should be invoked from various layers in FS/BLOCK/STORAGE DRIVER
- * that need to take decision on HW encryption management of the data
- * Some examples:
- * BLOCK LAYER: when it takes decision on whether 2 chunks can be united
- * to one encryption / decryption request sent to the HW
- *
- * UFS DRIVER: when it need to configure ICE HW with a particular key slot
- * to be used for encryption / decryption
- *
- * PFE variants can differ on particular way of storing the cryptographic info
- * inside inode, actions to be taken upon file operations, etc., but the common
- * properties are described above
- *
- */
-
-#define pr_fmt(fmt) "pfk [%s]: " fmt, __func__
-
-#include <linux/module.h>
-#include <linux/fs.h>
-#include <linux/errno.h>
-#include <linux/printk.h>
-#include <linux/bio.h>
-#include <linux/security.h>
-#include <crypto/algapi.h>
-#include <crypto/ice.h>
-
-#include <linux/pfk.h>
-
-#include "pfk_kc.h"
-#include "objsec.h"
-#include "pfk_ice.h"
-#include "pfk_ext4.h"
-#include "pfk_f2fs.h"
-#include "pfk_internal.h"
-
-static bool pfk_ready;
-
-
-/* might be replaced by a table when more than one cipher is supported */
-#define PFK_SUPPORTED_KEY_SIZE 32
-#define PFK_SUPPORTED_SALT_SIZE 32
-
-/* Various PFE types and function tables to support each one of them */
-enum pfe_type {EXT4_CRYPT_PFE, F2FS_CRYPT_PFE, INVALID_PFE};
-
-typedef int (*pfk_parse_inode_type)(const struct bio *bio,
- const struct inode *inode,
- struct pfk_key_info *key_info,
- enum ice_cryto_algo_mode *algo,
- bool *is_pfe);
-
-typedef bool (*pfk_allow_merge_bio_type)(const struct bio *bio1,
- const struct bio *bio2, const struct inode *inode1,
- const struct inode *inode2);
-
-static const pfk_parse_inode_type pfk_parse_inode_ftable[] = {
- &pfk_ext4_parse_inode, /* EXT4_CRYPT_PFE */
- &pfk_f2fs_parse_inode, /* F2FS_CRYPT_PFE */
-};
-
-static const pfk_allow_merge_bio_type pfk_allow_merge_bio_ftable[] = {
- &pfk_ext4_allow_merge_bio, /* EXT4_CRYPT_PFE */
- &pfk_f2fs_allow_merge_bio, /* F2FS_CRYPT_PFE */
-};
-
-static void __exit pfk_exit(void)
-{
- pfk_ready = false;
- pfk_ext4_deinit();
- pfk_f2fs_deinit();
- pfk_kc_deinit();
-}
-
-static int __init pfk_init(void)
-{
- int ret = 0;
-
- ret = pfk_ext4_init();
- if (ret != 0)
- goto fail;
-
- ret = pfk_f2fs_init();
- if (ret != 0)
- goto fail;
-
- pfk_ready = true;
- pr_debug("Driver initialized successfully\n");
-
- return 0;
-
-fail:
- pr_err("Failed to init driver\n");
- return -ENODEV;
-}
-
-/*
- * If more than one type is supported simultaneously, this function will also
- * set the priority between them
- */
-static enum pfe_type pfk_get_pfe_type(const struct inode *inode)
-{
- if (!inode)
- return INVALID_PFE;
-
- if (pfk_is_ext4_type(inode))
- return EXT4_CRYPT_PFE;
-
- if (pfk_is_f2fs_type(inode))
- return F2FS_CRYPT_PFE;
-
- return INVALID_PFE;
-}
-
-/**
- * inode_to_filename() - get the filename from inode pointer.
- * @inode: inode pointer
- *
- * it is used for debug prints.
- *
- * Return: filename string or "unknown".
- */
-char *inode_to_filename(const struct inode *inode)
-{
- struct dentry *dentry = NULL;
- char *filename = NULL;
-
- if (!inode)
- return "NULL";
-
- if (hlist_empty(&inode->i_dentry))
- return "unknown";
-
- dentry = hlist_entry(inode->i_dentry.first, struct dentry, d_u.d_alias);
- filename = dentry->d_iname;
-
- return filename;
-}
-
-/**
- * pfk_is_ready() - driver is initialized and ready.
- *
- * Return: true if the driver is ready.
- */
-static inline bool pfk_is_ready(void)
-{
- return pfk_ready;
-}
-
-/**
- * pfk_bio_get_inode() - get the inode from a bio.
- * @bio: Pointer to BIO structure.
- *
- * Walk the bio struct links to get the inode.
- * Please note, that in general bio may consist of several pages from
- * several files, but in our case we always assume that all pages come
- * from the same file, since our logic ensures it. That is why we only
- * walk through the first page to look for inode.
- *
- * Return: pointer to the inode struct if successful, or NULL otherwise.
- *
- */
-static struct inode *pfk_bio_get_inode(const struct bio *bio)
-{
- if (!bio)
- return NULL;
- if (!bio_has_data((struct bio *)bio))
- return NULL;
- if (!bio->bi_io_vec)
- return NULL;
- if (!bio->bi_io_vec->bv_page)
- return NULL;
-
- if (PageAnon(bio->bi_io_vec->bv_page)) {
- struct inode *inode;
-
- /* Using direct-io (O_DIRECT) without page cache */
- inode = dio_bio_get_inode((struct bio *)bio);
- pr_debug("inode on direct-io, inode = 0x%pK.\n", inode);
-
- return inode;
- }
-
- if (!page_mapping(bio->bi_io_vec->bv_page))
- return NULL;
-
- return page_mapping(bio->bi_io_vec->bv_page)->host;
-}
-
-/**
- * pfk_key_size_to_key_type() - translate key size to key size enum
- * @key_size: key size in bytes
- * @key_size_type: pointer to store the output enum (can be null)
- *
- * return 0 in case of success, error otherwise (i.e not supported key size)
- */
-int pfk_key_size_to_key_type(size_t key_size,
- enum ice_crpto_key_size *key_size_type)
-{
- /*
- * currently only 32 bit key size is supported
- * in the future, table with supported key sizes might
- * be introduced
- */
-
- if (key_size != PFK_SUPPORTED_KEY_SIZE) {
- pr_err("not supported key size %zu\n", key_size);
- return -EINVAL;
- }
-
- if (key_size_type)
- *key_size_type = ICE_CRYPTO_KEY_SIZE_256;
-
- return 0;
-}
-
-/*
- * Retrieves filesystem type from inode's superblock
- */
-bool pfe_is_inode_filesystem_type(const struct inode *inode,
- const char *fs_type)
-{
- if (!inode || !fs_type)
- return false;
-
- if (!inode->i_sb)
- return false;
-
- if (!inode->i_sb->s_type)
- return false;
-
- return (strcmp(inode->i_sb->s_type->name, fs_type) == 0);
-}
-
-/**
- * pfk_get_key_for_bio() - get the encryption key to be used for a bio
- *
- * @bio: pointer to the BIO
- * @key_info: pointer to the key information which will be filled in
- * @algo_mode: optional pointer to the algorithm identifier which will be set
- * @is_pfe: will be set to false if the BIO should be left unencrypted
- *
- * Return: 0 if a key is being used, otherwise a -errno value
- */
-static int pfk_get_key_for_bio(const struct bio *bio,
- struct pfk_key_info *key_info,
- enum ice_cryto_algo_mode *algo_mode,
- bool *is_pfe, unsigned int *data_unit)
-{
- const struct inode *inode;
- enum pfe_type which_pfe;
- const struct blk_encryption_key *key = NULL;
- char *s_type = NULL;
-
- inode = pfk_bio_get_inode(bio);
- which_pfe = pfk_get_pfe_type(inode);
- s_type = (char *)pfk_kc_get_storage_type();
-
- /*
- * Update dun based on storage type.
- * 512 byte dun - For ext4 emmc
- * 4K dun - For ext4 ufs, f2fs ufs and f2fs emmc
- */
-
- if (data_unit) {
- if (!bio_dun(bio) && !memcmp(s_type, "sdcc", strlen("sdcc")))
- *data_unit = 1 << ICE_CRYPTO_DATA_UNIT_512_B;
- else
- *data_unit = 1 << ICE_CRYPTO_DATA_UNIT_4_KB;
- }
-
- if (which_pfe != INVALID_PFE) {
- /* Encrypted file; override ->bi_crypt_key */
- pr_debug("parsing inode %lu with PFE type %d\n",
- inode->i_ino, which_pfe);
- return (*(pfk_parse_inode_ftable[which_pfe]))
- (bio, inode, key_info, algo_mode, is_pfe);
- }
-
- /*
- * bio is not for an encrypted file. Use ->bi_crypt_key if it was set.
- * Otherwise, don't encrypt/decrypt the bio.
- */
-#ifdef CONFIG_DM_DEFAULT_KEY
- key = bio->bi_crypt_key;
-#endif
- if (!key) {
- *is_pfe = false;
- return -EINVAL;
- }
-
- /* Note: the "salt" is really just the second half of the XTS key. */
- BUILD_BUG_ON(sizeof(key->raw) !=
- PFK_SUPPORTED_KEY_SIZE + PFK_SUPPORTED_SALT_SIZE);
- key_info->key = &key->raw[0];
- key_info->key_size = PFK_SUPPORTED_KEY_SIZE;
- key_info->salt = &key->raw[PFK_SUPPORTED_KEY_SIZE];
- key_info->salt_size = PFK_SUPPORTED_SALT_SIZE;
- if (algo_mode)
- *algo_mode = ICE_CRYPTO_ALGO_MODE_AES_XTS;
- return 0;
-}
-
-/**
- * pfk_load_key_start() - loads PFE encryption key to the ICE
- * Can also be invoked from non
- * PFE context, in this case it
- * is not relevant and is_pfe
- * flag is set to false
- *
- * @bio: Pointer to the BIO structure
- * @ice_setting: Pointer to ice setting structure that will be filled with
- * ice configuration values, including the index to which the key was loaded
- * @is_pfe: will be false if inode is not relevant to PFE, in such a case
- * it should be treated as non PFE by the block layer
- *
- * Returns the index where the key is stored in encryption hw and additional
- * information that will be used later for configuration of the encryption hw.
- *
- * Must be followed by pfk_load_key_end when key is no longer used by ice
- *
- */
-int pfk_load_key_start(const struct bio *bio, struct ice_device *ice_dev,
- struct ice_crypto_setting *ice_setting, bool *is_pfe,
- bool async)
-{
- int ret = 0;
- struct pfk_key_info key_info = {NULL, NULL, 0, 0};
- enum ice_cryto_algo_mode algo_mode = ICE_CRYPTO_ALGO_MODE_AES_XTS;
- enum ice_crpto_key_size key_size_type = 0;
- unsigned int data_unit = 1 << ICE_CRYPTO_DATA_UNIT_512_B;
- u32 key_index = 0;
-
- if (!is_pfe) {
- pr_err("is_pfe is NULL\n");
- return -EINVAL;
- }
-
- /*
- * only a few errors below can indicate that
- * this function was not invoked within PFE context,
- * otherwise we will consider it PFE
- */
- *is_pfe = true;
-
- if (!pfk_is_ready())
- return -ENODEV;
-
- if (!ice_setting) {
- pr_err("ice setting is NULL\n");
- return -EINVAL;
- }
-
- ret = pfk_get_key_for_bio(bio, &key_info, &algo_mode, is_pfe,
- &data_unit);
-
- if (ret != 0)
- return ret;
-
- ret = pfk_key_size_to_key_type(key_info.key_size, &key_size_type);
- if (ret != 0)
- return ret;
-
- ret = pfk_kc_load_key_start(key_info.key, key_info.key_size,
- key_info.salt, key_info.salt_size, &key_index, async,
- data_unit, ice_dev);
- if (ret) {
- if (ret != -EBUSY && ret != -EAGAIN)
- pr_err("start: could not load key into pfk key cache, error %d\n",
- ret);
-
- return ret;
- }
-
- ice_setting->key_size = key_size_type;
- ice_setting->algo_mode = algo_mode;
- /* hardcoded for now */
- ice_setting->key_mode = ICE_CRYPTO_USE_LUT_SW_KEY;
- ice_setting->key_index = key_index;
-
- pr_debug("loaded key for file %s key_index %d\n",
- inode_to_filename(pfk_bio_get_inode(bio)), key_index);
-
- return 0;
-}
-
-/**
- * pfk_load_key_end() - marks the PFE key as no longer used by ICE
- * Can also be invoked from non
- * PFE context, in this case it is not
- * relevant and is_pfe flag is
- * set to false
- *
- * @bio: Pointer to the BIO structure
- * @is_pfe: Pointer to is_pfe flag, which will be true if function was invoked
- * from PFE context
- */
-int pfk_load_key_end(const struct bio *bio, struct ice_device *ice_dev,
- bool *is_pfe)
-{
- int ret = 0;
- struct pfk_key_info key_info = {NULL, NULL, 0, 0};
-
- if (!is_pfe) {
- pr_err("is_pfe is NULL\n");
- return -EINVAL;
- }
-
- /* only a few errors below can indicate that
- * this function was not invoked within PFE context,
- * otherwise we will consider it PFE
- */
- *is_pfe = true;
-
- if (!pfk_is_ready())
- return -ENODEV;
-
- ret = pfk_get_key_for_bio(bio, &key_info, NULL, is_pfe, NULL);
- if (ret != 0)
- return ret;
-
- pfk_kc_load_key_end(key_info.key, key_info.key_size,
- key_info.salt, key_info.salt_size, ice_dev);
-
- pr_debug("finished using key for file %s\n",
- inode_to_filename(pfk_bio_get_inode(bio)));
-
- return 0;
-}
-
-/**
- * pfk_allow_merge_bio() - Check if 2 BIOs can be merged.
- * @bio1: Pointer to first BIO structure.
- * @bio2: Pointer to second BIO structure.
- *
- * Prevent merging of BIOs from encrypted and non-encrypted
- * files, or files encrypted with different key.
- * Also prevent non encrypted and encrypted data from the same file
- * to be merged (ecryptfs header if stored inside file should be non
- * encrypted)
- * This API is called by the file system block layer.
- *
- * Return: true if the BIOs allowed to be merged, false
- * otherwise.
- */
-bool pfk_allow_merge_bio(const struct bio *bio1, const struct bio *bio2)
-{
- const struct blk_encryption_key *key1 = NULL;
- const struct blk_encryption_key *key2 = NULL;
- const struct inode *inode1;
- const struct inode *inode2;
- enum pfe_type which_pfe1;
- enum pfe_type which_pfe2;
-
- if (!pfk_is_ready())
- return false;
-
- if (!bio1 || !bio2)
- return false;
-
- if (bio1 == bio2)
- return true;
-
-#ifdef CONFIG_DM_DEFAULT_KEY
- key1 = bio1->bi_crypt_key;
- key2 = bio2->bi_crypt_key;
-#endif
-
- inode1 = pfk_bio_get_inode(bio1);
- inode2 = pfk_bio_get_inode(bio2);
-
- which_pfe1 = pfk_get_pfe_type(inode1);
- which_pfe2 = pfk_get_pfe_type(inode2);
-
- /*
- * If one bio is for an encrypted file and the other is for a different
- * type of encrypted file or for blocks that are not part of an
- * encrypted file, do not merge.
- */
- if (which_pfe1 != which_pfe2)
- return false;
-
- if (which_pfe1 != INVALID_PFE) {
- /* Both bios are for the same type of encrypted file. */
- return (*(pfk_allow_merge_bio_ftable[which_pfe1]))(bio1, bio2,
- inode1, inode2);
- }
-
- /*
- * Neither bio is for an encrypted file. Merge only if the default keys
- * are the same (or both are NULL).
- */
- return key1 == key2 ||
- (key1 && key2 &&
- !crypto_memneq(key1->raw, key2->raw, sizeof(key1->raw)));
-}
-
-int pfk_fbe_clear_key(const unsigned char *key, size_t key_size,
- const unsigned char *salt, size_t salt_size)
-{
- int ret = -EINVAL;
-
- if (!key || !salt)
- return ret;
-
- ret = pfk_kc_remove_key_with_salt(key, key_size, salt, salt_size);
- if (ret)
- pr_err("Clear key error: ret value %d\n", ret);
- return ret;
-}
-
-/**
- * Flush key table on storage core reset. During core reset the key
- * configuration is lost in ICE. We need to flush the cache so that the keys
- * will be reconfigured for every subsequent transaction.
- */
-void pfk_clear_on_reset(struct ice_device *ice_dev)
-{
- if (!pfk_is_ready())
- return;
-
- pfk_kc_clear_on_reset(ice_dev);
-}
-
-int pfk_remove(struct ice_device *ice_dev)
-{
- return pfk_kc_clear(ice_dev);
-}
-
-int pfk_initialize_key_table(struct ice_device *ice_dev)
-{
- return pfk_kc_initialize_key_table(ice_dev);
-}
-
-module_init(pfk_init);
-module_exit(pfk_exit);
-
-MODULE_LICENSE("GPL v2");
-MODULE_DESCRIPTION("Per-File-Key driver");
diff --git a/security/pfe/pfk_ext4.c b/security/pfe/pfk_ext4.c
deleted file mode 100644
index 0ccd46b..0000000
--- a/security/pfe/pfk_ext4.c
+++ /dev/null
@@ -1,177 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * Copyright (c) 2015-2019, The Linux Foundation. All rights reserved.
- */
-
-/*
- * Per-File-Key (PFK) - EXT4
- *
- * This driver is used for working with the EXT4 crypt extension.
- *
- * The key information is stored in the inode by EXT4 when a file is first
- * opened and will later be accessed by the block device driver to actually
- * load the key into the encryption HW.
- *
- * PFK exposes APIs for loading and removing keys from the encryption HW,
- * and also an API to determine whether two adjacent blocks can be aggregated
- * by the block layer into one request to the encryption HW.
- *
- */
-
-#define pr_fmt(fmt) "pfk_ext4 [%s]: " fmt, __func__
-
-#include <linux/module.h>
-#include <linux/fs.h>
-#include <linux/errno.h>
-#include <linux/printk.h>
-
-#include "fscrypt_ice.h"
-#include "pfk_ext4.h"
-//#include "ext4_ice.h"
-
-static bool pfk_ext4_ready;
-
-/*
- * pfk_ext4_deinit() - Deinit function, should be invoked by upper PFK layer
- */
-void pfk_ext4_deinit(void)
-{
- pfk_ext4_ready = false;
-}
-
-/*
- * pfk_ext4_init() - Init function, should be invoked by upper PFK layer
- */
-int __init pfk_ext4_init(void)
-{
- pfk_ext4_ready = true;
- pr_info("PFK EXT4 inited successfully\n");
-
- return 0;
-}
-
-/**
- * pfk_ext4_is_ready() - driver is initialized and ready.
- *
- * Return: true if the driver is ready.
- */
-static inline bool pfk_ext4_is_ready(void)
-{
- return pfk_ext4_ready;
-}
-
-/**
- * pfk_is_ext4_type() - return true if inode belongs to ICE EXT4 PFE
- * @inode: inode pointer
- */
-bool pfk_is_ext4_type(const struct inode *inode)
-{
- if (!pfe_is_inode_filesystem_type(inode, "ext4"))
- return false;
-
- return fscrypt_should_be_processed_by_ice(inode);
-}
-
-/**
- * pfk_ext4_parse_cipher() - parse cipher from inode to enum
- * @inode: inode
- * @algo: pointer to store the output enum (can be null)
- *
- * return 0 in case of success, error otherwise (i.e. unsupported cipher)
- */
-static int pfk_ext4_parse_cipher(const struct inode *inode,
- enum ice_cryto_algo_mode *algo)
-{
- /*
-	 * currently only the AES-XTS algorithm is supported;
-	 * in the future, a table of supported ciphers might
-	 * be introduced
- */
-
- if (!inode)
- return -EINVAL;
-
- if (!fscrypt_is_aes_xts_cipher(inode)) {
- pr_err("ext4 alghoritm is not supported by pfk\n");
- return -EINVAL;
- }
-
- if (algo)
- *algo = ICE_CRYPTO_ALGO_MODE_AES_XTS;
-
- return 0;
-}
-
-int pfk_ext4_parse_inode(const struct bio *bio,
- const struct inode *inode,
- struct pfk_key_info *key_info,
- enum ice_cryto_algo_mode *algo,
- bool *is_pfe)
-{
- int ret = 0;
-
- if (!is_pfe)
- return -EINVAL;
-
- /*
-	 * only a few of the errors below indicate that
-	 * this function was not invoked from a PFE context;
-	 * otherwise we consider it PFE
- */
- *is_pfe = true;
-
- if (!pfk_ext4_is_ready())
- return -ENODEV;
-
- if (!inode)
- return -EINVAL;
-
- if (!key_info)
- return -EINVAL;
-
- key_info->key = fscrypt_get_ice_encryption_key(inode);
- if (!key_info->key) {
- pr_err("could not parse key from ext4\n");
- return -EINVAL;
- }
-
- key_info->key_size = fscrypt_get_ice_encryption_key_size(inode);
- if (!key_info->key_size) {
- pr_err("could not parse key size from ext4\n");
- return -EINVAL;
- }
-
- key_info->salt = fscrypt_get_ice_encryption_salt(inode);
- if (!key_info->salt) {
- pr_err("could not parse salt from ext4\n");
- return -EINVAL;
- }
-
- key_info->salt_size = fscrypt_get_ice_encryption_salt_size(inode);
- if (!key_info->salt_size) {
- pr_err("could not parse salt size from ext4\n");
- return -EINVAL;
- }
-
- ret = pfk_ext4_parse_cipher(inode, algo);
- if (ret != 0) {
- pr_err("not supported cipher\n");
- return ret;
- }
-
- return 0;
-}
-
-bool pfk_ext4_allow_merge_bio(const struct bio *bio1,
- const struct bio *bio2, const struct inode *inode1,
- const struct inode *inode2)
-{
- /* if there is no ext4 pfk, don't disallow merging blocks */
- if (!pfk_ext4_is_ready())
- return true;
-
- if (!inode1 || !inode2)
- return false;
-
- return fscrypt_is_ice_encryption_info_equal(inode1, inode2);
-}
diff --git a/security/pfe/pfk_ext4.h b/security/pfe/pfk_ext4.h
deleted file mode 100644
index bca23f3..0000000
--- a/security/pfe/pfk_ext4.h
+++ /dev/null
@@ -1,30 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Copyright (c) 2015-2019, The Linux Foundation. All rights reserved.
- */
-
-#ifndef _PFK_EXT4_H_
-#define _PFK_EXT4_H_
-
-#include <linux/types.h>
-#include <linux/fs.h>
-#include <crypto/ice.h>
-#include "pfk_internal.h"
-
-bool pfk_is_ext4_type(const struct inode *inode);
-
-int pfk_ext4_parse_inode(const struct bio *bio,
- const struct inode *inode,
- struct pfk_key_info *key_info,
- enum ice_cryto_algo_mode *algo,
- bool *is_pfe);
-
-bool pfk_ext4_allow_merge_bio(const struct bio *bio1,
- const struct bio *bio2, const struct inode *inode1,
- const struct inode *inode2);
-
-int __init pfk_ext4_init(void);
-
-void pfk_ext4_deinit(void);
-
-#endif /* _PFK_EXT4_H_ */
diff --git a/security/pfe/pfk_f2fs.c b/security/pfe/pfk_f2fs.c
deleted file mode 100644
index 5ea79ace..0000000
--- a/security/pfe/pfk_f2fs.c
+++ /dev/null
@@ -1,188 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * Copyright (c) 2015-2019, The Linux Foundation. All rights reserved.
- */
-
-/*
- * Per-File-Key (PFK) - f2fs
- *
- * This driver is used for working with the EXT4/F2FS crypt extension.
- *
- * The key information is stored in the inode by EXT4/F2FS when a file is
- * first opened and will later be accessed by the block device driver to
- * actually load the key into the encryption HW.
- *
- * PFK exposes APIs for loading and removing keys from the encryption HW,
- * and also an API to determine whether two adjacent blocks can be aggregated
- * by the block layer into one request to the encryption HW.
- *
- */
-
-#define pr_fmt(fmt) "pfk_f2fs [%s]: " fmt, __func__
-
-#include <linux/module.h>
-#include <linux/fs.h>
-#include <linux/errno.h>
-#include <linux/printk.h>
-
-#include "fscrypt_ice.h"
-#include "pfk_f2fs.h"
-
-static bool pfk_f2fs_ready;
-
-/*
- * pfk_f2fs_deinit() - Deinit function, should be invoked by upper PFK layer
- */
-void pfk_f2fs_deinit(void)
-{
- pfk_f2fs_ready = false;
-}
-
-/*
- * pfk_f2fs_init() - Init function, should be invoked by upper PFK layer
- */
-int __init pfk_f2fs_init(void)
-{
- pfk_f2fs_ready = true;
- pr_info("PFK F2FS inited successfully\n");
-
- return 0;
-}
-
-/**
- * pfk_f2fs_is_ready() - driver is initialized and ready.
- *
- * Return: true if the driver is ready.
- */
-static inline bool pfk_f2fs_is_ready(void)
-{
- return pfk_f2fs_ready;
-}
-
-/**
- * pfk_is_f2fs_type() - return true if inode belongs to ICE F2FS PFE
- * @inode: inode pointer
- */
-bool pfk_is_f2fs_type(const struct inode *inode)
-{
- if (!pfe_is_inode_filesystem_type(inode, "f2fs"))
- return false;
-
- return fscrypt_should_be_processed_by_ice(inode);
-}
-
-/**
- * pfk_f2fs_parse_cipher() - parse cipher from inode to enum
- * @inode: inode
- * @algo: pointer to store the output enum (can be null)
- *
- * return 0 in case of success, error otherwise (i.e. unsupported cipher)
- */
-static int pfk_f2fs_parse_cipher(const struct inode *inode,
- enum ice_cryto_algo_mode *algo)
-{
- /*
-	 * currently only the AES-XTS algorithm is supported;
-	 * in the future, a table of supported ciphers might
-	 * be introduced
- */
- if (!inode)
- return -EINVAL;
-
- if (!fscrypt_is_aes_xts_cipher(inode)) {
- pr_err("f2fs alghoritm is not supported by pfk\n");
- return -EINVAL;
- }
-
- if (algo)
- *algo = ICE_CRYPTO_ALGO_MODE_AES_XTS;
-
- return 0;
-}
-
-int pfk_f2fs_parse_inode(const struct bio *bio,
- const struct inode *inode,
- struct pfk_key_info *key_info,
- enum ice_cryto_algo_mode *algo,
- bool *is_pfe)
-{
- int ret = 0;
-
- if (!is_pfe)
- return -EINVAL;
-
- /*
-	 * only a few of the errors below indicate that
-	 * this function was not invoked from a PFE context;
-	 * otherwise we consider it PFE
- */
- *is_pfe = true;
-
- if (!pfk_f2fs_is_ready())
- return -ENODEV;
-
- if (!inode)
- return -EINVAL;
-
- if (!key_info)
- return -EINVAL;
-
- key_info->key = fscrypt_get_ice_encryption_key(inode);
- if (!key_info->key) {
- pr_err("could not parse key from f2fs\n");
- return -EINVAL;
- }
-
- key_info->key_size = fscrypt_get_ice_encryption_key_size(inode);
- if (!key_info->key_size) {
- pr_err("could not parse key size from f2fs\n");
- return -EINVAL;
- }
-
- key_info->salt = fscrypt_get_ice_encryption_salt(inode);
- if (!key_info->salt) {
- pr_err("could not parse salt from f2fs\n");
- return -EINVAL;
- }
-
- key_info->salt_size = fscrypt_get_ice_encryption_salt_size(inode);
- if (!key_info->salt_size) {
- pr_err("could not parse salt size from f2fs\n");
- return -EINVAL;
- }
-
- ret = pfk_f2fs_parse_cipher(inode, algo);
- if (ret != 0) {
- pr_err("not supported cipher\n");
- return ret;
- }
-
- return 0;
-}
-
-bool pfk_f2fs_allow_merge_bio(const struct bio *bio1,
- const struct bio *bio2, const struct inode *inode1,
- const struct inode *inode2)
-{
- bool mergeable;
-
- /* if there is no f2fs pfk, don't disallow merging blocks */
- if (!pfk_f2fs_is_ready())
- return true;
-
- if (!inode1 || !inode2)
- return false;
-
- mergeable = fscrypt_is_ice_encryption_info_equal(inode1, inode2);
- if (!mergeable)
- return false;
-
-
- /* ICE allows only consecutive iv_key stream. */
- if (!bio_dun(bio1) && !bio_dun(bio2))
- return true;
- else if (!bio_dun(bio1) || !bio_dun(bio2))
- return false;
-
- return bio_end_dun(bio1) == bio_dun(bio2);
-}
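
The last check above encodes the ICE constraint that merged bios must form a contiguous data-unit-number (DUN) stream: either neither bio carries a DUN, or the first bio's DUN range must end exactly where the second begins. A stand-alone C sketch of that decision follows; the toy_bio struct and helper name are illustrative (the removed code uses bio_dun()/bio_end_dun() on struct bio).

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* A toy bio: a starting DUN (0 means "no DUN") and a length in data units. */
struct toy_bio {
	uint64_t dun;
	uint64_t nr_units;
};

/* Merge is allowed if neither bio carries a DUN, or if the first bio's
 * DUN range ends exactly where the second one begins.
 */
static bool dun_contiguous(const struct toy_bio *b1, const struct toy_bio *b2)
{
	if (!b1->dun && !b2->dun)
		return true;
	if (!b1->dun || !b2->dun)
		return false;
	return b1->dun + b1->nr_units == b2->dun;
}

int main(void)
{
	struct toy_bio a = { .dun = 100, .nr_units = 8 };
	struct toy_bio b = { .dun = 108, .nr_units = 4 };
	struct toy_bio c = { .dun = 120, .nr_units = 4 };

	printf("a+b mergeable: %d\n", dun_contiguous(&a, &b)); /* prints 1 */
	printf("a+c mergeable: %d\n", dun_contiguous(&a, &c)); /* prints 0 */
	return 0;
}
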
diff --git a/security/pfe/pfk_f2fs.h b/security/pfe/pfk_f2fs.h
deleted file mode 100644
index 3c6f7ec..0000000
--- a/security/pfe/pfk_f2fs.h
+++ /dev/null
@@ -1,30 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Copyright (c) 2015-2019, The Linux Foundation. All rights reserved.
- */
-
-#ifndef _PFK_F2FS_H_
-#define _PFK_F2FS_H_
-
-#include <linux/types.h>
-#include <linux/fs.h>
-#include <crypto/ice.h>
-#include "pfk_internal.h"
-
-bool pfk_is_f2fs_type(const struct inode *inode);
-
-int pfk_f2fs_parse_inode(const struct bio *bio,
- const struct inode *inode,
- struct pfk_key_info *key_info,
- enum ice_cryto_algo_mode *algo,
- bool *is_pfe);
-
-bool pfk_f2fs_allow_merge_bio(const struct bio *bio1,
- const struct bio *bio2, const struct inode *inode1,
- const struct inode *inode2);
-
-int __init pfk_f2fs_init(void);
-
-void pfk_f2fs_deinit(void);
-
-#endif /* _PFK_F2FS_H_ */
diff --git a/security/pfe/pfk_ice.c b/security/pfe/pfk_ice.c
deleted file mode 100644
index 8d3928f9..0000000
--- a/security/pfe/pfk_ice.c
+++ /dev/null
@@ -1,205 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * Copyright (c) 2015-2019, The Linux Foundation. All rights reserved.
- */
-
-#include <linux/module.h>
-#include <linux/init.h>
-#include <linux/errno.h>
-#include <linux/io.h>
-#include <linux/interrupt.h>
-#include <linux/delay.h>
-#include <linux/async.h>
-#include <linux/mm.h>
-#include <linux/of.h>
-#include <linux/device-mapper.h>
-#include <soc/qcom/scm.h>
-#include <soc/qcom/qseecomi.h>
-#include <soc/qcom/qtee_shmbridge.h>
-#include <crypto/ice.h>
-#include "pfk_ice.h"
-
-/**********************************/
-/** global definitions **/
-/**********************************/
-
-#define TZ_ES_CONFIG_SET_ICE_KEY_CE_TYPE 0x5
-#define TZ_ES_INVALIDATE_ICE_KEY_CE_TYPE 0x6
-
-/* indexes 0 and 1 are reserved for FDE */
-#define MIN_ICE_KEY_INDEX 2
-
-#define MAX_ICE_KEY_INDEX 31
-
-#define TZ_ES_CONFIG_SET_ICE_KEY_CE_TYPE_ID \
- TZ_SYSCALL_CREATE_SMC_ID(TZ_OWNER_SIP, TZ_SVC_ES, \
- TZ_ES_CONFIG_SET_ICE_KEY_CE_TYPE)
-
-#define TZ_ES_INVALIDATE_ICE_KEY_CE_TYPE_ID \
- TZ_SYSCALL_CREATE_SMC_ID(TZ_OWNER_SIP, \
- TZ_SVC_ES, TZ_ES_INVALIDATE_ICE_KEY_CE_TYPE)
-
-#define TZ_ES_INVALIDATE_ICE_KEY_CE_TYPE_PARAM_ID \
- TZ_SYSCALL_CREATE_PARAM_ID_2( \
- TZ_SYSCALL_PARAM_TYPE_VAL, TZ_SYSCALL_PARAM_TYPE_VAL)
-
-#define TZ_ES_CONFIG_SET_ICE_KEY_CE_TYPE_PARAM_ID \
- TZ_SYSCALL_CREATE_PARAM_ID_6( \
- TZ_SYSCALL_PARAM_TYPE_VAL, \
- TZ_SYSCALL_PARAM_TYPE_BUF_RW, TZ_SYSCALL_PARAM_TYPE_VAL, \
- TZ_SYSCALL_PARAM_TYPE_VAL, TZ_SYSCALL_PARAM_TYPE_VAL, \
- TZ_SYSCALL_PARAM_TYPE_VAL)
-
-#define CONTEXT_SIZE 0x1000
-
-#define ICE_BUFFER_SIZE 64
-
-#define PFK_UFS "ufs"
-#define PFK_SDCC "sdcc"
-#define PFK_UFS_CARD "ufscard"
-
-#define UFS_CE 10
-#define SDCC_CE 20
-#define UFS_CARD_CE 30
-
-enum {
- ICE_CIPHER_MODE_XTS_128 = 0,
- ICE_CIPHER_MODE_CBC_128 = 1,
- ICE_CIPHER_MODE_XTS_256 = 3,
- ICE_CIPHER_MODE_CBC_256 = 4
-};
-
-static int set_key(uint32_t index, const uint8_t *key, const uint8_t *salt,
- unsigned int data_unit, struct ice_device *ice_dev)
-{
- struct scm_desc desc = {0};
- int ret = 0;
- uint32_t smc_id = 0;
- char *tzbuf = NULL;
- uint32_t key_size = ICE_BUFFER_SIZE / 2;
- struct qtee_shm shm;
-
- ret = qtee_shmbridge_allocate_shm(ICE_BUFFER_SIZE, &shm);
- if (ret)
- return -ENOMEM;
-
- tzbuf = shm.vaddr;
-
- memcpy(tzbuf, key, key_size);
- memcpy(tzbuf+key_size, salt, key_size);
- dmac_flush_range(tzbuf, tzbuf + ICE_BUFFER_SIZE);
-
- smc_id = TZ_ES_CONFIG_SET_ICE_KEY_CE_TYPE_ID;
-
- desc.arginfo = TZ_ES_CONFIG_SET_ICE_KEY_CE_TYPE_PARAM_ID;
- desc.args[0] = index;
- desc.args[1] = shm.paddr;
- desc.args[2] = shm.size;
- desc.args[3] = ICE_CIPHER_MODE_XTS_256;
- desc.args[4] = data_unit;
-
- if (!strcmp(ice_dev->ice_instance_type, (char *)PFK_UFS_CARD))
- desc.args[5] = UFS_CARD_CE;
- else if (!strcmp(ice_dev->ice_instance_type, (char *)PFK_SDCC))
- desc.args[5] = SDCC_CE;
- else if (!strcmp(ice_dev->ice_instance_type, (char *)PFK_UFS))
- desc.args[5] = UFS_CE;
-
- ret = scm_call2_noretry(smc_id, &desc);
- if (ret)
- pr_err("%s:SCM call Error: 0x%x\n", __func__, ret);
-
- qtee_shmbridge_free_shm(&shm);
- return ret;
-}
-
-static int clear_key(uint32_t index, struct ice_device *ice_dev)
-{
- struct scm_desc desc = {0};
- int ret = 0;
- uint32_t smc_id = 0;
-
- smc_id = TZ_ES_INVALIDATE_ICE_KEY_CE_TYPE_ID;
-
- desc.arginfo = TZ_ES_INVALIDATE_ICE_KEY_CE_TYPE_PARAM_ID;
- desc.args[0] = index;
-
- if (!strcmp(ice_dev->ice_instance_type, (char *)PFK_UFS_CARD))
- desc.args[1] = UFS_CARD_CE;
- else if (!strcmp(ice_dev->ice_instance_type, (char *)PFK_SDCC))
- desc.args[1] = SDCC_CE;
- else if (!strcmp(ice_dev->ice_instance_type, (char *)PFK_UFS))
- desc.args[1] = UFS_CE;
-
- ret = scm_call2_noretry(smc_id, &desc);
- if (ret)
- pr_err("%s:SCM call Error: 0x%x\n", __func__, ret);
- return ret;
-}
-
-int qti_pfk_ice_set_key(uint32_t index, uint8_t *key, uint8_t *salt,
- struct ice_device *ice_dev, unsigned int data_unit)
-{
- int ret = 0, ret1 = 0;
-
- if (index < MIN_ICE_KEY_INDEX || index > MAX_ICE_KEY_INDEX) {
- pr_err("%s Invalid index %d\n", __func__, index);
- return -EINVAL;
- }
- if (!key || !salt) {
- pr_err("%s Invalid key/salt\n", __func__);
- return -EINVAL;
- }
-
- ret = enable_ice_setup(ice_dev);
- if (ret) {
- pr_err("%s: could not enable clocks: %d\n", __func__, ret);
- goto out;
- }
-
- ret = set_key(index, key, salt, data_unit, ice_dev);
- if (ret) {
- pr_err("%s: Set Key Error: %d\n", __func__, ret);
- if (ret == -EBUSY) {
- if (disable_ice_setup(ice_dev))
- pr_err("%s: clock disable failed\n", __func__);
- goto out;
- }
- /* Try to invalidate the key to keep ICE in proper state */
- ret1 = clear_key(index, ice_dev);
- if (ret1)
- pr_err("%s: Invalidate key error: %d\n", __func__, ret);
- }
-
- ret1 = disable_ice_setup(ice_dev);
-	if (ret1)
-		pr_err("%s: Error %d disabling clocks\n", __func__, ret1);
-
-out:
- return ret;
-}
-
-int qti_pfk_ice_invalidate_key(uint32_t index, struct ice_device *ice_dev)
-{
- int ret = 0;
-
- if (index < MIN_ICE_KEY_INDEX || index > MAX_ICE_KEY_INDEX) {
- pr_err("%s Invalid index %d\n", __func__, index);
- return -EINVAL;
- }
-
- ret = enable_ice_setup(ice_dev);
- if (ret) {
- pr_err("%s: could not enable clocks: 0x%x\n", __func__, ret);
- return ret;
- }
-
- ret = clear_key(index, ice_dev);
- if (ret)
- pr_err("%s: Invalidate key error: %d\n", __func__, ret);
-
- if (disable_ice_setup(ice_dev))
- pr_err("%s: could not disable clocks\n", __func__);
-
- return ret;
-}
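
set_key() above copies the key into the first half of a 64-byte buffer and the salt into the second half before handing the buffer to the secure world. A minimal stand-alone C illustration of that packing; the buffer size constant and helper name here are illustrative, not the driver's definitions.

#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define BUF_SIZE 64
#define HALF (BUF_SIZE / 2)

/* Pack a 32-byte key and a 32-byte salt into one contiguous buffer,
 * key first, salt second, as expected by the key-programming call.
 */
static void pack_key_salt(uint8_t *buf, const uint8_t *key, const uint8_t *salt)
{
	memcpy(buf, key, HALF);
	memcpy(buf + HALF, salt, HALF);
}

int main(void)
{
	uint8_t key[HALF], salt[HALF], buf[BUF_SIZE];

	memset(key, 0xAA, sizeof(key));
	memset(salt, 0x55, sizeof(salt));
	pack_key_salt(buf, key, salt);
	printf("buf[0]=%#x buf[32]=%#x\n", buf[0], buf[HALF]);
	return 0;
}
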
diff --git a/security/pfe/pfk_ice.h b/security/pfe/pfk_ice.h
deleted file mode 100644
index 527fb61..0000000
--- a/security/pfe/pfk_ice.h
+++ /dev/null
@@ -1,23 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Copyright (c) 2018-2019, The Linux Foundation. All rights reserved.
- */
-
-#ifndef PFK_ICE_H_
-#define PFK_ICE_H_
-
-/*
- * PFK ICE
- *
- * ICE keys configuration through scm calls.
- *
- */
-
-#include <linux/types.h>
-#include <crypto/ice.h>
-
-int qti_pfk_ice_set_key(uint32_t index, uint8_t *key, uint8_t *salt,
- struct ice_device *ice_dev, unsigned int data_unit);
-int qti_pfk_ice_invalidate_key(uint32_t index, struct ice_device *ice_dev);
-
-#endif /* PFK_ICE_H_ */
diff --git a/security/pfe/pfk_internal.h b/security/pfe/pfk_internal.h
deleted file mode 100644
index 7a800d3..0000000
--- a/security/pfe/pfk_internal.h
+++ /dev/null
@@ -1,27 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Copyright (c) 2015-2019, The Linux Foundation. All rights reserved.
- */
-
-#ifndef _PFK_INTERNAL_H_
-#define _PFK_INTERNAL_H_
-
-#include <linux/types.h>
-#include <crypto/ice.h>
-
-struct pfk_key_info {
- const unsigned char *key;
- const unsigned char *salt;
- size_t key_size;
- size_t salt_size;
-};
-
-int pfk_key_size_to_key_type(size_t key_size,
- enum ice_crpto_key_size *key_size_type);
-
-bool pfe_is_inode_filesystem_type(const struct inode *inode,
- const char *fs_type);
-
-char *inode_to_filename(const struct inode *inode);
-
-#endif /* _PFK_INTERNAL_H_ */
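
pfk_key_size_to_key_type(), declared above, maps a raw key length onto an ICE key-size enum. Below is a stand-alone C sketch of that kind of mapping; the enum values and supported sizes here are illustrative, not the ICE driver's definitions (the real code uses enum ice_crpto_key_size).

#include <stdio.h>
#include <stddef.h>
#include <errno.h>

/* Illustrative key-size classes standing in for the ICE enum. */
enum key_size_type { KEY_SIZE_16B, KEY_SIZE_32B };

/* Translate a raw key length in bytes into a key-size class. */
static int key_size_to_type(size_t key_size, enum key_size_type *type)
{
	switch (key_size) {
	case 16:
		*type = KEY_SIZE_16B;
		return 0;
	case 32:
		*type = KEY_SIZE_32B;
		return 0;
	default:
		return -EINVAL;	/* unsupported key size */
	}
}

int main(void)
{
	enum key_size_type t;

	if (!key_size_to_type(32, &t))
		printf("32-byte key maps to class %d\n", t);
	return 0;
}
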
diff --git a/security/pfe/pfk_kc.c b/security/pfe/pfk_kc.c
deleted file mode 100644
index 5a0a557..0000000
--- a/security/pfe/pfk_kc.c
+++ /dev/null
@@ -1,870 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * Copyright (c) 2015-2019, The Linux Foundation. All rights reserved.
- */
-
-/*
- * PFK Key Cache
- *
- * Key Cache used internally in PFK.
- * The purpose of the cache is to save access time to QSEE when loading keys.
- * Currently the cache is the same size as the total number of keys that can
- * be loaded to ICE. Since this number is relatively small, the algorithms for
- * cache eviction are simple, linear and based on the last-usage timestamp,
- * i.e. the entry that will be evicted is the one with the oldest timestamp.
- * Empty entries always have the oldest timestamp.
- */
-
-#include <linux/module.h>
-#include <linux/mutex.h>
-#include <linux/spinlock.h>
-#include <linux/errno.h>
-#include <linux/string.h>
-#include <linux/jiffies.h>
-#include <linux/slab.h>
-#include <linux/printk.h>
-#include <linux/sched/signal.h>
-
-#include "pfk_kc.h"
-#include "pfk_ice.h"
-
-
-/** the first available index in ice engine */
-#define PFK_KC_STARTING_INDEX 2
-
-/** currently the only supported key and salt sizes */
-#define PFK_KC_KEY_SIZE 32
-#define PFK_KC_SALT_SIZE 32
-
-/** Table size */
-#define PFK_KC_TABLE_SIZE ((32) - (PFK_KC_STARTING_INDEX))
-
-/** The maximum key and salt size */
-#define PFK_MAX_KEY_SIZE PFK_KC_KEY_SIZE
-#define PFK_MAX_SALT_SIZE PFK_KC_SALT_SIZE
-#define PFK_UFS "ufs"
-#define PFK_UFS_CARD "ufscard"
-
-static DEFINE_SPINLOCK(kc_lock);
-static unsigned long flags;
-static bool kc_ready;
-static char *s_type = "sdcc";
-
-/**
- * enum pfk_kc_entry_state - state of the entry inside kc table
- *
- * @FREE: entry is free
- * @ACTIVE_ICE_PRELOAD: entry is actively used by ICE engine
- and cannot be used by others. SCM call
- to load key to ICE is pending to be performed
- * @ACTIVE_ICE_LOADED: entry is actively used by ICE engine and
- cannot be used by others. SCM call to load the
- key to ICE was successfully executed and key is
- now loaded
- * @INACTIVE_INVALIDATING: entry is being invalidated during file close
- and cannot be used by others until invalidation
- is complete
- * @INACTIVE: entry's key is already loaded, but is not
- currently being used. It can be re-used for
- optimization and to avoid SCM call cost or
- it can be taken by another key if there are
- no FREE entries
- * @SCM_ERROR: error occurred while scm call was performed to
- load the key to ICE
- */
-enum pfk_kc_entry_state {
- FREE,
- ACTIVE_ICE_PRELOAD,
- ACTIVE_ICE_LOADED,
- INACTIVE_INVALIDATING,
- INACTIVE,
- SCM_ERROR
-};
-
-struct kc_entry {
- unsigned char key[PFK_MAX_KEY_SIZE];
- size_t key_size;
-
- unsigned char salt[PFK_MAX_SALT_SIZE];
- size_t salt_size;
-
- u64 time_stamp;
- u32 key_index;
-
- struct task_struct *thread_pending;
-
- enum pfk_kc_entry_state state;
-
- /* ref count for the number of requests in the HW queue for this key */
- int loaded_ref_cnt;
- int scm_error;
-};
-
-/**
- * kc_is_ready() - driver is initialized and ready.
- *
- * Return: true if the key cache is ready.
- */
-static inline bool kc_is_ready(void)
-{
- return kc_ready;
-}
-
-static inline void kc_spin_lock(void)
-{
- spin_lock_irqsave(&kc_lock, flags);
-}
-
-static inline void kc_spin_unlock(void)
-{
- spin_unlock_irqrestore(&kc_lock, flags);
-}
-
-/**
- * pfk_kc_get_storage_type() - return the hardware storage type.
- *
- * Return: storage type queried during bootup.
- */
-const char *pfk_kc_get_storage_type(void)
-{
- return s_type;
-}
-
-/**
- * kc_entry_is_available() - checks whether the entry is available
- *
- * Return true if it is, false otherwise (or if the entry is invalid)
- * Should be invoked under spinlock
- */
-static bool kc_entry_is_available(const struct kc_entry *entry)
-{
- if (!entry)
- return false;
-
- return (entry->state == FREE || entry->state == INACTIVE);
-}
-
-/**
- * kc_entry_wait_till_available() - waits till entry is available
- *
- * Returns 0 in case of success or -ERESTARTSYS if the wait was interrupted
- * by a signal
- *
- * Should be invoked under spinlock
- */
-static int kc_entry_wait_till_available(struct kc_entry *entry)
-{
- int res = 0;
-
- while (!kc_entry_is_available(entry)) {
- set_current_state(TASK_INTERRUPTIBLE);
- if (signal_pending(current)) {
- res = -ERESTARTSYS;
- break;
- }
- /* assuming only one thread can try to invalidate
- * the same entry
- */
- entry->thread_pending = current;
- kc_spin_unlock();
- schedule();
- kc_spin_lock();
- }
- set_current_state(TASK_RUNNING);
-
- return res;
-}
-
-/**
- * kc_entry_start_invalidating() - moves entry to state
- * INACTIVE_INVALIDATING
- * If entry is in use, waits till
- * it gets available
- * @entry: pointer to entry
- *
- * Return 0 in case of success, otherwise error
- * Should be invoked under spinlock
- */
-static int kc_entry_start_invalidating(struct kc_entry *entry)
-{
- int res;
-
- res = kc_entry_wait_till_available(entry);
- if (res)
- return res;
-
- entry->state = INACTIVE_INVALIDATING;
-
- return 0;
-}
-
-/**
- * kc_entry_finish_invalidating() - moves entry to state FREE
- * wakes up all the tasks waiting
- * on it
- *
- * @entry: pointer to entry
- *
- * Should be invoked under spinlock
- */
-static void kc_entry_finish_invalidating(struct kc_entry *entry)
-{
- if (!entry)
- return;
-
- if (entry->state != INACTIVE_INVALIDATING)
- return;
-
- entry->state = FREE;
-}
-
-/**
- * kc_min_entry() - compare two entries to find one with minimal time
- * @a: ptr to the first entry. If NULL the other entry will be returned
- * @b: pointer to the second entry
- *
- * Return the entry whose timestamp is the minimal, or b if a is NULL
- */
-static inline struct kc_entry *kc_min_entry(struct kc_entry *a,
- struct kc_entry *b)
-{
- if (!a)
- return b;
-
- if (time_before64(b->time_stamp, a->time_stamp))
- return b;
-
- return a;
-}
-
-/**
- * kc_entry_at_index() - return entry at specific index
- * @index: index of entry to be accessed
- *
- * Return entry
- * Should be invoked under spinlock
- */
-static struct kc_entry *kc_entry_at_index(int index,
- struct ice_device *ice_dev)
-{
- return (struct kc_entry *)(ice_dev->key_table) + index;
-}
-
-/**
- * kc_find_key_at_index() - find kc entry starting at specific index
- * @key: key to look for
- * @key_size: the key size
- * @salt: salt to look for
- * @salt_size: the salt size
- * @starting_index: index to start search with; if entry found, updated with
- * index of that entry
- *
- * Return entry or NULL in case of error
- * Should be invoked under spinlock
- */
-static struct kc_entry *kc_find_key_at_index(const unsigned char *key,
- size_t key_size, const unsigned char *salt, size_t salt_size,
- struct ice_device *ice_dev, int *starting_index)
-{
- struct kc_entry *entry = NULL;
- int i = 0;
-
- for (i = *starting_index; i < PFK_KC_TABLE_SIZE; i++) {
- entry = kc_entry_at_index(i, ice_dev);
-
- if (salt != NULL) {
- if (entry->salt_size != salt_size)
- continue;
-
- if (memcmp(entry->salt, salt, salt_size) != 0)
- continue;
- }
-
- if (entry->key_size != key_size)
- continue;
-
- if (memcmp(entry->key, key, key_size) == 0) {
- *starting_index = i;
- return entry;
- }
- }
-
- return NULL;
-}
-
-/**
- * kc_find_key() - find kc entry
- * @key: key to look for
- * @key_size: the key size
- * @salt: salt to look for
- * @salt_size: the salt size
- *
- * Return entry or NULL in case of error
- * Should be invoked under spinlock
- */
-static struct kc_entry *kc_find_key(const unsigned char *key, size_t key_size,
- const unsigned char *salt, size_t salt_size,
- struct ice_device *ice_dev)
-{
- int index = 0;
-
- return kc_find_key_at_index(key, key_size, salt, salt_size,
- ice_dev, &index);
-}
-
-/**
- * kc_find_oldest_entry_non_locked() - finds the entry with minimal timestamp
- * that is not locked
- *
- * Returns entry with minimal timestamp. Empty entries have timestamp
- * of 0, therefore they are returned first.
- * If all the entries are locked, will return NULL
- * Should be invoked under spin lock
- */
-static struct kc_entry *kc_find_oldest_entry_non_locked(
- struct ice_device *ice_dev)
-{
- struct kc_entry *curr_min_entry = NULL;
- struct kc_entry *entry = NULL;
- int i = 0;
-
- for (i = 0; i < PFK_KC_TABLE_SIZE; i++) {
- entry = kc_entry_at_index(i, ice_dev);
-
- if (entry->state == FREE)
- return entry;
-
- if (entry->state == INACTIVE)
- curr_min_entry = kc_min_entry(curr_min_entry, entry);
- }
-
- return curr_min_entry;
-}
-
-/**
- * kc_update_timestamp() - updates timestamp of entry to current
- *
- * @entry: entry to update
- *
- */
-static void kc_update_timestamp(struct kc_entry *entry)
-{
- if (!entry)
- return;
-
- entry->time_stamp = get_jiffies_64();
-}
-
-/**
- * kc_clear_entry() - clear the key from entry and mark entry not in use
- *
- * @entry: pointer to entry
- *
- * Should be invoked under spinlock
- */
-static void kc_clear_entry(struct kc_entry *entry)
-{
- if (!entry)
- return;
-
- memset(entry->key, 0, entry->key_size);
- memset(entry->salt, 0, entry->salt_size);
-
- entry->key_size = 0;
- entry->salt_size = 0;
-
- entry->time_stamp = 0;
- entry->scm_error = 0;
-
- entry->state = FREE;
-
- entry->loaded_ref_cnt = 0;
- entry->thread_pending = NULL;
-}
-
-/**
- * kc_update_entry() - replaces the key in given entry and
- * loads the new key to ICE
- *
- * @entry: entry to replace key in
- * @key: key
- * @key_size: key_size
- * @salt: salt
- * @salt_size: salt_size
- * @data_unit: dun size
- *
- * The previous key is securely released and wiped, the new one is loaded
- * to ICE.
- * Should be invoked under spinlock
- * Caller to validate that key/salt_size matches the size in struct kc_entry
- */
-static int kc_update_entry(struct kc_entry *entry, const unsigned char *key,
- size_t key_size, const unsigned char *salt, size_t salt_size,
- unsigned int data_unit, struct ice_device *ice_dev)
-{
- int ret;
-
- kc_clear_entry(entry);
-
- memcpy(entry->key, key, key_size);
- entry->key_size = key_size;
-
- memcpy(entry->salt, salt, salt_size);
- entry->salt_size = salt_size;
-
- /* Mark entry as no longer free before releasing the lock */
- entry->state = ACTIVE_ICE_PRELOAD;
- kc_spin_unlock();
-
- ret = qti_pfk_ice_set_key(entry->key_index, entry->key,
- entry->salt, ice_dev, data_unit);
-
- kc_spin_lock();
- return ret;
-}
-
-/**
- * pfk_kc_init() - init function
- *
- * Return 0 in case of success, error otherwise
- */
-static int pfk_kc_init(struct ice_device *ice_dev)
-{
- int i = 0;
- struct kc_entry *entry = NULL;
-
- kc_spin_lock();
- for (i = 0; i < PFK_KC_TABLE_SIZE; i++) {
- entry = kc_entry_at_index(i, ice_dev);
- entry->key_index = PFK_KC_STARTING_INDEX + i;
- }
- kc_ready = true;
- kc_spin_unlock();
-
- return 0;
-}
-
-/**
- * pfk_kc_deinit() - deinit function
- *
- * Return 0 in case of success, error otherwise
- */
-int pfk_kc_deinit(void)
-{
- kc_ready = false;
-
- return 0;
-}
-
-/**
- * pfk_kc_load_key_start() - retrieve the key from cache or add it if
- * it's not there and return the ICE hw key index in @key_index.
- * @key: pointer to the key
- * @key_size: the size of the key
- * @salt: pointer to the salt
- * @salt_size: the size of the salt
- * @key_index: the pointer to key_index where the output will be stored
- * @async: whether scm calls are allowed in the caller context
- * @data_unit: dun (data unit) size for the key
- * @ice_dev: ICE device the key should be loaded into
- *
- * If the key is present in the cache, then key_index will be retrieved from
- * the cache. If it is not present, the oldest entry from the kc table will be
- * evicted, and the key will be loaded into ICE via QSEE at the index of the
- * evicted entry and stored in the cache.
- * The entry that is going to be used is marked as in use; it will be marked
- * as not in use when ICE finishes using it and pfk_kc_load_key_end
- * is invoked.
- * As QSEE calls can only be done from a non-atomic context, an @async flag
- * set to 'false' specifies that it is ok to make the calls in the
- * current context. Otherwise, when @async is set, the caller should retry the
- * call from a different context, and an -EAGAIN error will be returned.
- *
- * Return 0 in case of success, error otherwise
- */
-int pfk_kc_load_key_start(const unsigned char *key, size_t key_size,
- const unsigned char *salt, size_t salt_size, u32 *key_index,
- bool async, unsigned int data_unit, struct ice_device *ice_dev)
-{
- int ret = 0;
- struct kc_entry *entry = NULL;
- bool entry_exists = false;
-
- if (!kc_is_ready())
- return -ENODEV;
-
- if (!key || !salt || !key_index) {
- pr_err("%s key/salt/key_index NULL\n", __func__);
- return -EINVAL;
- }
-
- if (key_size != PFK_KC_KEY_SIZE) {
- pr_err("unsupported key size %zu\n", key_size);
- return -EINVAL;
- }
-
- if (salt_size != PFK_KC_SALT_SIZE) {
- pr_err("unsupported salt size %zu\n", salt_size);
- return -EINVAL;
- }
-
- kc_spin_lock();
-
- entry = kc_find_key(key, key_size, salt, salt_size, ice_dev);
- if (!entry) {
- if (async) {
- pr_debug("%s task will populate entry\n", __func__);
- kc_spin_unlock();
- return -EAGAIN;
- }
-
- entry = kc_find_oldest_entry_non_locked(ice_dev);
- if (!entry) {
- /* could not find a single non locked entry,
- * return EBUSY to upper layers so that the
- * request will be rescheduled
- */
- kc_spin_unlock();
- return -EBUSY;
- }
- } else {
- entry_exists = true;
- }
-
- pr_debug("entry with index %d is in state %d\n",
- entry->key_index, entry->state);
-
- switch (entry->state) {
- case (INACTIVE):
- if (entry_exists) {
- kc_update_timestamp(entry);
- entry->state = ACTIVE_ICE_LOADED;
-
- if (!strcmp(ice_dev->ice_instance_type,
- (char *)PFK_UFS) ||
- !strcmp(ice_dev->ice_instance_type,
- (char *)PFK_UFS_CARD)) {
- if (async)
- entry->loaded_ref_cnt++;
- } else {
- entry->loaded_ref_cnt++;
- }
- break;
- }
- case (FREE):
- ret = kc_update_entry(entry, key, key_size, salt, salt_size,
- data_unit, ice_dev);
- if (ret) {
- entry->state = SCM_ERROR;
- entry->scm_error = ret;
- pr_err("%s: key load error (%d)\n", __func__, ret);
- } else {
- kc_update_timestamp(entry);
- entry->state = ACTIVE_ICE_LOADED;
-
- /*
- * In case of UFS only increase ref cnt for async calls,
- * sync calls from within work thread do not pass
- * requests further to HW
- */
- if (!strcmp(ice_dev->ice_instance_type,
- (char *)PFK_UFS) ||
- !strcmp(ice_dev->ice_instance_type,
- (char *)PFK_UFS_CARD)) {
- if (async)
- entry->loaded_ref_cnt++;
- } else {
- entry->loaded_ref_cnt++;
- }
- }
- break;
- case (ACTIVE_ICE_PRELOAD):
- case (INACTIVE_INVALIDATING):
- ret = -EAGAIN;
- break;
- case (ACTIVE_ICE_LOADED):
- kc_update_timestamp(entry);
-
- if (!strcmp(ice_dev->ice_instance_type, (char *)PFK_UFS) ||
- !strcmp(ice_dev->ice_instance_type,
- (char *)PFK_UFS_CARD)) {
- if (async)
- entry->loaded_ref_cnt++;
- } else {
- entry->loaded_ref_cnt++;
- }
- break;
- case(SCM_ERROR):
- ret = entry->scm_error;
- kc_clear_entry(entry);
- entry->state = FREE;
- break;
- default:
- pr_err("invalid state %d for entry with key index %d\n",
- entry->state, entry->key_index);
- ret = -EINVAL;
- }
-
- *key_index = entry->key_index;
- kc_spin_unlock();
-
- return ret;
-}
-
-/**
- * pfk_kc_load_key_end() - finish the process of key loading that was started
- *			   by pfk_kc_load_key_start, by marking the entry as
- *			   no longer in use
- * @key: pointer to the key
- * @key_size: the size of the key
- * @salt: pointer to the salt
- * @salt_size: the size of the salt
- * @ice_dev: ICE device the key was loaded into
- *
- */
-void pfk_kc_load_key_end(const unsigned char *key, size_t key_size,
- const unsigned char *salt, size_t salt_size,
- struct ice_device *ice_dev)
-{
- struct kc_entry *entry = NULL;
- struct task_struct *tmp_pending = NULL;
- int ref_cnt = 0;
-
- if (!kc_is_ready())
- return;
-
- if (!key || !salt)
- return;
-
- if (key_size != PFK_KC_KEY_SIZE)
- return;
-
- if (salt_size != PFK_KC_SALT_SIZE)
- return;
-
- kc_spin_lock();
-
- entry = kc_find_key(key, key_size, salt, salt_size, ice_dev);
- if (!entry) {
- kc_spin_unlock();
- pr_err("internal error, there should an entry to unlock\n");
-
- return;
- }
- ref_cnt = --entry->loaded_ref_cnt;
-
- if (ref_cnt < 0)
- pr_err("internal error, ref count should never be negative\n");
-
- if (!ref_cnt) {
- entry->state = INACTIVE;
- /*
- * wake-up invalidation if it's waiting
- * for the entry to be released
- */
- if (entry->thread_pending) {
- tmp_pending = entry->thread_pending;
- entry->thread_pending = NULL;
-
- kc_spin_unlock();
- wake_up_process(tmp_pending);
- return;
- }
- }
-
- kc_spin_unlock();
-}
-
-/**
- * pfk_kc_remove_key_with_salt() - remove the key and salt from cache
- * and from ICE engine.
- * @key: pointer to the key
- * @key_size: the size of the key
- * @salt: pointer to the salt
- * @salt_size: the size of the salt
- *
- * Return 0 in case of success, error otherwise (also in the case of a
- * non-existing key)
- */
-int pfk_kc_remove_key_with_salt(const unsigned char *key, size_t key_size,
- const unsigned char *salt, size_t salt_size)
-{
- struct kc_entry *entry = NULL;
- struct list_head *ice_dev_list = NULL;
- struct ice_device *ice_dev;
- int res = 0;
-
- if (!kc_is_ready())
- return -ENODEV;
-
- if (!key)
- return -EINVAL;
-
- if (!salt)
- return -EINVAL;
-
- if (key_size != PFK_KC_KEY_SIZE)
- return -EINVAL;
-
- if (salt_size != PFK_KC_SALT_SIZE)
- return -EINVAL;
-
- kc_spin_lock();
-
- ice_dev_list = get_ice_dev_list();
-	if (!ice_dev_list) {
-		pr_err("%s: Did not find ICE device head\n", __func__);
-		kc_spin_unlock();
-		return -ENODEV;
-	}
- list_for_each_entry(ice_dev, ice_dev_list, list) {
- entry = kc_find_key(key, key_size, salt, salt_size, ice_dev);
- if (entry) {
- pr_debug("%s: Found entry for ice_dev number %d\n",
- __func__, ice_dev->device_no);
-
- break;
- }
- pr_debug("%s: Can't find entry for ice_dev number %d\n",
- __func__, ice_dev->device_no);
- }
-
- if (!entry) {
- pr_debug("%s: Cannot find entry for any ice device\n",
- __func__);
- kc_spin_unlock();
- return -EINVAL;
- }
-
- res = kc_entry_start_invalidating(entry);
- if (res != 0) {
- kc_spin_unlock();
- return res;
- }
- kc_clear_entry(entry);
-
- kc_spin_unlock();
-
- qti_pfk_ice_invalidate_key(entry->key_index, ice_dev);
-
- kc_spin_lock();
- kc_entry_finish_invalidating(entry);
- kc_spin_unlock();
-
- return 0;
-}
-
-/**
- * pfk_kc_clear() - clear the table and remove all keys from ICE
- *
- * Return 0 on success, error otherwise
- *
- */
-int pfk_kc_clear(struct ice_device *ice_dev)
-{
- struct kc_entry *entry = NULL;
- int i = 0;
- int res = 0;
-
- if (!kc_is_ready())
- return -ENODEV;
-
- kc_spin_lock();
- for (i = 0; i < PFK_KC_TABLE_SIZE; i++) {
- entry = kc_entry_at_index(i, ice_dev);
- res = kc_entry_start_invalidating(entry);
- if (res != 0) {
- kc_spin_unlock();
- goto out;
- }
- kc_clear_entry(entry);
- }
- kc_spin_unlock();
-
- for (i = 0; i < PFK_KC_TABLE_SIZE; i++)
- qti_pfk_ice_invalidate_key(
- kc_entry_at_index(i, ice_dev)->key_index, ice_dev);
-
- /* fall through */
- res = 0;
-out:
- kc_spin_lock();
- for (i = 0; i < PFK_KC_TABLE_SIZE; i++)
- kc_entry_finish_invalidating(kc_entry_at_index(i, ice_dev));
- kc_spin_unlock();
-
- return res;
-}
-
-/**
- * pfk_kc_clear_on_reset() - clear the key cache table on storage core reset.
- * The assumption is that at this point we don't have any pending transactions.
- * Also, there is no need to clear keys from ICE, since the core reset has
- * already invalidated them.
- *
- */
-void pfk_kc_clear_on_reset(struct ice_device *ice_dev)
-{
- struct kc_entry *entry = NULL;
- int i = 0;
-
- if (!kc_is_ready())
- return;
-
- kc_spin_lock();
- for (i = 0; i < PFK_KC_TABLE_SIZE; i++) {
- entry = kc_entry_at_index(i, ice_dev);
- kc_clear_entry(entry);
- }
- kc_spin_unlock();
-}
-
-static int pfk_kc_find_storage_type(char **device)
-{
- char boot[20] = {'\0'};
- char *match = (char *)strnstr(saved_command_line,
- "androidboot.bootdevice=",
- strlen(saved_command_line));
- if (match) {
- memcpy(boot, (match + strlen("androidboot.bootdevice=")),
- sizeof(boot) - 1);
- if (strnstr(boot, PFK_UFS, strlen(boot)))
- *device = PFK_UFS;
-
- return 0;
- }
- return -EINVAL;
-}
-
-int pfk_kc_initialize_key_table(struct ice_device *ice_dev)
-{
- int res = 0;
- struct kc_entry *kc_table;
-
- kc_table = kzalloc(PFK_KC_TABLE_SIZE*sizeof(struct kc_entry),
- GFP_KERNEL);
- if (!kc_table) {
- res = -ENOMEM;
- pr_err("%s: Error %d allocating memory for key table\n",
- __func__, res);
- }
- ice_dev->key_table = kc_table;
- pfk_kc_init(ice_dev);
-
- return res;
-}
-
-static int __init pfk_kc_pre_init(void)
-{
- return pfk_kc_find_storage_type(&s_type);
-}
-
-static void __exit pfk_kc_exit(void)
-{
- s_type = NULL;
-}
-
-module_init(pfk_kc_pre_init);
-module_exit(pfk_kc_exit);
-
-MODULE_LICENSE("GPL v2");
-MODULE_DESCRIPTION("Per-File-Key-KC driver");
diff --git a/security/pfe/pfk_kc.h b/security/pfe/pfk_kc.h
deleted file mode 100644
index cc89827..0000000
--- a/security/pfe/pfk_kc.h
+++ /dev/null
@@ -1,29 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Copyright (c) 2015-2019, The Linux Foundation. All rights reserved.
- */
-
-#ifndef PFK_KC_H_
-#define PFK_KC_H_
-
-#include <linux/types.h>
-#include <crypto/ice.h>
-
-
-int pfk_kc_deinit(void);
-int pfk_kc_load_key_start(const unsigned char *key, size_t key_size,
- const unsigned char *salt, size_t salt_size, u32 *key_index,
- bool async, unsigned int data_unit, struct ice_device *ice_dev);
-void pfk_kc_load_key_end(const unsigned char *key, size_t key_size,
- const unsigned char *salt, size_t salt_size,
- struct ice_device *ice_dev);
-int pfk_kc_remove_key_with_salt(const unsigned char *key, size_t key_size,
- const unsigned char *salt, size_t salt_size);
-int pfk_kc_clear(struct ice_device *ice_dev);
-void pfk_kc_clear_on_reset(struct ice_device *ice_dev);
-int pfk_kc_initialize_key_table(struct ice_device *ice_dev);
-const char *pfk_kc_get_storage_type(void);
-extern char *saved_command_line;
-
-
-#endif /* PFK_KC_H_ */
diff --git a/security/security.c b/security/security.c
index 1e8151d..1baf585 100644
--- a/security/security.c
+++ b/security/security.c
@@ -623,14 +623,6 @@ int security_inode_create(struct inode *dir, struct dentry *dentry, umode_t mode
}
EXPORT_SYMBOL_GPL(security_inode_create);
-int security_inode_post_create(struct inode *dir, struct dentry *dentry,
- umode_t mode)
-{
- if (unlikely(IS_PRIVATE(dir)))
- return 0;
- return call_int_hook(inode_post_create, 0, dir, dentry, mode);
-}
-
int security_inode_link(struct dentry *old_dentry, struct inode *dir,
struct dentry *new_dentry)
{
diff --git a/security/selinux/include/classmap.h b/security/selinux/include/classmap.h
index f97e7c1..06e836b 100644
--- a/security/selinux/include/classmap.h
+++ b/security/selinux/include/classmap.h
@@ -115,7 +115,7 @@ struct security_class_mapping secclass_map[] = {
{ COMMON_IPC_PERMS, NULL } },
{ "netlink_route_socket",
{ COMMON_SOCK_PERMS,
- "nlmsg_read", "nlmsg_write", NULL } },
+ "nlmsg_read", "nlmsg_write", "nlmsg_readpriv", NULL } },
{ "netlink_tcpdiag_socket",
{ COMMON_SOCK_PERMS,
"nlmsg_read", "nlmsg_write", NULL } },
diff --git a/security/selinux/include/objsec.h b/security/selinux/include/objsec.h
index 17901da..25b69dc 100644
--- a/security/selinux/include/objsec.h
+++ b/security/selinux/include/objsec.h
@@ -26,7 +26,8 @@
#include <linux/in.h>
#include <linux/spinlock.h>
#include <net/net_namespace.h>
-#include "security.h"
+#include "flask.h"
+#include "avc.h"
struct task_security_struct {
u32 osid; /* SID prior to last execve */
@@ -63,8 +64,6 @@ struct inode_security_struct {
u32 sid; /* SID of this object */
u16 sclass; /* security class of this object */
unsigned char initialized; /* initialization flag */
- u32 tag; /* Per-File-Encryption tag */
- void *pfk_data; /* Per-File-Key data from ecryptfs */
spinlock_t lock;
};
diff --git a/security/selinux/include/security.h b/security/selinux/include/security.h
index a7beab0..f068ee1 100644
--- a/security/selinux/include/security.h
+++ b/security/selinux/include/security.h
@@ -15,6 +15,7 @@
#include <linux/types.h>
#include <linux/refcount.h>
#include <linux/workqueue.h>
+#include "flask.h"
#define SECSID_NULL 0x00000000 /* unspecified SID */
#define SECSID_WILD 0xffffffff /* wildcard SID */
@@ -103,6 +104,7 @@ struct selinux_state {
bool checkreqprot;
bool initialized;
bool policycap[__POLICYDB_CAPABILITY_MAX];
+ bool android_netlink_route;
struct selinux_avc *avc;
struct selinux_ss *ss;
};
@@ -175,6 +177,13 @@ static inline bool selinux_policycap_nnp_nosuid_transition(void)
return state->policycap[POLICYDB_CAPABILITY_NNP_NOSUID_TRANSITION];
}
+static inline bool selinux_android_nlroute_getlink(void)
+{
+ struct selinux_state *state = &selinux_state;
+
+ return state->android_netlink_route;
+}
+
int security_mls_enabled(struct selinux_state *state);
int security_load_policy(struct selinux_state *state,
void *data, size_t len);
@@ -390,5 +399,6 @@ extern void avtab_cache_init(void);
extern void ebitmap_cache_init(void);
extern void hashtab_cache_init(void);
extern int security_sidtab_hash_stats(struct selinux_state *state, char *page);
+extern void selinux_nlmsg_init(void);
#endif /* _SELINUX_SECURITY_H_ */
diff --git a/security/selinux/nlmsgtab.c b/security/selinux/nlmsgtab.c
index 9cec812..5c42997 100644
--- a/security/selinux/nlmsgtab.c
+++ b/security/selinux/nlmsgtab.c
@@ -28,7 +28,7 @@ struct nlmsg_perm {
u32 perm;
};
-static const struct nlmsg_perm nlmsg_route_perms[] =
+static struct nlmsg_perm nlmsg_route_perms[] =
{
{ RTM_NEWLINK, NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
{ RTM_DELLINK, NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
@@ -206,3 +206,27 @@ int selinux_nlmsg_lookup(u16 sclass, u16 nlmsg_type, u32 *perm)
return err;
}
+
+static void nlmsg_set_getlink_perm(u32 perm)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(nlmsg_route_perms); i++) {
+ if (nlmsg_route_perms[i].nlmsg_type == RTM_GETLINK) {
+ nlmsg_route_perms[i].perm = perm;
+ break;
+ }
+ }
+}
+
+/**
+ * Use nlmsg_readpriv as the permission for RTM_GETLINK messages if the
+ * netlink_route_getlink policy capability is set. Otherwise use nlmsg_read.
+ */
+void selinux_nlmsg_init(void)
+{
+ if (selinux_android_nlroute_getlink())
+ nlmsg_set_getlink_perm(NETLINK_ROUTE_SOCKET__NLMSG_READPRIV);
+ else
+ nlmsg_set_getlink_perm(NETLINK_ROUTE_SOCKET__NLMSG_READ);
+}
diff --git a/security/selinux/ss/policydb.c b/security/selinux/ss/policydb.c
index 92a182febb..9fc103c 100644
--- a/security/selinux/ss/policydb.c
+++ b/security/selinux/ss/policydb.c
@@ -2400,6 +2400,10 @@ int policydb_read(struct policydb *p, void *fp)
p->reject_unknown = !!(le32_to_cpu(buf[1]) & REJECT_UNKNOWN);
p->allow_unknown = !!(le32_to_cpu(buf[1]) & ALLOW_UNKNOWN);
+ if ((le32_to_cpu(buf[1]) & POLICYDB_CONFIG_ANDROID_NETLINK_ROUTE)) {
+ p->android_netlink_route = 1;
+ }
+
if (p->policyvers >= POLICYDB_VERSION_POLCAP) {
rc = ebitmap_read(&p->policycaps, fp);
if (rc)
diff --git a/security/selinux/ss/policydb.h b/security/selinux/ss/policydb.h
index 215f8f3..dbb0ed5 100644
--- a/security/selinux/ss/policydb.h
+++ b/security/selinux/ss/policydb.h
@@ -238,6 +238,7 @@ struct genfs {
/* The policy database */
struct policydb {
int mls_enabled;
+ int android_netlink_route;
/* symbol tables */
struct symtab symtab[SYM_NUM];
@@ -324,6 +325,7 @@ extern int policydb_write(struct policydb *p, void *fp);
#define PERM_SYMTAB_SIZE 32
#define POLICYDB_CONFIG_MLS 1
+#define POLICYDB_CONFIG_ANDROID_NETLINK_ROUTE (1 << 31)
/* the config flags related to unknown classes/perms are bits 2 and 3 */
#define REJECT_UNKNOWN 0x00000002
diff --git a/security/selinux/ss/services.c b/security/selinux/ss/services.c
index f722bad..9a13c78 100644
--- a/security/selinux/ss/services.c
+++ b/security/selinux/ss/services.c
@@ -2107,6 +2107,9 @@ static void security_load_policycaps(struct selinux_state *state)
pr_info("SELinux: unknown policy capability %u\n",
i);
}
+
+ state->android_netlink_route = p->android_netlink_route;
+ selinux_nlmsg_init();
}
static int security_preserve_bools(struct selinux_state *state,